Solving Linear Diophantine Matrix Equations Using the Smith Normal Form (More or Less)


Raymond N. Greenwell and Stanley Kertzner
Department of Mathematics, Hofstra University, Hempstead, NY

May 16, 2009

Abstract. Using a modification of the Smith normal form of a matrix, we give necessary and sufficient conditions for the integer matrix equation AX = B to have an integer solution, and we provide an explicit formula for that solution. We provide an algorithm for generating the solution, and we also show how the solution can be generated with Maple.

AMS Subject Classification: 15A06, 15A24, 15A36

Key Words and Phrases: matrix equations, linear systems, Smith normal form, diophantine equations, integer solutions

1 Introduction

Solving a system of linear equations with integer coefficients is of both theoretical and practical importance. This is why the topic is taught not only to mathematics majors, but also to majors in business, engineering, and other disciplines. Finding solutions is equivalent to solving the matrix equation AX = B. The matrix B is often a single column vector, but it need not be so. When the unknowns represent numbers of objects, the solution must be an integer matrix with nonnegative entries. Thus the primary problem of finding all integer solutions is of intrinsic theoretical interest. The substance of this paper is to resolve the primary problem. Thus we give necessary and sufficient

conditions for the integer matrix equation AX = B to have an integer solution, where A is m × n, B is m × p, and X is n × p, and we provide an explicit formula for that solution.

Example 1. The entries in each column of the matrix A below give the units of fiber, protein, and fat in a container of brands J, K, L, and M of feed for Vietnamese pot-bellied pigs. A breeder would like to know how many containers of each brand of feed he should use to end up with a mixture that contains the units of fiber, protein, and fat given by each column of matrix B below. Column 1 of B gives the nutrient requirements for a male, and column 2 for a female.

[Display of A (rows: fiber, protein, fat; columns: brands J, K, L, M) and B (rows: fiber, protein, fat; columns: male, female); the numerical entries were lost in transcription.]

This problem boils down to finding all nonnegative integer solutions to the system AX = B. Such a system can be solved by the Gauss-Jordan method. But in an underdetermined system such as Example 1, with two free variables, it is not clear whether or not integer solutions exist, or what they are if they do exist. The reader may wish to try finding all integer solutions to Example 1 using Gauss-Jordan. Our procedure, which gives those solutions explicitly, is similar to those of Lazebnik [5], Newman [6], and Ward [10], but has the advantage of giving a simplified formula for the solutions.

2 An Old Theorem and a New One

The procedure is based on a modification of the following theorem found in Jacobson (see [4], p. 79), which originated with a paper of Smith [9].

Theorem 1. If A is any integer matrix, there exist invertible integer matrices P and Q, whose inverses are also integer matrices, such that PAQ = D, where D is a diagonal matrix of integers with the property that D = diag{d_11, d_22, ..., d_rr, 0, ..., 0}, with d_ii a factor of d_jj for i < j, and with d_ii ≠ 0 for i ≤ r.

The matrix D is known as the Smith normal form of the matrix A.

Returning to our problem, for given matrices A and B of respective sizes m × n and m × p, let P, Q, and D be as in Theorem 1. Assume for the moment that r < m and r < n; we'll later consider the cases in which one or both of these inequalities fail. Define D' = diag{d_11, d_22, ..., d_rr}, and define the (m − r) × p integer matrix U by writing PB = [(PB)'; U] (the semicolon separates stacked blocks), where (PB)' is the matrix consisting of the first r rows of PB. Then we have the following theorem.

Theorem 2. AX = B for some integer matrix X if and only if
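Theorem 1 is constructive, and the construction can be sketched in a few dozen lines of code. The following pure-Python routine is a minimal illustration (our own function and variable names, not the authors' implementation): it applies only the three elementary integer operations, accumulates them into P and Q, and returns P, Q, and a diagonal D with the divisibility property.

```python
# A minimal Smith-normal-form sketch.  Only the three elementary integer
# operations are used (swap, negate, add an integer multiple), each applied
# simultaneously to D and to P (row ops) or Q (column ops), so that the
# invariant P * A * Q = D holds throughout.

def smith_normal_form(A):
    m, n = len(A), len(A[0])
    D = [row[:] for row in A]
    P = [[int(i == j) for j in range(m)] for i in range(m)]
    Q = [[int(i == j) for j in range(n)] for i in range(n)]

    def row_op(i, j, c):            # row i <- row i + c * row j (in D and P)
        for M in (D, P):
            M[i] = [a + c * b for a, b in zip(M[i], M[j])]

    def col_op(i, j, c):            # col i <- col i + c * col j (in D and Q)
        for M in (D, Q):
            for row in M:
                row[i] += c * row[j]

    def swap_rows(i, j):
        D[i], D[j] = D[j], D[i]
        P[i], P[j] = P[j], P[i]

    def swap_cols(i, j):
        for M in (D, Q):
            for row in M:
                row[i], row[j] = row[j], row[i]

    for k in range(min(m, n)):
        if not any(D[i][j] for i in range(k, m) for j in range(k, n)):
            break
        while True:
            # move the smallest nonzero entry of the block to the pivot
            i, j = min(((i, j) for i in range(k, m) for j in range(k, n)
                        if D[i][j]), key=lambda t: abs(D[t[0]][t[1]]))
            swap_rows(k, i)
            swap_cols(k, j)
            for i in range(k + 1, m):         # clear column k by division
                row_op(i, k, -(D[i][k] // D[k][k]))
            for j in range(k + 1, n):         # clear row k by division
                col_op(j, k, -(D[k][j] // D[k][k]))
            if any(D[i][k] for i in range(k + 1, m)) or \
               any(D[k][j] for j in range(k + 1, n)):
                continue                      # nonzero remainders shrink the pivot
            bad = [(i, j) for i in range(k + 1, m) for j in range(k + 1, n)
                   if D[i][j] % D[k][k]]
            if not bad:
                break
            col_op(k, bad[0][1], 1)           # pull in a non-divisible entry
        if D[k][k] < 0:
            row_op(k, k, -2)                  # negate row k to fix the sign
    return D, P, Q
```

Because P and Q are products of elementary integer matrices, their inverses are automatically integer matrices, as Theorem 1 requires.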

a) U = O, a matrix of zeros, and

b) D'^{-1}(PB)' is an integer matrix.

The solutions are then all matrices of the form X = Q[D'^{-1}(PB)'; Z] (the semicolon separates stacked blocks) for some (n − r) × p integer matrix Z.

Proof. Suppose AX = B for the integer matrix X. Then by Theorem 1, P^{-1}DQ^{-1}X = B, and DQ^{-1}X = PB. Let Q^{-1}X = Y = [Y'; Z], where Y' consists of the first r rows of the n × p matrix Y. Note that since Q^{-1} and X are integer matrices, so are Y' and the (n − r) × p matrix Z. Then DY = PB translates into

[D', O; O, O][Y'; Z] = [(PB)'; U].

Thus U = O and D'Y' = (PB)', so Y' = D'^{-1}(PB)', which must then be an integer matrix. We also have that X = QY = Q[D'^{-1}(PB)'; Z].

Suppose now that X = Q[D'^{-1}(PB)'; Z] for the integer matrices D'^{-1}(PB)' and Z. Also suppose that U = O, so that PB = [(PB)'; O]. Then
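Theorem 2 can be exercised on a tiny instance. In the sketch below, A, B, P, Q, and D are illustrative data worked out by hand (a rank-1 matrix A, so r < m and r < n, and both the U block and the free block Z appear); they are not the paper's example.

```python
# A small worked instance of Theorem 2.  P, Q, D were computed by hand for
# this A (they satisfy P * A * Q = D); all the matrices are example data.

def mul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[2, 4], [1, 2]]          # rank r = 1, so r < m = 2 and r < n = 2
B = [[6], [3]]
P = [[0, 1], [1, -2]]
Q = [[1, -2], [0, 1]]
D = [[1, 0], [0, 0]]          # P * A * Q, with D' = diag(1)
r = 1

assert mul(mul(P, A), Q) == D

PB = mul(P, B)
PB_top, U = PB[:r], PB[r:]                # (PB)' and U
assert U == [[0]]                         # condition a): U = O
assert all(PB_top[i][j] % D[i][i] == 0    # condition b): D'^{-1}(PB)' integer
           for i in range(r) for j in range(len(B[0])))
Y_top = [[PB_top[i][j] // D[i][i] for j in range(len(B[0]))]
         for i in range(r)]

def solution(z):
    """X = Q * [D'^{-1}(PB)'; Z] for a 1 x 1 free block Z = [[z]]."""
    return mul(Q, Y_top + [[z]])

for z in range(-3, 4):        # every integer z yields a solution of AX = B
    assert mul(A, solution(z)) == B
```

Varying z sweeps out the full one-parameter family of integer solutions, exactly as the formula in the theorem asserts.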

X is an integer matrix, and by Theorem 1,

AX = P^{-1}DQ^{-1} Q[D'^{-1}(PB)'; Z] = P^{-1}[D', O; O, O][D'^{-1}(PB)'; Z] = P^{-1}[(PB)'; O] = P^{-1}PB = B.

Remark 1. Note that Theorem 2 is valid even if the nonzero entries d_11, ..., d_rr do not divide each other as in the Smith normal form.

3 Jacobson's Algorithm

The algorithm described in Jacobson proceeds by multiplying A on the left and right by integer elementary matrices with integer inverses, which is equivalent to 1) interchanging two rows or two columns, 2) multiplying a row or column by −1, and 3) adding an integer multiple of a row or column to another row or column. We refer the reader to Jacobson for details of the algorithm.

When we perform the algorithm by hand, we modify it by allowing division of a row by a constant. One consequence is that we may eliminate a common factor in a row. More significantly, this allows us to make d_ii = 1 for i ≤ r, so that the matrix D' in the theorem, and hence D'^{-1}, is the identity matrix I_r. As a result, P may not necessarily be an integer matrix, but with P and Q determined in this manner, we have the following corollary.

Corollary 1. AX = B for some integer matrix X if and only if
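The three operations listed above correspond to elementary integer matrices, each of which has an elementary integer inverse; that is why P and Q (and their inverses) stay integral. A minimal sketch, with helper names of our own choosing:

```python
# The three allowed operations as elementary integer matrices (3 x 3 case).
# Each one is undone by an operation of the same kind, so the inverse is
# again an integer matrix.

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def swap(n, i, j):              # interchange rows (or columns) i and j
    E = identity(n)
    E[i], E[j] = E[j], E[i]
    return E

def negate(n, i):               # multiply row (or column) i by -1
    E = identity(n)
    E[i][i] = -1
    return E

def add_multiple(n, i, j, c):   # add c times row j to row i
    E = identity(n)
    E[i][j] = c
    return E

def mul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

pairs = [(swap(3, 0, 2), swap(3, 0, 2)),                    # self-inverse
         (negate(3, 1), negate(3, 1)),                      # self-inverse
         (add_multiple(3, 0, 2, 5), add_multiple(3, 0, 2, -5))]
for E, E_inv in pairs:
    assert mul(E, E_inv) == identity(3)
```

Left-multiplication by such a matrix performs the row operation and right-multiplication the column operation, so P and Q, being products of these matrices, are invertible with integer inverses.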

a) U = O, a matrix of zeros, and

b) (PB)' is an integer matrix.

c) The solutions are then all matrices of the form X = Q[(PB)'; Z] (the semicolon separates stacked blocks) for some integer matrix Z.

When conditions a) and b) of Corollary 1 are true, all integer solutions of AX = B are found from part c) by letting the entries of Z range over the integers. In this case, if we create an n × r matrix Q_1 out of the first r columns of Q, and an n × (n − r) matrix Q_2 out of the last n − r columns of Q, then it is straightforward to show that the solution X can be written as Q_1(PB)' + Q_2 Z, where the first term is a particular integer solution of the original equation AX = B. Since PAQ_2 = O and P is invertible, Q_2 is an integer solution of the homogeneous equation; that is, AQ_2 = O, where O here is m × (n − r).

Remark 2. Note that Corollary 1 is true whenever invertible matrices P and Q can be found for which PAQ = [I_r, O; O, O] and Q and Q^{-1} are integer matrices.

4 The Other Cases

We now return to the cases we ignored before. The reader may verify the details for these cases.

Case 1: r = m = n. In this case, there is no U and no Z. Corollary 1 is still valid, but part a) is no longer relevant, (PB)' in part b) is just PB, and part c) reduces to X = QPB. Neither the row nor the column of zero matrices in PAQ will appear.

Case 2: r = m < n. In this case, there is no U. Corollary 1 is still valid, but without part a), and with (PB)' replaced by PB in parts b) and c). The row of zero matrices in PAQ will not appear.

Case 3: r = n < m. In this case, there is no Z. Corollary 1 is still valid, but part c) reduces to X = Q(PB)'. The column of zero matrices in PAQ will not appear.

5 The Modified Algorithm

We now describe our modified version of the algorithm found in Jacobson, and we will then illustrate the procedure by completing Example 1.

Step 1: Form the augmented matrix [A, B; I_n, *], where the block * is left blank.

Step 2: Let k = 1.

Step 3: Carry through the columns the repeated application of the division algorithm to the entries in row k of A. (If row k is all zeros but a later row is not, interchange rows to correct this.) With each application, retain the remainder when each entry is divided by the entry in row k of A with the smallest nonzero absolute value.

Step 4: Continue Step 3 until it produces a matrix in which the entries of row k of A are all 0 except for one entry, which is the greatest common divisor of the row.

Step 5: Divide row k of the augmented matrix by the entry described in Step 4.

Step 6: Interchange columns of A (as well as the matrix below it) to produce a matrix with a 1 in the pivot position and 0 elsewhere in the row.

Step 7: Use row operations to change the other elements in column k of A to 0.

Step 8: Increase k by 1 and repeat Steps 3 through 7.

Step 9: Continue Steps 3 through 8 until the matrix A has been changed to the form [I_r, O; O, O], where the row of zero matrices will appear only if r < m, and the column of zero matrices will appear only if r < n.

This sequence of row and column operations is equivalent to producing invertible matrices P and Q for which Q and Q^{-1} are integer matrices and PAQ = [I_r, O; O, O].

When dividing a row of the augmented matrix by the greatest common divisor of that row, that divisor must also divide the elements of that row where B originally appeared. If it doesn't, then there are no integer solutions. In that case, the columns of B for which the condition fails can be eliminated if one wants to find integer solutions for the remaining columns of B.
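Steps 3 and 4 amount to running the Euclidean algorithm across a row by column operations. The sketch below (our own helper, not the paper's code) reduces a single row to (g, 0, ..., 0), where g is the gcd of the row, while recording the column operations in a matrix Q:

```python
# Steps 3-4 on a single row: repeated division-algorithm column operations
# leave one entry equal to the gcd of the row and zeros elsewhere, with the
# column operations recorded in Q so that (original row) * Q = reduced row.

def reduce_row(row):
    r, n = row[:], len(row)
    Q = [[int(i == j) for j in range(n)] for i in range(n)]

    def col_op(i, j, c):               # col i <- col i + c * col j
        r[i] += c * r[j]
        for q in Q:
            q[i] += c * q[j]

    def swap(i, j):
        r[i], r[j] = r[j], r[i]
        for q in Q:
            q[i], q[j] = q[j], q[i]

    while any(r):
        p = min((j for j in range(n) if r[j]), key=lambda j: abs(r[j]))
        swap(0, p)                     # smallest nonzero entry is the pivot
        for j in range(1, n):          # replace each entry by its remainder
            col_op(j, 0, -(r[j] // r[0]))
        if not any(r[j] for j in range(1, n)):
            if r[0] < 0:
                col_op(0, 0, -2)       # fix the sign of the gcd
            break
    return r, Q
```

Applying the same column operations to the full augmented matrix, rather than to a single row, is exactly what the algorithm does.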

Because P is the product of the matrices corresponding to the row operations, and Q is the product of the matrices corresponding to the column operations, the augmented matrix at the end of the algorithm will be of the form

[ I_r  O | (PB)' ]
[  O   O |   U   ]
[    Q   |   *   ]

where the row of zero matrices will appear only if r < m, and the column of zero matrices will appear only if r < n. It is now immediately apparent whether or not U is a matrix of zeros and whether or not (PB)' is an integer matrix. If either of these conditions is not met, then there are no integer solutions. If the conditions are met, the integer solutions are explicitly given by the equation for X in the corollary.

We should add that there are various ways in which one can carry out the procedure. It doesn't matter whether one generates the standard D described by Jacobson and then divides through to convert D to I_r, or one converts the diagonal elements to 1 as they are generated. Similarly, it doesn't matter whether one simplifies the rows by dividing through by a common factor. Also, rather than strictly following the algorithm, one might do the allowable operations in a different order, leading to a different P and Q, as long as the ultimate effect is to convert the matrix A to the form described in Step 9 of the algorithm.

Example 1 (revisited). Find all integer solutions to the system AX = B, with A and B as given in Section 1.

We create the augmented matrix and start with row 1. Following Step 3 of the algorithm, we reduce the entries in row 1 with the column operations C_1 − 4C_4 → C_1, C_2 − C_4 → C_2, and C_3 − C_4 → C_3. In Step 4, we reduce the other elements in row 1 to 0 with C_3 − 7C_2 → C_3 and C_4 − 9C_2 → C_4. Step 5 is unnecessary, since the remaining element in row 1 is 1. In Step 6, we swap columns 1 and 2. In Step 7, we clear the rest of column 1 by subtracting multiples of row 1 from rows 2 and 3. [The intermediate augmented matrix was lost in transcription.]

Rather than continue to follow the algorithm strictly, we will reduce the numbers in rows 2 and 3 by dividing these rows by 10 and 5, respectively. We then let k = 2 and go through Steps 3 through 7 for row 2. We first perform C_2 + C_3 → C_2 and C_4 − C_3 → C_4. We next perform C_3 + 3C_2 → C_3 and C_4 + 2C_2 → C_4. We finally get a 1 in row 2 with C_2 − 4C_4 → C_2 and C_3 + C_4 → C_3. To finish Step 4, we only need C_4 + 2C_3 → C_4. Step 5 is again unnecessary, although it would have been necessary had we not divided the rows earlier. We perform Step 6 by interchanging columns 1 and 2. Step 7

only requires R_3 − 3R_2 → R_3. From the resulting augmented matrix we read off (PB)' and Q, and the complete set of integer solutions is X = Q[(PB)'; Z] with Z = [z_1, z_3; z_2, z_4]. [The displayed matrices were lost in transcription; the first entry of the solution matrix is 12,705 + 7z_1 − 12z_2.] Here z_1, z_2, z_3, and z_4 are arbitrary integers.

This procedure sometimes generates numbers in the final answer that are unnecessarily large. This may be remedied by a change of variable. Consider the first element, 12,705 + 7z_1 − 12z_2. Since 12,705/12 is large, it helps to absorb most of the constant by replacing z_2 with a shifted copy of itself. (The negative sign in front of z_2 is to make the new z_2 nonnegative when we look for nonnegative solutions. For the same reason, we will replace z_1 with −z_1.) Similarly, the numbers in the second column can be reduced by replacing z_4 with z_4 − 891, and we will replace z_3 with −z_3 so that z_3 will have a nonnegative value. The result is a solution matrix with much smaller entries, beginning with 3 + 7z_1 − 12z_2, for z_1, z_2, z_3, z_4 ∈ Z,

the complete set of integer solutions. As we mentioned after the corollary, this could also be written as X = Q_1(PB)' + Q_2 Z, where the first term is a particular integer solution of the original equation AX = B and the second term is of the form Q_2 Z, with Q_2 an integer solution of the homogeneous equation. [The numerical display was lost in transcription.]

The primary problem of finding all integer solutions to the matrix equation has now been solved. In the application to finding how many containers of each brand of feed should be used, we need to determine all nonnegative integer solutions, which is equivalent to finding the feasible region of an integer programming problem. Specifically, we must find all integers z_1, z_2, z_3, and z_4 such that all entries in the solution matrix X are nonnegative. In this case, the solutions are not hard to find by graphing four inequalities.

The first column has three solutions. The first is z_1 = 306 and z_2 = 178, from which we calculate that the numbers of containers of brands J, K, L, and M of feed are 3, 26, 16, and 0, respectively. The other two solutions are z_1 = 308, z_2 = 179 and z_1 = 310, z_2 = 180. These lead to the following numbers of containers of each brand of feed: 5, 16, 11, and 12, and 7, 6, 8, and 24, respectively. The second column has two solutions: z_3 = 259, z_4 = 151 and z_3 = 261, z_4 = 152. These lead to the following numbers of containers of each brand of feed: 4, 14, 10, and 9, and 6, 4, 5, and 21, respectively.
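Once a particular solution and the homogeneous columns are known, the hunt for nonnegative solutions can also be mechanized by scanning the free parameters over a bounded box. The vectors x0, h1, h2 below are made-up illustrative data, not the feed-mix numbers; in practice the scan bounds come from the inequalities themselves.

```python
# Scanning for nonnegative solutions of the form x0 + z1*h1 + z2*h2, where
# x0 is a particular solution and h1, h2 span the homogeneous solutions.
# The numbers are hypothetical illustrative data.

x0 = [1, 0, 2]
h1 = [1, -1, 0]
h2 = [-2, 1, 1]

def nonneg_solutions(bound):
    """All integer (z1, z2) in [-bound, bound]^2 for which every entry of
    x0 + z1*h1 + z2*h2 is nonnegative."""
    sols = []
    for z1 in range(-bound, bound + 1):
        for z2 in range(-bound, bound + 1):
            x = [x0[i] + z1 * h1[i] + z2 * h2[i] for i in range(len(x0))]
            if all(e >= 0 for e in x):
                sols.append((z1, z2))
    return sols
```

For these data the scan finds ten nonnegative solutions; in general the box must be taken large enough to contain the feasible region determined by the inequalities.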

6 Using Maple

Rather than carry out the algorithm by hand to get the solution in the form given by Corollary 1, we could take advantage of Maple's command for finding the Smith normal form of a matrix, as well as the matrices P and Q, and then apply Theorem 2. Using the matrix A of Example 1, the command

(D, P, Q) := SmithForm(A, method = integer, output = ['S', 'U', 'V'])

gives the matrices D, P, and Q. (Actually, since D is reserved in Maple, another variable name must be used. Notice also that the matrix D given by Maple is not square.) [The displayed matrices were lost in transcription.]

To see whether or not integer solutions exist, calculate PB. Both D and PB have exactly two nonzero rows, so according to Theorem 2, there are indeed integer solutions, and the complete set is given by X = Q[D'^{-1}(PB)'; Z] with Z = [z_1, z_3; z_2, z_4]. [The displayed solution matrix was lost in transcription.] As before, we can replace z_2 and z_4 with shifted copies of themselves to reduce the size of the numbers, obtaining a solution matrix with entries affine in z_1, z_2, z_3, z_4 ∈ Z.

14 The values (z 1, z 2 ) = (3, 2), (5, 3), and (7, 4) lead to the three solutions for column 1 mentioned before. The values (z 3, z 4 ) = (4, 3) and (6, 4) lead to the two solutions for column 2 mentioned before. 7 Other issues The problem of finding solutions to linear Diophantine matrix equations has also been addressed by Contejean and Devie 1, Domenjoud 2, and Pottier 8, as well as by others who have refined their methods. Our method is quite different from these in that it gives an explicit formula for the set of solutions, and also because it allows B to have more than one column. Papp and Vizvári 7 investigate methods for applications in chemistry. Dumas et al. 3 have investigated ways to calculate the Smith normal form of a sparse matrix more efficiently. Our method, of course, also applies to matrix equations of the form XA = B, since this is equivalent to the equation A t X t = B t. Our method can also find all integer solutions to the equation AX = B where A and B are matrices with rational entries. One remaining issue we wish we could resolve is to give an intuitive explanation to what this procedure is doing to the system. It is well known that the row operations of Gauss-Jordan transform the solution hyperplanes so they are parallel to the coordinate axes. We would like to provide a similar insight into the column operations of the procedure we have described, but we are not able to do so. We invite others to explore this question. 14

References

[1] E. Contejean and H. Devie, An efficient incremental algorithm for solving systems of linear Diophantine equations, Information and Computation 113 (1994).

[2] E. Domenjoud, Solving systems of linear Diophantine equations: an algebraic approach, in Mathematical Foundations of Computer Science 1991, A. Tarlecki, ed., Vol. 520 of Lecture Notes in Computer Science, Springer.

[3] J.-G. Dumas, B. D. Saunders, and G. Villard, On efficient sparse integer matrix Smith normal form computations, Journal of Symbolic Computation 32 (2001).

[4] N. Jacobson, Lectures in Abstract Algebra, Vol. 2: Linear Algebra, Van Nostrand.

[5] F. Lazebnik, On systems of linear Diophantine equations, Mathematics Magazine 69 (1996).

[6] M. Newman, Integral Matrices, Academic Press.

[7] D. Papp and B. Vizvári, Effective solution of linear Diophantine equation systems with an application in chemistry, Journal of Mathematical Chemistry 39 (2006).

[8] L. Pottier, Minimal solutions of linear diophantine systems: bounds and algorithms, in Rewriting Techniques and Applications, R. V. Book, ed., Vol. 488 of Lecture Notes in Computer Science, Springer.

[9] H. J. S. Smith, On systems of linear indeterminate equations and congruences, Philosophical Transactions of the Royal Society of London 151 (1861).

[10] A. J. B. Ward, A matrix method for a system of linear Diophantine equations, The Mathematical Gazette 84 (2000).


5.3 Determinants and Cramer s Rule

290 5.3 Determinants and Cramer s Rule Unique Solution of a 2 2 System The 2 2 system (1) ax + by = e, cx + dy = f, has a unique solution provided = ad bc is nonzero, in which case the solution is given

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

MODULAR ARITHMETIC KEITH CONRAD. Introduction We will define the notion of congruent integers (with respect to a modulus) and develop some basic ideas of modular arithmetic. Applications of modular arithmetic

Mathematics of Cryptography Part I

CHAPTER 2 Mathematics of Cryptography Part I (Solution to Odd-Numbered Problems) Review Questions 1. The set of integers is Z. It contains all integral numbers from negative infinity to positive infinity.

Definition 1 Let a and b be positive integers. A linear combination of a and b is any number n = ax + by, (1) where x and y are whole numbers.

Greatest Common Divisors and Linear Combinations Let a and b be positive integers The greatest common divisor of a and b ( gcd(a, b) ) has a close and very useful connection to things called linear combinations

Chapter R - Basic Algebra Operations (69 topics, due on 05/01/12)

Course Name: College Algebra 001 Course Code: R3RK6-CTKHJ ALEKS Course: College Algebra with Trigonometry Instructor: Prof. Bozyk Course Dates: Begin: 01/17/2012 End: 05/04/2012 Course Content: 288 topics

Math 018 Review Sheet v.3

Math 018 Review Sheet v.3 Tyrone Crisp Spring 007 1.1 - Slopes and Equations of Lines Slopes: Find slopes of lines using the slope formula m y y 1 x x 1. Positive slope the line slopes up to the right.

( % . This matrix consists of \$ 4 5 " 5' the coefficients of the variables as they appear in the original system. The augmented 3 " 2 2 # 2 " 3 4&

Matrices define matrix We will use matrices to help us solve systems of equations. A matrix is a rectangular array of numbers enclosed in parentheses or brackets. In linear algebra, matrices are important

Further linear algebra. Chapter I. Integers.

Further linear algebra. Chapter I. Integers. Andrei Yafaev Number theory is the theory of Z = {0, ±1, ±2,...}. 1 Euclid s algorithm, Bézout s identity and the greatest common divisor. We say that a Z divides

We know a formula for and some properties of the determinant. Now we see how the determinant can be used.

Cramer s rule, inverse matrix, and volume We know a formula for and some properties of the determinant. Now we see how the determinant can be used. Formula for A We know: a b d b =. c d ad bc c a Can we

Congruences. Robert Friedman

Congruences Robert Friedman Definition of congruence mod n Congruences are a very handy way to work with the information of divisibility and remainders, and their use permeates number theory. Definition

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method

578 CHAPTER 1 NUMERICAL METHODS 1. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after

Solving Systems of Linear Equations; Row Reduction

Harvey Mudd College Math Tutorial: Solving Systems of Linear Equations; Row Reduction Systems of linear equations arise in all sorts of applications in many different fields of study The method reviewed

Linear Equations ! 25 30 35\$ & " 350 150% & " 11,750 12,750 13,750% MATHEMATICS LEARNING SERVICE Centre for Learning and Professional Development

MathsTrack (NOTE Feb 2013: This is the old version of MathsTrack. New books will be created during 2013 and 2014) Topic 4 Module 9 Introduction Systems of to Matrices Linear Equations Income = Tickets!

1 Eigenvalues and Eigenvectors

Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

Determinants. Dr. Doreen De Leon Math 152, Fall 2015

Determinants Dr. Doreen De Leon Math 52, Fall 205 Determinant of a Matrix Elementary Matrices We will first discuss matrices that can be used to produce an elementary row operation on a given matrix A.

Solving Linear Systems, Continued and The Inverse of a Matrix

, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

Row and column operations

Row and column operations It is often very useful to apply row and column operations to a matrix. Let us list what operations we re going to be using. 3 We ll illustrate these using the example matrix

5.5. Solving linear systems by the elimination method

55 Solving linear systems by the elimination method Equivalent systems The major technique of solving systems of equations is changing the original problem into another one which is of an easier to solve

Section Notes 4. Duality, Sensitivity, Dual Simplex, Complementary Slackness. Applied Math 121. Week of February 28, 2011

Section Notes 4 Duality, Sensitivity, Dual Simplex, Complementary Slackness Applied Math 121 Week of February 28, 2011 Goals for the week understand the relationship between primal and dual programs. know

Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix multiplication).

MAT 2 (Badger, Spring 202) LU Factorization Selected Notes September 2, 202 Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix

ON THE FIBONACCI NUMBERS

ON THE FIBONACCI NUMBERS Prepared by Kei Nakamura The Fibonacci numbers are terms of the sequence defined in a quite simple recursive fashion. However, despite its simplicity, they have some curious properties

Section Notes 3. The Simplex Algorithm. Applied Math 121. Week of February 14, 2011

Section Notes 3 The Simplex Algorithm Applied Math 121 Week of February 14, 2011 Goals for the week understand how to get from an LP to a simplex tableau. be familiar with reduced costs, optimal solutions,

Interpolating Polynomials Handout March 7, 2012

Interpolating Polynomials Handout March 7, 212 Again we work over our favorite field F (such as R, Q, C or F p ) We wish to find a polynomial y = f(x) passing through n specified data points (x 1,y 1 ),

Matrix Inverse and Determinants

DM554 Linear and Integer Programming Lecture 5 and Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1 2 3 4 and Cramer s rule 2 Outline 1 2 3 4 and

Linear Programming Problems

Linear Programming Problems Linear programming problems come up in many applications. In a linear programming problem, we have a function, called the objective function, which depends linearly on a number

CLASS NOTES. We bring down (copy) the leading coefficient below the line in the same column.

SYNTHETIC DIVISION CLASS NOTES When factoring or evaluating polynomials we often find that it is convenient to divide a polynomial by a linear (first degree) binomial of the form x k where k is a real

Chapter 2. Linear Equations

Chapter 2. Linear Equations We ll start our study of linear algebra with linear equations. Lots of parts of mathematics arose first out of trying to understand the solutions of different types of equations.

NUMERICAL METHODS C. Carl Gustav Jacob Jacobi 10.1 GAUSSIAN ELIMINATION WITH PARTIAL PIVOTING

0. Gaussian Elimination with Partial Pivoting 0.2 Iterative Methods for Solving Linear Systems 0.3 Power Method for Approximating Eigenvalues 0.4 Applications of Numerical Methods Carl Gustav Jacob Jacobi

12 Greatest Common Divisors. The Euclidean Algorithm

Arkansas Tech University MATH 4033: Elementary Modern Algebra Dr. Marcel B. Finan 12 Greatest Common Divisors. The Euclidean Algorithm As mentioned at the end of the previous section, we would like to

Solving a System of Equations

11 Solving a System of Equations 11-1 Introduction The previous chapter has shown how to solve an algebraic equation with one variable. However, sometimes there is more than one unknown that must be determined

A matrix over a field F is a rectangular array of elements from F. The symbol

Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F) denotes the collection of all m n matrices over F Matrices will usually be denoted

6. Cholesky factorization

6. Cholesky factorization EE103 (Fall 2011-12) triangular matrices forward and backward substitution the Cholesky factorization solving Ax = b with A positive definite inverse of a positive definite matrix

Zeros of Polynomial Functions

Zeros of Polynomial Functions The Rational Zero Theorem If f (x) = a n x n + a n-1 x n-1 + + a 1 x + a 0 has integer coefficients and p/q (where p/q is reduced) is a rational zero, then p is a factor of

Zeros of a Polynomial Function

Zeros of a Polynomial Function An important consequence of the Factor Theorem is that finding the zeros of a polynomial is really the same thing as factoring it into linear factors. In this section we

Lecture 6. Inverse of Matrix

Lecture 6 Inverse of Matrix Recall that any linear system can be written as a matrix equation In one dimension case, ie, A is 1 1, then can be easily solved as A x b Ax b x b A 1 A b A 1 b provided that

MATH 304 Linear Algebra Lecture 4: Matrix multiplication. Diagonal matrices. Inverse matrix.

MATH 304 Linear Algebra Lecture 4: Matrix multiplication. Diagonal matrices. Inverse matrix. Matrices Definition. An m-by-n matrix is a rectangular array of numbers that has m rows and n columns: a 11