# Week 5 Integral Polyhedra


We have seen some examples¹ of linear programming formulations that are integral, meaning that every basic feasible solution is an integral vector. This week we develop a theory of integral polyhedra that explains the integrality of these examples by a simple property of the constraint matrix of these programs.

¹ We have proved this for the maximum weight bipartite matching, minimum vertex cover, and minimum cut problems.

## 5.1 Totally Unimodular Matrices

Consider the integer program

$$\min \{\, c^T x : Ax = b,\ x \in \mathbb{Z}^n_+ \,\} \qquad \text{(IP)}$$

where $A \in \mathbb{Z}^{m \times n}$ and $b \in \mathbb{Z}^m$. We would like to know what properties $A$ and $b$ must have so that the linear relaxation

$$\min \{\, c^T x : Ax = b,\ x \ge 0 \,\} \qquad \text{(LP)}$$

always has an integral optimal solution. Recall that if the linear program has a bounded objective, there must be a basic feasible solution $x^*$ that is optimal. Let $B$ be a basis defining $x^*$. Then $x^*_B = A_B^{-1} b$, while all non-basic variables must be 0. It follows that if $A_B^{-1}$ is an integral matrix then $x^*$ must be integral as well. It turns out, as we shall prove shortly, that a sufficient condition for the integrality of $A_B^{-1}$ is $\det(A_B) \in \{-1, 1\}$. This motivates the following definition.

Definition 5.1. We say a matrix $A \in \mathbb{Z}^{m \times n}$ with full row rank is unimodular if the determinant of each basis of $A$ is $1$ or $-1$.

First we will show that the program (LP) is always integral whenever $A$ is unimodular, and that a weaker form of the converse also holds.

Copyright 2015 Julián Mestre. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. For the most recent version of these notes, please visit the class webpage.
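The sufficiency argument above can be checked numerically on a small instance. The following Python sketch (numpy is assumed; the matrix and right-hand side are made up for illustration, not taken from the notes) tests Definition 5.1 by enumerating all bases, and then computes a basic solution $x_B = A_B^{-1} b$, which comes out integral:

```python
import numpy as np
from itertools import combinations

def is_unimodular(A):
    """Definition 5.1 by brute force: every nonsingular m x m column
    submatrix (basis) of the full-row-rank matrix A has det +1 or -1."""
    m, n = A.shape
    for cols in combinations(range(n), m):
        d = round(np.linalg.det(A[:, cols]))
        if d not in (-1, 0, 1):   # 0 means the columns are not a basis
            return False
    return True

A = np.array([[1, 1, 0],
              [0, 1, 1]])   # illustrative unimodular matrix
b = np.array([3, 2])        # integral right-hand side
assert is_unimodular(A)

A_B = A[:, [0, 1]]             # basis: first two columns, det = 1
x_B = np.linalg.solve(A_B, b)  # x_B = A_B^{-1} b
print(x_B)                     # [1. 2.] -- integral, as promised
```

Replacing the 2 in the middle entry by anything larger breaks unimodularity, and basic solutions stop being integral for some integral $b$.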

Theorem 5.1. A full row rank matrix $A \in \mathbb{Z}^{m \times n}$ is unimodular if and only if the polyhedron $\{x \in \mathbb{R}^n : Ax = b,\ x \ge 0\}$ is integral for all $b \in \mathbb{Z}^m$.

Proof. We first prove the forward direction. Let $x^*$ be an extreme point and $B$ be a basis of $x^*$. Our goal is to show that $x^*$ is integral. From Cramer's rule² we know that

$$A_B^{-1} = \frac{\mathrm{Adj}(A_B)}{\det(A_B)},$$

where $\mathrm{Adj}(A_B)$ is the adjugate³ of $A_B$. Since $A_B$ is integral, it follows that $\mathrm{Adj}(A_B)$ must be integral as well. Together with the fact that $\det(A_B) \in \{-1, 1\}$, this implies the integrality of $A_B^{-1}$ and, therefore, that of $x^*$.

The backward direction is surprisingly easy. Let $B$ be a basis of $A$. Our goal is to show that $\det(A_B) \in \{-1, 1\}$. Let $b = A_B z + e_i$, where $e_i$ is the unit vector with a single 1 in its $i$th coordinate and zeros elsewhere, $i$ is an arbitrary index in $\{1, \ldots, m\}$, and $z$ is a suitable integral vector such that $x^*_B = A_B^{-1} b = z + A_B^{-1} e_i \ge 0$. This ensures that the basic solution induced by $B$ is feasible. Since (LP) is integral, it follows that the $i$th column of $A_B^{-1}$ must be integral. Because $i$ is arbitrary, it follows that every column of $A_B^{-1}$ is integral. Therefore both $\det(A_B)$ and $\det(A_B^{-1})$ are integral. Together with the relation

$$\det(A_B) \cdot \det(A_B^{-1}) = 1,$$

we conclude that $\det(A_B), \det(A_B^{-1}) \in \{-1, 1\}$.

² Cramer's rule, Wikipedia.

³ The exact definition of the adjugate of a matrix is not important. The only thing you need to know is that the entries of the adjugate are defined in terms of determinants of submatrices of the original matrix. Just for completeness, the adjugate of $A_B$ is defined as the transpose of its cofactor matrix, $\mathrm{Adj}(A_B) = C^T$, where $c_{ij}$ is the $(i,j)$-cofactor of $A_B$, which is defined as $(-1)^{i+j}$ times the determinant of the submatrix of $A_B$ that results from removing its $i$th row and its $j$th column.

This theorem can be generalized to linear programs that are not in standard form by refining the property we require from the constraint matrix.

Definition 5.2. We say a matrix $A$ is totally unimodular if every square submatrix of $A$ has determinant 0, 1, or $-1$.

Theorem 5.2. A matrix $A \in \mathbb{Z}^{m \times n}$ is totally unimodular if and only if the polyhedron $\{x \in \mathbb{R}^n : Ax \le b,\ x \ge 0\}$ is integral for all $b \in \mathbb{Z}^m$.

Proof sketch. For simplicity, here we assume $b \ge 0$, but a similar argument holds for general $b \in \mathbb{Z}^m$. The forward direction is similar to that of Theorem 5.1. For the other direction, it suffices to argue that $A$ is totally unimodular if and only if $[A \; I]$ is unimodular, and that the extreme points of the polyhedron $\{x \in \mathbb{R}^n : Ax \le b,\ x \ge 0\}$ are integral if and only if the extreme points of the polyhedron $\{(x, z) \in \mathbb{R}^{n+m} : Ax + z = b,\ x \ge 0,\ z \ge 0\}$ are integral.

Now that we have established the importance of these matrices, we shall explore some of their properties and alternative ways of proving that a given matrix is totally unimodular. Here are some examples to clarify our definitions so far. The matrix

$$\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}$$

is unimodular, but the matrix

$$\begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix}$$

is not, since the basis formed by its first two columns has determinant 2. The matrix

$$\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}$$

is totally unimodular, but the matrix

$$\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}$$

is not, since $\det \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} = 2$.
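Definition 5.2 can be verified mechanically for small matrices. A brute-force Python sketch (numpy assumed; the runtime is exponential, so this is only for tiny illustrative examples: a consecutive-ones matrix, which is totally unimodular, and a matrix containing a $2 \times 2$ submatrix of determinant 2, which is not):

```python
import numpy as np
from itertools import combinations

def is_totally_unimodular(A):
    """Definition 5.2 by brute force: every square submatrix of A
    must have determinant in {-1, 0, 1}."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

# A consecutive-ones (interval) matrix is totally unimodular ...
A_tu = np.array([[1, 1, 0],
                 [0, 1, 1],
                 [0, 0, 1]])
# ... while this matrix has determinant 2.
A_not = np.array([[1, 1],
                  [-1, 1]])
print(is_totally_unimodular(A_tu))   # True
print(is_totally_unimodular(A_not))  # False
```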

## 5.2 Properties of Totally Unimodular Matrices

Let $A$ be a totally unimodular matrix. There are a few observations we can make about these matrices:

i) The entries of $A$ must be 0, 1, or $-1$.

ii) The transpose $A^T$ is also totally unimodular.

iii) The matrix $\begin{bmatrix} A \\ I \end{bmatrix}$ is also totally unimodular.

iv) The matrix $\begin{bmatrix} A \\ -A \end{bmatrix}$ is also totally unimodular.

A neat corollary⁴ of these properties is that if $A$ is totally unimodular then the polyhedron $\{x \in \mathbb{R}^n_+ : a \le Ax \le b,\ l \le x \le u\}$ is integral for all integral vectors $a$, $b$, $l$, and $u$. Another corollary is that if $c$ and $b$ are integral and $A$ is totally unimodular, then both the primal program $\min\{c^T x : Ax \ge b,\ x \ge 0\}$ and the dual program $\max\{b^T y : A^T y \le c,\ y \ge 0\}$ are integral.

⁴ This corollary is a bit surprising. It means that if $P \subseteq \mathbb{R}^n_+$ is a polyhedron defined by a totally unimodular matrix, and $Q = [l_1, u_1] \times \cdots \times [l_n, u_n]$, then $P \cap Q$ is also integral. In general, this is not the case.

## 5.3 Alternative Characterization

While Definition 5.2 is well suited to prove the integrality of the polyhedron associated with a constraint matrix $A$, it is not always straightforward to prove that a particular matrix $A$ is totally unimodular. In this section we introduce an alternative characterization of totally unimodular matrices that is easier to work with.

A bi-coloring of a matrix $M$ is a partition of its columns into red and blue columns. We say the bi-coloring is equitable if for every row of $M$ the sum of its red entries differs from the sum of its blue entries by at most 1.

Theorem 5.3. A matrix $A$ is totally unimodular if and only if every column-induced submatrix of $A$ admits an equitable bi-coloring.

The proof of this theorem is beyond the scope of this class. Here we will just make use of the theorem to prove that certain matrices are totally unimodular. Notice that because $A$ is totally unimodular if and only if $A^T$ is totally unimodular, we have the equivalent condition that every row-induced submatrix of $A$ admits a row bi-coloring such that for every column the sum of the red entries differs from the sum of the blue entries by at most 1.

Lemma 5.1. The constraint matrix associated with the bipartite matching problem polyhedron

$$\Big\{ x \in \mathbb{R}^E_+ : \sum_{e \in \delta(u)} x_e \le 1 \ \ \forall u \in V \Big\}$$

is totally unimodular.

Proof. Let $A$ be the constraint matrix associated with the polyhedron. Each row of $A$ is associated with a vertex $u$ in our bipartite graph. If $u$ is in the left shore of the graph, then we color the corresponding row red. If $u$ is in the right shore of the graph, then we color the corresponding row blue. Each column of $A$ is associated with an edge $(u, v) \in E$, and it has a 1 in each of the rows associated with $u$ and $v$, and 0 elsewhere. It is easy to see that the bi-coloring is equitable for all row-induced submatrices of $A$. Therefore the matrix is totally unimodular.

Lemma 5.2. The constraint matrix associated with the circulation problem polyhedron

$$\Big\{ f \in \mathbb{R}^E_+ : \sum_{e \in \delta^{\mathrm{in}}(u)} f_e = \sum_{e \in \delta^{\mathrm{out}}(u)} f_e \ \ \forall u \in V \Big\}$$

is totally unimodular.

Proof. Let $A$ be the constraint matrix associated with the polyhedron. Each row of $A$ is associated with a vertex $u \in V$. We color all rows red. Each column of $A$ is associated with an edge $(u, v) \in E$, and has a $1$ in the row associated with $u$, a $-1$ in the row associated with $v$, and 0 elsewhere. It is easy to see that the bi-coloring is equitable for all row-induced submatrices of $A$. Therefore the matrix is totally unimodular.

To get a formulation for the maximum $s$-$t$ flow problem we just need to add the edge capacity constraints, which do not affect the total unimodularity of the constraint matrix, and maximize the linear objective $f_{ts}$. Furthermore, since the transpose of the resulting matrix is also totally unimodular, it follows that the dual is also integral, which, as we saw last week, corresponds to the minimum $s$-$t$ cut problem.

## 5.4 Subset Systems

We now turn our attention to a broad class of subset selection problems. These problems are defined by a pair $(U, I)$, where $U$ is a universal set of elements and $I \subseteq 2^U$ is a collection of feasible⁵ subsets of $U$. The pair $(U, I)$ is called a subset system if for any $S \subseteq T \subseteq U$ it holds that $T \in I$ implies $S \in I$. Given a cost function $c : U \to \mathbb{R}_+$, the canonical optimization problem associated with $(U, I)$ is

$$\max \{\, c(S) : S \in I \,\}.$$

Many natural problems⁶ fit under this framework. Some of them are hard and some, easy.
We would like to identify the properties of $I$ that make the corresponding optimization problem easy.

Definition 5.3. The rank function $r : 2^U \to \mathbb{Z}_+$ of the system $(U, I)$ is defined as follows:

$$r(S) = \max_{T \subseteq S \,:\, T \in I} |T|.$$

It is easy to show that the following is an integer formulation for the canonical problem:

$$\max \Big\{ \sum_{j \in U} c_j x_j : \sum_{j \in S} x_j \le r(S) \ \ \forall S \subseteq U, \quad x_j \in \{0, 1\} \ \ \forall j \in U \Big\}. \qquad \text{(IP-IS)}$$

⁵ In the optimization literature feasible sets are sometimes also called independent sets. Originally, a particular type of subset system was developed as an abstraction of linear independence: $U$ is a collection of vectors, and $I = \{S \subseteq U : \text{the vectors in } S \text{ are linearly independent}\}$.

⁶ For example, maximum weight spanning tree, maximum matching, and maximum weight independent set in graphs.
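Definition 5.3 is easy to compute by brute force on tiny systems. A Python sketch (the subset system here, matchings of a three-edge path, is invented for illustration):

```python
from itertools import combinations

def rank(S, feasible):
    """Rank function of Definition 5.3: size of the largest
    feasible subset of S. Brute force over all subsets of S."""
    S = list(S)
    for k in range(len(S), -1, -1):
        for T in combinations(S, k):
            if frozenset(T) in feasible:
                return k
    return 0

# Hypothetical subset system: matchings of the path a-b-c-d,
# with elements e1 = ab, e2 = bc, e3 = cd.
U = {'e1', 'e2', 'e3'}
feasible = {frozenset(), frozenset({'e1'}), frozenset({'e2'}),
            frozenset({'e3'}), frozenset({'e1', 'e3'})}
print(rank({'e1', 'e2'}, feasible))   # 1: e1 and e2 share vertex b
print(rank(U, feasible))              # 2: {e1, e3} is a maximum matching
```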

As we shall prove shortly, the following property is key in arguing that the linear relaxation of (IP-IS) is integral.

Definition 5.4. The subset system $(U, I)$ is called a matroid if for any two subsets $S, T \in I$ such that $|S| < |T|$ the following is true:

$$\exists\, x \in T \setminus S : S + x \in I. \qquad (5.1)$$

Equation 5.1 is referred to as the matroid exchange axiom.

Not every subset system has the matroid exchange property. For example, consider the subset system $(U, I)$ defined by the matching problem on a path with three edges, where $S$ consists of the middle edge and $T$ of the two outer edges. Both $S, T \in I$ and $|S| < |T|$, but it is not possible to add an edge from $T \setminus S$ to $S$ and remain feasible.

Theorem 5.4. Let $(U, I)$ be a matroid. Then the following polyhedron is integral:

$$\big\{ x \in \mathbb{R}^U_+ : x(S) \le r(S) \ \ \forall S \subseteq U \big\}.$$

Proof. Let $x^*$ be a vertex of the polyhedron. Our goal is to show that $x^*$ is integral. Since $x^*$ is a vertex, there is a cost vector $c$ that makes $x^*$ the unique optimal solution of the linear program

$$\max \Big\{ \sum_{j \in U} c_j x_j : \sum_{j \in S} x_j \le r(S) \ \ \forall S \subseteq U, \quad x_j \ge 0 \ \ \forall j \in U \Big\}.$$

The dual program is given by

$$\min \Big\{ \sum_{S \subseteq U} r(S)\, y_S : \sum_{S : j \in S} y_S \ge c_j \ \ \forall j \in U, \quad y_S \ge 0 \ \ \forall S \subseteq U \Big\}.$$

Let us re-number the elements of our universal set $U = \{1, \ldots, |U|\}$ so that $c_j \ge c_{j+1}$ for $j = 1, \ldots, |U| - 1$. We define a sequence of sets $\emptyset = S^{(0)} \subset S^{(1)} \subset \cdots \subset S^{(|U|)} = U$ as follows: $S^{(j)} = \{1, \ldots, j\}$ for $j = 0, \ldots, |U|$. Based on these sets we create a pair of primal and dual solutions:

$$x_j = r(S^{(j)}) - r(S^{(j-1)}) \quad \text{for } j = 1, \ldots, |U|,$$

and

$$y_S = \begin{cases} c_j - c_{j+1} & S = S^{(j)} \text{ and } j = 1, \ldots, |U| - 1, \\ c_{|U|} & S = S^{(|U|)}, \\ 0 & \text{otherwise.} \end{cases}$$

To show that $x^*$ is integral, we will argue that $x = x^*$. We will do this by proving the following properties:

i) $\sum_{j \in U} c_j x_j = \sum_{S \subseteq U} r(S)\, y_S$,

ii) $y$ is a feasible solution for the dual program,

iii) $x$ is a feasible solution for the primal program.

These properties imply that $x$ is optimal, and since there is a unique optimum, $x = x^*$. It is clear that $x$ is integral, so the theorem follows.

Let us start with the first property:

$$\sum_{j \in U} c_j x_j = \sum_{j=1}^{|U|} c_j \big( r(S^{(j)}) - r(S^{(j-1)}) \big) = \sum_{j=1}^{|U|-1} (c_j - c_{j+1})\, r(S^{(j)}) + c_{|U|}\, r(S^{(|U|)}) = \sum_{j=1}^{|U|} y_{S^{(j)}}\, r(S^{(j)}) = \sum_{S \subseteq U} r(S)\, y_S.$$

For the second property we note that for all $j \in U$,

$$\sum_{S : j \in S} y_S = \sum_{i=j}^{|U|} y_{S^{(i)}} = \sum_{i=j}^{|U|-1} (c_i - c_{i+1}) + c_{|U|} = c_j.$$

For the third property we let $S$ be an arbitrary subset of $U$. Choose $T$ to be a maximum cardinality feasible subset of $S$, and set $R = \{j \in S : x_j = 1\}$. We would like to argue that $|R| \le |T|$, which gives $\sum_{j \in S} x_j = |R| \le |T| = r(S)$. For the sake of contradiction, suppose that was not the case. Then, by the matroid exchange axiom (5.1), we know that there exists $j \in R \setminus T \subseteq S$ such that $T + j \in I$, contradicting the fact that $T$ is a maximum cardinality feasible subset of $S$.

Notice that we have not used anything about the subset system to prove the first and the second properties. Only for the last property did we need to make use of the matroid exchange axiom.

## Exercises

1. Argue that the matrix $A$ is totally unimodular if and only if $[A \; I]$ is unimodular.

2. Let $A \in \mathbb{Z}^{m \times n}$ and $b \in \mathbb{Z}^m_+$. Argue that the polyhedron $\{x \in \mathbb{R}^n : Ax \le b,\ x \ge 0\}$ is integral if and only if the polyhedron $\{(x, z) \in \mathbb{R}^{n+m} : Ax + z = b,\ x \ge 0,\ z \ge 0\}$ is integral.

3. Let $I_1, \ldots, I_n$ be a collection of open intervals of the real line. Each interval $I$ has an associated profit $p(I)$. The objective is to pick a maximum profit subset of intervals such that no two intervals overlap. Consider the following program:

$$\max \Big\{ \sum_{I} p(I)\, x_I : \sum_{I : p \in I} x_I \le 1 \ \ \forall p \in \mathbb{R}, \quad x_I \in \{0, 1\} \ \ \forall I \in \{I_1, \ldots, I_n\} \Big\}.$$

This is not a proper integer linear program, as it contains an infinite number of constraints. Show that there is a set $Q$ of cardinality $2n - 1$ such that for any $q \in \mathbb{R}$ there is $p \in Q$ such that the constraints generated by $p$ and $q$ are equivalent. Then show that the linear relaxation

$$\max \Big\{ \sum_{I} p(I)\, x_I : \sum_{I : p \in I} x_I \le 1 \ \ \forall p \in Q, \quad x_I \in [0, 1] \ \ \forall I \in \{I_1, \ldots, I_n\} \Big\}$$

has a totally unimodular constraint matrix.

4. Prove that the subset system associated with the interval selection problem described in the previous exercise does not have the matroid property.

5. Let $r$ be the rank function of a matroid $(U, I)$. Prove that for any $S, T \subseteq U$ we must have $r(S \cup T) + r(S \cap T) \le r(S) + r(T)$.

6. Let $(U, I)$ be the subset system defined by a set of vectors $U$ and the collection of linearly independent subsets of $U$. Prove that the system is a matroid.

7. Let $(U, I)$ be a matroid and $r$ be its rank function. Argue that the following linear program⁷ is integral:

$$\min \Big\{ \sum_{j \in U} c_j x_j : x(S) \le r(S) \ \ \forall S \subseteq U, \quad x(U) = r(U), \quad x_j \ge 0 \ \ \forall j \in U \Big\}.$$

⁷ Notice that for a matroid whose feasible sets correspond to acyclic subsets of edges, the program corresponds to finding a minimum weight spanning tree.

## Solutions of selected exercises

1. Argue that the matrix $A$ is totally unimodular if and only if $[A \; I]$ is unimodular.

Solution. We show that for every square submatrix of $A$ there is a column-induced submatrix of $[A \; I]$ having the same determinant up to a sign change. Let $M$ be the square submatrix of $A$ induced by the rows $R \subseteq \{1, \ldots, m\}$ and the columns $C \subseteq \{1, \ldots, n\}$. Let $T$ be the column-induced submatrix of $[A \; I]$ defined by the columns of $A$ indexed by $C$ and the columns of $I$ indexed by $\{1, \ldots, m\} \setminus R$. Notice that the relation is reversible: starting from $T$, we can construct $M$. At this point, it should be easy to see that $\det(M) = \pm \det(T)$. Indeed, we can compute $\det(T)$ by first expanding the determinant along the columns that come from $I$ until we are left with $M$. It follows that the determinant of every square submatrix of $A$ is in $\{-1, 0, 1\}$ if and only if the determinant of every column-induced submatrix of $[A \; I]$ is in $\{-1, 0, 1\}$.

2. Let $A \in \mathbb{Z}^{m \times n}$ and $b \in \mathbb{Z}^m_+$. Argue that the polyhedron $\{x \in \mathbb{R}^n : Ax \le b,\ x \ge 0\}$ is integral if and only if the polyhedron $\{(x, z) \in \mathbb{R}^{n+m} : Ax + Iz = b,\ x \ge 0,\ z \ge 0\}$ is integral.

Solution. We establish a one-to-one integrality-preserving correspondence between the extreme points of the two polyhedra. Let us denote the two polyhedra by $P_1$ and $P_2$, respectively. Let $x \in \mathbb{R}^n$ and $z = b - Ax$. Clearly, $x \in P_1$ if and only if $(x, z) \in P_2$. Furthermore, because $A$ and $b$ are integral, we know that $x$ is integral if and only if $(x, z)$ is integral. It only remains to show that $x$ is a basic feasible solution of $P_1$ if and only if $(x, z)$ is a basic feasible solution of $P_2$. Suppose $x$ is a basic feasible solution of $P_1$. Consider a maximal set of linearly independent tight constraints at $x$. Let $R \subseteq \{1, \ldots, m\}$ be the indices of those constraints that come from the system $Ax \le b$, and $C \subseteq \{1, \ldots, n\}$ be the indices of those constraints that come from $x \ge 0$. Recall that because $x$ is basic, we know that it is the only solution that meets those constraints with equality. Consider the basis formed by taking the columns of $I$ indexed by $\{1, \ldots, m\} \setminus R$ and the columns of $A$ indexed by $\{1, \ldots, n\} \setminus C$. It is not hard to see that $(x, z)$ is the basic feasible solution induced by this basis. Using a similar argument, we can justify the reverse direction: if $(x, z)$ is a basic feasible solution of $P_2$, then $x$ must be a basic feasible solution of $P_1$.
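The primal solution built in the proof of Theorem 5.4 can be computed directly for a linear matroid, the abstraction of linear independence mentioned earlier. In this Python sketch (numpy assumed; the matrix $V$ and costs $c$ are made-up data, with the costs already sorted in decreasing order), the values $x_j = r(S^{(j)}) - r(S^{(j-1)})$ come out 0/1, i.e., integral:

```python
import numpy as np

# Linear matroid: elements are the columns of V, a set is feasible
# when its columns are linearly independent, and r(S) = rank(V[:, S]).
V = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])
c = np.array([5.0, 4.0, 3.0, 2.0])   # c_1 >= c_2 >= ... >= c_n

def r(S):
    """Rank function of the linear matroid."""
    return 0 if not S else int(np.linalg.matrix_rank(V[:, sorted(S)]))

# x_j = r(S^(j)) - r(S^(j-1)) with S^(j) = {1, ..., j}.
n = len(c)
x = [r(set(range(j + 1))) - r(set(range(j))) for j in range(n)]
print(x)   # [1, 1, 0, 1] -- integral: column 3 = column 1 + column 2
```

The elements with $x_j = 1$ are exactly the ones the greedy algorithm keeps when scanning $U$ in decreasing cost order, which is the maximum weight basis of the matroid.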


### Partitioning edge-coloured complete graphs into monochromatic cycles and paths

arxiv:1205.5492v1 [math.co] 24 May 2012 Partitioning edge-coloured complete graphs into monochromatic cycles and paths Alexey Pokrovskiy Departement of Mathematics, London School of Economics and Political

### MATH36001 Background Material 2015

MATH3600 Background Material 205 Matrix Algebra Matrices and Vectors An ordered array of mn elements a ij (i =,, m; j =,, n) written in the form a a 2 a n A = a 2 a 22 a 2n a m a m2 a mn is said to be

### MATH 240 Fall, Chapter 1: Linear Equations and Matrices

MATH 240 Fall, 2007 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 9th Ed. written by Prof. J. Beachy Sections 1.1 1.5, 2.1 2.3, 4.2 4.9, 3.1 3.5, 5.3 5.5, 6.1 6.3, 6.5, 7.1 7.3 DEFINITIONS

### Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.

Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(

### Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

### The Real Numbers. Here we show one way to explicitly construct the real numbers R. First we need a definition.

The Real Numbers Here we show one way to explicitly construct the real numbers R. First we need a definition. Definitions/Notation: A sequence of rational numbers is a funtion f : N Q. Rather than write

### 160 CHAPTER 4. VECTOR SPACES

160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results

### Discrete (and Continuous) Optimization Solutions of Exercises 1 WI4 131

Discrete (and Continuous) Optimization Solutions of Exercises 1 WI4 131 Kees Roos Technische Universiteit Delft Faculteit Informatietechnologie en Systemen Afdeling Informatie, Systemen en Algoritmiek

### Basic solutions in standard form polyhedra

Recap, and outline of Lecture Previously min s.t. c x Ax = b x Converting any LP into standard form () Interpretation and visualization () Full row rank assumption on A () Basic solutions and bases in

### princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora Scribe: One of the running themes in this course is the notion of

### AN ALGORITHM FOR DETERMINING WHETHER A GIVEN BINARY MATROID IS GRAPHIC

AN ALGORITHM FOR DETERMINING WHETHER A GIVEN BINARY MATROID IS GRAPHIC W. T. TUTTE. Introduction. In a recent series of papers [l-4] on graphs and matroids I used definitions equivalent to the following.

### Solutions to Exercises 8

Discrete Mathematics Lent 2009 MA210 Solutions to Exercises 8 (1) Suppose that G is a graph in which every vertex has degree at least k, where k 1, and in which every cycle contains at least 4 vertices.

### Mathematics of Cryptography

CHAPTER 2 Mathematics of Cryptography Part I: Modular Arithmetic, Congruence, and Matrices Objectives This chapter is intended to prepare the reader for the next few chapters in cryptography. The chapter

### Discrete Optimization

Discrete Optimization [Chen, Batson, Dang: Applied integer Programming] Chapter 3 and 4.1-4.3 by Johan Högdahl and Victoria Svedberg Seminar 2, 2015-03-31 Todays presentation Chapter 3 Transforms using

### MATH10212 Linear Algebra B Homework 7

MATH22 Linear Algebra B Homework 7 Students are strongly advised to acquire a copy of the Textbook: D C Lay, Linear Algebra and its Applications Pearson, 26 (or other editions) Normally, homework assignments

### SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH

31 Kragujevac J. Math. 25 (2003) 31 49. SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH Kinkar Ch. Das Department of Mathematics, Indian Institute of Technology, Kharagpur 721302, W.B.,

### Counting spanning trees

Counting spanning trees Question Given a graph G, howmanyspanningtreesdoesg have? (G) =numberofdistinctspanningtreesofg Definition If G =(V,E) isamultigraphwithe 2 E, theng e (said G contract e ) is the

### 1 Solving LPs: The Simplex Algorithm of George Dantzig

Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.

### 1 Introduction. Linear Programming. Questions. A general optimization problem is of the form: choose x to. max f(x) subject to x S. where.

Introduction Linear Programming Neil Laws TT 00 A general optimization problem is of the form: choose x to maximise f(x) subject to x S where x = (x,..., x n ) T, f : R n R is the objective function, S

### Derivatives of Matrix Functions and their Norms.

Derivatives of Matrix Functions and their Norms. Priyanka Grover Indian Statistical Institute, Delhi India February 24, 2011 1 / 41 Notations H : n-dimensional complex Hilbert space. L (H) : the space

### Relations Graphical View

Relations Slides by Christopher M. Bourke Instructor: Berthe Y. Choueiry Introduction Recall that a relation between elements of two sets is a subset of their Cartesian product (of ordered pairs). A binary

### 1 Unrelated Parallel Machine Scheduling

IIIS 014 Spring: ATCS - Selected Topics in Optimization Lecture date: Mar 13, 014 Instructor: Jian Li TA: Lingxiao Huang Scribe: Yifei Jin (Polishing not finished yet.) 1 Unrelated Parallel Machine Scheduling

### Summary of week 8 (Lectures 22, 23 and 24)

WEEK 8 Summary of week 8 (Lectures 22, 23 and 24) This week we completed our discussion of Chapter 5 of [VST] Recall that if V and W are inner product spaces then a linear map T : V W is called an isometry

### v 1. v n R n we have for each 1 j n that v j v n max 1 j n v j. i=1

1. Limits and Continuity It is often the case that a non-linear function of n-variables x = (x 1,..., x n ) is not really defined on all of R n. For instance f(x 1, x 2 ) = x 1x 2 is not defined when x

### Notes on Linear Recurrence Sequences

Notes on Linear Recurrence Sequences April 8, 005 As far as preparing for the final exam, I only hold you responsible for knowing sections,,, 6 and 7 Definitions and Basic Examples An example of a linear

### The tree-number and determinant expansions (Biggs 6-7)

The tree-number and determinant expansions (Biggs 6-7) André Schumacher March 20, 2006 Overview Biggs 6-7 [1] The tree-number κ(γ) κ(γ) and the Laplacian matrix The σ function Elementary (sub)graphs Coefficients

### 1 Sets and Set Notation.

LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

### Name: Section Registered In:

Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are

### APPLICATIONS OF MATRICES. Adj A is nothing but the transpose of the co-factor matrix [A ij ] of A.

APPLICATIONS OF MATRICES ADJOINT: Let A = [a ij ] be a square matrix of order n. Let Aij be the co-factor of a ij. Then the n th order matrix [A ij ] T is called the adjoint of A. It is denoted by adj

### 2.3 Scheduling jobs on identical parallel machines

2.3 Scheduling jobs on identical parallel machines There are jobs to be processed, and there are identical machines (running in parallel) to which each job may be assigned Each job = 1,,, must be processed

### MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3

MATH0 Notebook Fall Semester 06/07 prepared by Professor Jenny Baglivo c Copyright 009 07 by Jenny A. Baglivo. All Rights Reserved. Contents MATH0 Notebook 3. Solving Systems of Linear Equations........................

### GRA6035 Mathematics. Eivind Eriksen and Trond S. Gustavsen. Department of Economics

GRA635 Mathematics Eivind Eriksen and Trond S. Gustavsen Department of Economics c Eivind Eriksen, Trond S. Gustavsen. Edition. Edition Students enrolled in the course GRA635 Mathematics for the academic

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### 3 Does the Simplex Algorithm Work?

Does the Simplex Algorithm Work? In this section we carefully examine the simplex algorithm introduced in the previous chapter. Our goal is to either prove that it works, or to determine those circumstances