# Lecture 11: Integer Linear Programs


## Integer Linear Programs

Consider the following version of Woody's problem, with x_1 = chairs per day and x_2 = tables per day:

    max z = 35x_1 + 60x_2        (profits)
    s.t. 14x_1 + 26x_2 ≤ 190     (pine)
                 15x_2 ≤  60     (mahogany)
          8x_1 +  3x_2 ≤  92     (labor)
          x_1 ≥ 0, x_2 ≥ 0

Suppose that Woody insists on making a whole number of items each day. This is an example of an integer linear program (ILP): the objective function and constraints satisfy the proportionality and additivity assumptions, but the variables are in addition required to be integer, that is, they violate the continuity assumption. Mixed integer LPs are those in which some of the variables are required to be integer while others can be continuously valued.

## Graphical Picture of Woody's ILP

(Figure: the feasible region of the LP relaxation, with the integer feasible points marked.)

## Dealing with Integer LPs

We wish to solve an ILP in canonical max form:

    max z = c_1 x_1 + c_2 x_2 + ... + c_n x_n

    (I)  a_11 x_1 + a_12 x_2 + ... + a_1n x_n ≤ b_1
         a_21 x_1 + a_22 x_2 + ... + a_2n x_n ≤ b_2
         ...
         a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ≤ b_m
         x_1 ≥ 0, x_2 ≥ 0, ..., x_n ≥ 0
         x_1, ..., x_n integer

Main tool: Try to solve this problem as an LP by dropping the integrality requirement. The corresponding LP is called the LP relaxation (R) of (I).

Facts:

1. The optimal solution value for (R) provides an upper bound on that of (I). (Proof: The set of feasible points for (R) is a superset of that of (I).)
2. If (R) has an optimal solution that is integer, then this is also an optimal solution to (I). (Proof: Any integer feasible solution to (I) provides a lower bound on the optimal objective value of (I), and by Fact 1 the value of (R) is an upper bound.)
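The relaxation (R) is an ordinary LP, so any LP solver applies. Here is a minimal sketch for Woody's problem, assuming SciPy is available (`linprog` and the variable names are our choices, not part of the lecture):

```python
from scipy.optimize import linprog

# LP relaxation (R) of Woody's ILP; linprog minimizes, so negate the profits
res = linprog(
    c=[-35, -60],
    A_ub=[[14, 26],   # pine
          [0, 15],    # mahogany
          [8, 3]],    # labor
    b_ub=[190, 60, 92],
    bounds=[(0, None), (0, None)],
)
x1, x2 = res.x            # fractional optimum, roughly (10.98, 1.40)
upper_bound = -res.fun    # roughly 468.0: an upper bound on the ILP value
```

By Fact 1, `upper_bound` bounds the best integer solution from above; since the solution is fractional, Fact 2 does not apply and more work is needed.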

Example: The 2- and 3-variable versions of Woody's original problem have relaxations with integer optimal solutions (x_1 = 12 and x_2 = 2), which are hence optimal for the corresponding ILPs. Unfortunately, the version of the LP given above has optimal solution x_1 = 10.98 and x_2 = 1.4, with objective function value 468.01. However, we do know that the optimal integer solution has value no greater than 468.01 (in fact, since the objective coefficients are integer, no greater than 468).

Question: How can we tell in advance that the relaxed LP will have an integer optimal solution? Are there classes of LPs that are guaranteed to always produce integer optimal solutions?

Answer: Yes! The main class comprises the LPs associated with network flow problems.

Network matrix: Any matrix A whose columns have the property that their nonzero entries consist of at most one +1 and at most one -1.

## Example: Shortest Path Problem

Consider finding the shortest path from s = 1 to t = 4 in the following network, where each arc (i, j) carries a cost c_ij given in the figure (for example, c_12 = 2). The LP for this problem can be thought of as sending one unit of flow through the system, starting at s and ending at t:

    min  2x_12 + c_13 x_13 + c_23 x_23 + c_32 x_32 + c_24 x_24 + c_34 x_34
    s.t.  x_12 + x_13                                  =  1    (node 1)
         -x_12        + x_23 - x_32 + x_24             =  0    (node 2)
                -x_13 - x_23 + x_32        + x_34      =  0    (node 3)
                             - x_24 - x_34             = -1    (node 4)
          x_ij ≥ 0

where x_ij = 1 if arc (i, j) is on the path and 0 otherwise. Note that the constraint matrix is a network matrix: the column for arc (i, j) has a +1 in row i and a -1 in row j.
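The flow-balance LP above can be set up and solved directly. In this sketch SciPy is assumed available, and all arc costs other than c_12 = 2 are made-up stand-ins (the true values come from the figure, which did not survive transcription):

```python
from scipy.optimize import linprog

# Arc order: (1,2), (1,3), (2,3), (3,2), (2,4), (3,4).
# Only c_12 = 2 is fixed by the text; the remaining costs are illustrative.
c = [2, 4, 1, 1, 5, 1]

# Flow-balance rows for nodes 1-3; the node-4 row is redundant and dropped,
# which also leaves the matrix with full row rank.
A_eq = [
    [ 1,  1,  0,  0,  0,  0],   # node 1: one unit leaves the source
    [-1,  0,  1, -1,  1,  0],   # node 2: flow out minus flow in = 0
    [ 0, -1, -1,  1,  0,  1],   # node 3: flow out minus flow in = 0
]
b_eq = [1, 0, 0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
# With these costs the basic optimal solution is automatically 0/1-valued:
# it selects the path 1 -> 2 -> 3 -> 4 at cost 2 + 1 + 1 = 4.
```

Even though no integrality was imposed, the solver returns a 0/1 vector, illustrating the integrality property of network LPs established next.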

Lemma: Let A be any m × m network matrix. Then A is unimodular, that is, det A = +1, -1, or 0.

Proof: Recall the definition of the determinant (cofactor expansion, which may be taken along any column j):

    det A = a_11                                           if m = 1
    det A = Σ_{i=1}^{m} (-1)^{i+j} a_ij · det A_[i,j]      if m > 1

where A_[i,j] is the matrix obtained by deleting the i-th row and j-th column.

Clearly if m = 1 then A = [±1] or [0], and so det A = ±1 or 0. Now proceed by induction, and suppose m > 1. Since A is a network matrix, every column of A has at most one +1 and at most one -1. If some column is entirely zero, then det A = 0. If there is a column j with exactly one nonzero element a_kj, then

    det A = (-1)^{k+j} a_kj · det A_[k,j] = (±1)(±1) det A_[k,j],

which is ±1 or 0 by induction, since A_[k,j] is an (m-1) × (m-1) network matrix. Finally, if every column has two nonzero elements (necessarily one +1 and one -1), then adding the rows of A together gives the zero vector, so A is singular and hence has determinant 0. ∎
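The lemma can be sanity-checked by brute force. This sketch (function and variable names are ours) enumerates every 3 × 3 matrix with entries in {-1, 0, +1}, keeps those that are network matrices, and confirms each determinant is -1, 0, or +1:

```python
import itertools
import numpy as np

def is_network_matrix(A):
    # every column: nonzero entries consist of at most one +1 and at most one -1
    return all(sorted(col[col != 0].tolist()) in ([], [1], [-1], [-1, 1])
               for col in A.T)

# Exhaustive check of the lemma over all 3x3 candidates.
checked = 0
for entries in itertools.product([-1, 0, 1], repeat=9):
    A = np.array(entries).reshape(3, 3)
    if is_network_matrix(A):
        assert round(np.linalg.det(A)) in (-1, 0, 1)
        checked += 1
```

There are 13 admissible columns (one all-zero, three with a single +1, three with a single -1, and six with a +1/-1 pair), so the loop verifies 13³ = 2197 network matrices.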

Theorem: Let (P) be an LP whose associated matrix is a network matrix and whose right-hand-side coefficients are integer. Then all basic solutions to (P) are integer.

Proof: We consider the constraints of (P) in equality form Ax = b, since any general LP can be put into equality form by adding slack columns, and the resulting matrix remains a network matrix. We also assume that A has full row rank m, since A remains a network matrix after the deletion of any redundant rows. Any basic solution then has the form x̂ = [x̂_B, x̂_N], partitioned corresponding to a nonsingular submatrix B of columns of A, with

    x̂_B = B⁻¹ b,   x̂_N = 0.

Recalling Cramer's Rule, we can write the components of x̂_B as

    x̂_{B_i} = det B_i / det B,   i = 1, ..., m,

where B_i is the matrix obtained from B by substituting b for the i-th column of B. The numerators of these fractions are always integer as long as b is integer, since B_i is then an integer matrix. The denominator, moreover, is always ±1: B is a square, nonsingular network matrix, so det B = ±1 by the lemma. Thus x̂_B is always integer. ∎
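Concretely, take the incidence matrix of the shortest-path network from the earlier slide; this sketch (assuming NumPy; the basis is an arbitrary illustrative choice) exhibits the unimodular basis determinant and the resulting integer basic solution:

```python
import numpy as np

# Node-arc incidence matrix of the 4-node network (rows: nodes 1-3; the
# redundant node-4 row is deleted to obtain full row rank).
# Arc order: (1,2), (1,3), (2,3), (3,2), (2,4), (3,4).
A = np.array([
    [ 1,  1,  0,  0,  0,  0],
    [-1,  0,  1, -1,  1,  0],
    [ 0, -1, -1,  1,  0,  1],
])
b = np.array([1, 0, 0])                    # integer right-hand side

# One (illustrative) basis: the columns for arcs (1,2), (2,3), (2,4).
B = A[:, [0, 2, 4]]
assert round(abs(np.linalg.det(B))) == 1   # det B = +/-1, as the lemma says
x_B = np.linalg.solve(B, b)                # basic solution: integer by Cramer
```

Here x_B = (1, 0, 1): one unit of flow on arcs (1,2) and (2,4), exactly as the theorem predicts an integer basic solution.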

Corollary: For any standard-form LP (P) whose associated matrix is a network matrix, if (P) is feasible then it has an integer feasible solution, and if (P) has an optimal solution then it has an integer optimal solution.

Proof: Recall that if (P) has a feasible solution, then it always has a basic feasible solution, and if (P) has an optimal solution, then it always has a basic feasible optimal solution. By the theorem, these basic solutions are integer. ∎

Comments:

1. The integrality conclusions of the theorem continue to hold for the dual solution to a network LP, so long as the costs are likewise integer.
2. Actually constructing the solutions to network LPs can be done much more easily than for general LPs; in fact, there are specialized LP solvers for network LPs that run several times faster than a general LP solver. (STOR724 provides the details.)
3. The network matrices above are almost the entire class of matrices that have this integrality property, in a technical sense we will not go into here.

## Dealing with Noninteger LP Solutions

What if the LP does not produce an integer solution?

Rounding heuristic: Try rounding the noninteger values up or down to the nearest feasible integer point.

Duality gap: The difference between the optimal relaxation LP value and the value of the current best integer solution. If the duality gap is 0 (or less than 1, when the costs are integer), then the current best solution is optimal.

Example: For Woody's problem, round the current solution x_1 = 10.98, x_2 = 1.4 down to x_1 = 10, x_2 = 1 (z = 410), or, if you are more clever, to x_1 = 11, x_2 = 1 (z = 445). The latter has a duality gap of 468 - 445 = 23. The optimal ILP solution, surprisingly, is x_1 = 8, x_2 = 3 (z = 460), still with a duality gap of 8.

Rounding is a good heuristic when the integer solutions have large values and there are no equality constraints. When the problem has a restricted feasible region, or the variables are binary (0-1), it does not perform well. Here we need more sophisticated techniques for determining optimality.
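The rounding heuristic and its duality gap can be reproduced in a few lines; a sketch assuming SciPy (the helper `feasible` is our own, not from the lecture):

```python
import math
from scipy.optimize import linprog

# Solve Woody's LP relaxation.
res = linprog([-35, -60], A_ub=[[14, 26], [0, 15], [8, 3]],
              b_ub=[190, 60, 92], bounds=[(0, None)] * 2)
x1, x2 = res.x                              # roughly (10.98, 1.40)

def feasible(x1, x2):
    return (14 * x1 + 26 * x2 <= 190 and 15 * x2 <= 60
            and 8 * x1 + 3 * x2 <= 92 and x1 >= 0 and x2 >= 0)

naive = (math.floor(x1), math.floor(x2))    # (10, 1): z = 410
clever = (11, 1)                            # also feasible: z = 445
assert feasible(*naive) and feasible(*clever)

gap = -res.fun - (35 * 11 + 60 * 1)         # duality gap, roughly 23
```

Neither rounding reaches the true optimum (8, 3) with z = 460, which is why the certifiable techniques below are needed.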

## The Branch-and-Bound Technique for Solving ILPs

When the relaxation fails to produce an integer optimum, we need to restrict the LP so as to cut the noninteger points out of the feasible region. This is done by branching on a variable with noninteger value. In particular, suppose x̂ is the current optimal LP relaxation solution, with x̂_j noninteger. Produce the two branching subproblems by bounding x_j away from its noninteger value:

    (I-)  max z = cx
          Ax ≤ b
          x ≥ 0, integer
          x_j ≤ ⌊x̂_j⌋

    (I+)  max z = cx
          Ax ≤ b
          x ≥ 0, integer
          x_j ≥ ⌈x̂_j⌉

where ⌊a⌋ is the greatest integer less than or equal to a, and ⌈a⌉ is the least integer greater than or equal to a.

Fact: Every integer solution to (I) will be feasible to one of (I-) and (I+). Thus the optimal solution to (I) can be found by solving (I-) and (I+) and choosing the solution with the larger objective function value.

## The Branch-and-Bound Tree

Solving each of the subproblems can in turn result in more branching, so many problems must be considered. To keep track of these problems, we use a branch-and-bound tree. Each node of this tree consists of one of the subproblems to be solved, with the root of the tree being the original problem. At each node the relaxation LP is solved, and a branching is performed on a chosen variable with noninteger optimal value, creating two new nodes corresponding to the two subproblems given above.

Fathoming: A node is fathomed if it is determined that no more branching needs to be done at that node.

## Fathoming Criteria

Fathoming by infeasibility: If the relaxation LP has no feasible solutions, then the subproblem has no integer feasible solutions, and so no more work needs to be done on that subproblem.

Fathoming by integrality: If the optimal relaxation LP solution is integer, then it is clearly the optimal solution to that subproblem, and in fact a feasible solution to the original ILP, called a candidate solution. Again, no more work needs to be done on that subproblem.

Fathoming by nonoptimality: Whenever a fathoming by integrality occurs, the candidate solution is noted, and the best of the candidate solutions for the entire problem is retained. If, at a particular node, the current relaxation LP solution has objective function value less than or equal to that of the current best candidate solution, then no integer solution for this subproblem can be better than the current best candidate (since the LP objective value is an upper bound on the optimal integer solution value for that subproblem), and again no more work needs to be done on that subproblem.

## Example

Starting with the ILP

    (S_1)  max z = 35x_1 + 60x_2
           14x_1 + 26x_2 ≤ 190
                   15x_2 ≤  60
            8x_1 +  3x_2 ≤  92
           x_1 ≥ 0, x_2 ≥ 0, integer

we have the optimal relaxation solution x̂_1 = 10.98 and x̂_2 = 1.4, with objective function value 468.01. This is not integer, so we branch on a noninteger variable, say x_2, to form the two LPs

    (S_2)  max z = 35x_1 + 60x_2
           14x_1 + 26x_2 ≤ 190
                   15x_2 ≤  60
            8x_1 +  3x_2 ≤  92
           x_2 ≤ 1
           x_1 ≥ 0, x_2 ≥ 0, integer

    (S_3)  max z = 35x_1 + 60x_2
           14x_1 + 26x_2 ≤ 190
                   15x_2 ≤  60
            8x_1 +  3x_2 ≤  92
           x_2 ≥ 2
           x_1 ≥ 0, x_2 ≥ 0, integer

Select node (S_3) for consideration. The optimal relaxation solution for (S_3) is x̂_1 = 9.86, x̂_2 = 2, with objective value z = 465. This is also not integer, so we branch on the noninteger variable x_1 to get the LPs

    (S_4)  max z = 35x_1 + 60x_2
           14x_1 + 26x_2 ≤ 190
                   15x_2 ≤  60
            8x_1 +  3x_2 ≤  92
           x_2 ≥ 2, x_1 ≤ 9
           x_1 ≥ 0, x_2 ≥ 0, integer

    (S_5)  max z = 35x_1 + 60x_2
           14x_1 + 26x_2 ≤ 190
                   15x_2 ≤  60
            8x_1 +  3x_2 ≤  92
           x_2 ≥ 2, x_1 ≥ 10
           x_1 ≥ 0, x_2 ≥ 0, integer

Select node (S_5) for consideration. The relaxed LP here is infeasible, and thus node (S_5) is fathomed by infeasibility. We next select node (S_4) for consideration. The optimal relaxation solution for (S_4) is x̂_1 = 9, x̂_2 = 2.46, with objective value z = 462.69. This is also not integer, so we branch on the noninteger variable x_2 to get the LPs

    (S_6)  max z = 35x_1 + 60x_2
           14x_1 + 26x_2 ≤ 190
                   15x_2 ≤  60
            8x_1 +  3x_2 ≤  92
           x_2 ≥ 2, x_1 ≤ 9, x_2 ≤ 2
           x_1 ≥ 0, x_2 ≥ 0, integer

    (S_7)  max z = 35x_1 + 60x_2
           14x_1 + 26x_2 ≤ 190
                   15x_2 ≤  60
            8x_1 +  3x_2 ≤  92
           x_2 ≥ 2, x_1 ≤ 9, x_2 ≥ 3
           x_1 ≥ 0, x_2 ≥ 0, integer

Select node (S_6) for consideration. The optimal relaxation solution for (S_6) is x̂_1 = 9, x̂_2 = 2, with objective value z = 435. This solution is integer, and so (S_6) is fathomed by integrality, and the candidate solution is stored as the current best solution.

We next select node (S_7). The optimal relaxation solution for (S_7) is x̂_1 = 8, x̂_2 = 3, with objective value z = 460. This is also integer, and its objective function value is larger than that of the current best solution found at node (S_6). We therefore replace that solution with this one as the best candidate solution, and (S_7) is fathomed by integrality.

Finally, select node (S_2). The optimal relaxation solution for (S_2) is x̂_1 = 11.13, x̂_2 = 1, with objective value z = 449.38. This value is less than that of the current best candidate solution, and so node (S_2) is fathomed by nonoptimality.

Since all nodes have now been fathomed, we have completed the evaluation of all relevant subproblems, and therefore the current best candidate solution, x̂_1 = 8, x̂_2 = 3, z = 460, is the optimal solution to the ILP.
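The whole procedure above can be sketched as a small LP-based branch-and-bound. This is our own minimal implementation, assuming SciPy and a bounded problem; it branches on the first fractional variable it sees, so its tree may differ from the one in the slides, but the optimum agrees:

```python
import math
from scipy.optimize import linprog

def branch_and_bound(profits, A_ub, b_ub, bounds):
    """Tiny branch-and-bound for a pure max-ILP (assumes a bounded problem)."""
    best_val, best_x = -math.inf, None
    nodes = [bounds]                          # each node = a list of variable bounds
    while nodes:
        bnds = nodes.pop()
        res = linprog([-c for c in profits], A_ub=A_ub, b_ub=b_ub, bounds=bnds)
        if not res.success:                   # fathom by infeasibility
            continue
        z = -res.fun
        if z <= best_val:                     # fathom by nonoptimality
            continue
        frac = [j for j, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
        if not frac:                          # fathom by integrality
            best_val, best_x = z, [int(round(v)) for v in res.x]
            continue
        j = frac[0]                           # branch on a fractional variable
        lo, hi = bnds[j]
        down, up = list(bnds), list(bnds)
        down[j] = (lo, math.floor(res.x[j]))  # subproblem with x_j <= floor(x_j)
        up[j] = (math.ceil(res.x[j]), hi)     # subproblem with x_j >= ceil(x_j)
        nodes += [down, up]
    return best_x, best_val

x_opt, z_opt = branch_and_bound([35, 60], [[14, 26], [0, 15], [8, 3]],
                                [190, 60, 92], [(0, None), (0, None)])
# recovers the optimum of the worked example: x_1 = 8, x_2 = 3, z = 460
```

All three fathoming criteria from the previous slide appear as the three `continue` statements.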

## Solving ILPs Over the Convex Hull

Recall Theorem 6.8: For any set I of feasible integer points of an ILP, any optimal BFS for the linear program max{cx : x ∈ C(I)} will be an optimal solution over the set I.

Example: The convex hull of integer solutions for Woody's ILP (pictured in the figure) has the following minimal description in terms of linear inequalities, in which each inequality describes a facet of C(I):

     x_1        ≤ 11
           x_2  ≤  4
     x_1 + 2x_2 ≤ 14
    2x_1 + 3x_2 ≤ 25
     x_1 ≥ 0, x_2 ≥ 0
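Theorem 6.8 can be checked numerically: optimizing Woody's objective over a linear-inequality description of C(I) (the inequalities below are our reconstruction; note they include the cut 2x_1 + 3x_2 ≤ 25 used on the next slide) yields the integer point (8, 3) as an optimal BFS. A sketch assuming SciPy:

```python
from scipy.optimize import linprog

# Reconstructed facet description of C(I), plus nonnegativity via bounds.
A_hull = [[1, 0],   # x_1 <= 11
          [0, 1],   # x_2 <= 4
          [1, 2],   # x_1 + 2 x_2 <= 14
          [2, 3]]   # 2 x_1 + 3 x_2 <= 25
b_hull = [11, 4, 14, 25]

res = linprog([-35, -60], A_ub=A_hull, b_ub=b_hull, bounds=[(0, None)] * 2)
# the optimal BFS is the integer point (8, 3), with value 460
```

Since every vertex of C(I) is an integer feasible point, no integrality constraints are needed at all once the hull description is known.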

## Cutting Planes

We don't know what these best inequalities are in advance, but we want to be able to produce good inequalities that get us as close as possible to describing C(I).

Example: Suppose we solve Woody's original LP relaxation to obtain the optimal solution x̂_1 = 10.98 and x̂_2 = 1.4. What if, instead of proceeding with a complicated branch-and-bound routine, we simply add the constraint

    2x_1 + 3x_2 ≤ 25

to the ILP? This constraint has the property that it (1) cuts off the current noninteger solution, and (2) does not cut off any integer feasible points.

(Figure: the feasible region with a level set of the objective and the added constraint.)

In this case, the new optimal relaxation solution x̂_1 = 8, x̂_2 = 3 is in fact integer, and hence it is immediately an optimal solution to Woody's ILP.
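The effect of the cut can be reproduced directly: append the row 2x_1 + 3x_2 ≤ 25 to the original constraints and re-solve the relaxation. A sketch assuming SciPy:

```python
from scipy.optimize import linprog

# Woody's original constraint rows plus the cutting plane 2 x_1 + 3 x_2 <= 25.
A_ub = [[14, 26], [0, 15], [8, 3], [2, 3]]
b_ub = [190, 60, 92, 25]

res = linprog([-35, -60], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
# the new relaxation optimum is integer: x_1 = 8, x_2 = 3, z = 460
```

A single well-chosen inequality replaces the entire branch-and-bound tree of the previous example.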

Such a constraint is called a cutting plane for the ILP relaxation. Adding a cutting plane ensures that the optimal integer solution remains that of the original ILP, while the optimal solution to the new relaxation will not be the same solution obtained from the previous LP. Cutting planes are extremely powerful tools for solving ILPs, since they can produce optimal solutions in a relatively small number of steps.

Types of cutting planes:

Gomory cuts: These are surrogate inequalities obtained from a row of the optimal tableau with noninteger right-hand side. The cutting-plane algorithm makes intelligent use of these to obtain an optimal solution to the ILP after adding a finite (but not necessarily polynomial) number of Gomory cuts.

Cuts for specialized problems: Many important OR problems, most notably the Traveling Salesman Problem, have been solved for fairly large instances by constructing specific classes of cuts, usually facets of C(I), that can separate noninteger points from C(I).


### NUMERICALLY EFFICIENT METHODS FOR SOLVING LEAST SQUARES PROBLEMS

NUMERICALLY EFFICIENT METHODS FOR SOLVING LEAST SQUARES PROBLEMS DO Q LEE Abstract. Computing the solution to Least Squares Problems is of great importance in a wide range of fields ranging from numerical

### A Column-Generation and Branch-and-Cut Approach to the Bandwidth-Packing Problem

[J. Res. Natl. Inst. Stand. Technol. 111, 161-185 (2006)] A Column-Generation and Branch-and-Cut Approach to the Bandwidth-Packing Problem Volume 111 Number 2 March-April 2006 Christine Villa and Karla

### MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

### NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

### 7 Gaussian Elimination and LU Factorization

7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### Chapter 2 Solving Linear Programs

Chapter 2 Solving Linear Programs Companion slides of Applied Mathematical Programming by Bradley, Hax, and Magnanti (Addison-Wesley, 1977) prepared by José Fernando Oliveira Maria Antónia Carravilla A

### EXCEL SOLVER TUTORIAL

ENGR62/MS&E111 Autumn 2003 2004 Prof. Ben Van Roy October 1, 2003 EXCEL SOLVER TUTORIAL This tutorial will introduce you to some essential features of Excel and its plug-in, Solver, that we will be using

### Scheduling Shop Scheduling. Tim Nieberg

Scheduling Shop Scheduling Tim Nieberg Shop models: General Introduction Remark: Consider non preemptive problems with regular objectives Notation Shop Problems: m machines, n jobs 1,..., n operations

### OPRE 6201 : 2. Simplex Method

OPRE 6201 : 2. Simplex Method 1 The Graphical Method: An Example Consider the following linear program: Max 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2

### Actually Doing It! 6. Prove that the regular unit cube (say 1cm=unit) of sufficiently high dimension can fit inside it the whole city of New York.

1: 1. Compute a random 4-dimensional polytope P as the convex hull of 10 random points using rand sphere(4,10). Run VISUAL to see a Schlegel diagram. How many 3-dimensional polytopes do you see? How many

### Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.

Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.

### Lecture 5 Principal Minors and the Hessian

Lecture 5 Principal Minors and the Hessian Eivind Eriksen BI Norwegian School of Management Department of Economics October 01, 2010 Eivind Eriksen (BI Dept of Economics) Lecture 5 Principal Minors and

### Solving Linear Programs

Solving Linear Programs 2 In this chapter, we present a systematic procedure for solving linear programs. This procedure, called the simplex method, proceeds by moving from one feasible solution to another,

### 7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix

7. LU factorization EE103 (Fall 2011-12) factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization algorithm effect of rounding error sparse

### Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

### 8.1 Makespan Scheduling

600.469 / 600.669 Approximation Algorithms Lecturer: Michael Dinitz Topic: Dynamic Programing: Min-Makespan and Bin Packing Date: 2/19/15 Scribe: Gabriel Kaptchuk 8.1 Makespan Scheduling Consider an instance

### Diagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions

Chapter 3 Diagonalisation Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation fo symmetric matrices Reading As in the previous chapter, there is no specific essential

### 4. MATRICES Matrices

4. MATRICES 170 4. Matrices 4.1. Definitions. Definition 4.1.1. A matrix is a rectangular array of numbers. A matrix with m rows and n columns is said to have dimension m n and may be represented as follows:

### Linear Algebra Notes

Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

### 1 Review of Least Squares Solutions to Overdetermined Systems

cs4: introduction to numerical analysis /9/0 Lecture 7: Rectangular Systems and Numerical Integration Instructor: Professor Amos Ron Scribes: Mark Cowlishaw, Nathanael Fillmore Review of Least Squares

### 9. Numerical linear algebra background

Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization

### Algorithm Design and Analysis

Algorithm Design and Analysis LECTURE 27 Approximation Algorithms Load Balancing Weighted Vertex Cover Reminder: Fill out SRTEs online Don t forget to click submit Sofya Raskhodnikova 12/6/2011 S. Raskhodnikova;

### Solution of Linear Systems

Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start