# Lecture 3: Linear Programming Relaxations and Rounding


## 1 Approximation Algorithms and Linear Relaxations

For the time being, suppose we have a minimization problem. Many times the problem at hand can be cast as an integer linear program: minimizing a linear function over the integer points of a polyhedron. Although this by itself doesn't benefit us, one can remove the integrality constraints on the variables to get a lower bound on the optimum value of the problem. Linear programs can be solved in polynomial time, yielding what is called a fractional solution. This solution is then rounded to an integer solution, and one normally shows that the rounded solution's cost is within some factor $\alpha$, say, of the fractional solution's cost. It is clear then that the rounded solution's cost is within $\alpha$ of the optimum cost.

**Example.** Let us take the set cover problem we did in the first class. Here is an integer program for it:

$$\min \sum_{j=1}^{m} c(S_j)\, x_j \tag{1}$$

$$\sum_{j \,:\, i \in S_j} x_j \ge 1 \qquad \forall i \in U \tag{2}$$

$$x_j \in \{0,1\} \qquad j = 1, \ldots, m \tag{3}$$

To get the LP relaxation, we replace constraint (3) by $0 \le x_j \le 1$. Suppose $x^*$ is an optimal solution to the resulting LP (to stand for linear program henceforth). By the discussion above, $\mathrm{opt} \ge \sum_{j=1}^{m} c(S_j)\, x^*_j$.

Now suppose we are told that each element appears in at most $f$ sets; that is, the frequency of each element is at most $f$. (In the special case of vertex cover, for example, the frequency of each element, i.e. each edge, is 2.) What can we say then? Well, let's look at (2). For each element of $U$, there are at most $f$ terms in the corresponding summation, and thus at least one of the $x^*_j$'s must be at least $1/f$. In other words, the algorithm which picks all sets $S_j$ with $x^*_j \ge 1/f$, that is, rounds their values up to 1, is assured to produce a set cover. Furthermore, the cost of this integral solution is at most $f \sum_{j=1}^{m} c(S_j)\, x^*_j \le f \cdot \mathrm{opt}$. Thus we get a simple $f$-approximation to set cover using linear programs.
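To make the rounding step concrete, here is a minimal Python sketch on a toy instance; the fractional solution `x_frac` is invented for illustration (as if returned by an LP solver) rather than computed by one.

```python
# Toy set cover instance: universe {0,1,2,3}; every element has frequency f = 2.
sets = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
cost = [1.0, 1.0, 1.0, 1.0]
x_frac = [0.5, 0.5, 0.5, 0.5]   # invented LP-feasible values: each element covered to 1

f = max(sum(1 for S in sets if e in S) for e in range(4))   # max element frequency

# Round up every set whose LP value is at least 1/f.
picked = [j for j, xj in enumerate(x_frac) if xj >= 1.0 / f]

covered = set().union(*(sets[j] for j in picked))
lp_cost = sum(c * xj for c, xj in zip(cost, x_frac))
alg_cost = sum(cost[j] for j in picked)

assert covered == {0, 1, 2, 3}   # the picked sets form a cover
assert alg_cost <= f * lp_cost   # cost at most f times the LP lower bound
```

Constraint (2) guarantees that, for every element, at least one of its at most $f$ sets has value $\ge 1/f$, so `picked` always covers the universe.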
Given a particular LP relaxation for an optimization problem, there is a limit on the best approximation factor obtainable by the process described above. Take an instance $I$ of the problem (set cover, say), and let $\mathrm{lp}(I)$ be the value of the LP relaxation and $\mathrm{opt}(I)$ the value of the optimum. Since our solution can't cost less than $\mathrm{opt}(I)$, the best approximation factor we can hope for on this instance is $\mathrm{opt}(I)/\mathrm{lp}(I)$. Thus, the best approximation factor one can prove for the minimization problem via this LP relaxation is no better than

$$\sup_{\text{instances } I} \frac{\mathrm{opt}(I)}{\mathrm{lp}(I)}.$$

This quantity is called the integrality gap of the LP relaxation and is very useful in judging the efficacy of a linear program. What is the integrality gap of the LP relaxation of (1) in terms of $f$?

### 1.1 Some facts regarding linear programs

A minimization linear program, in its full generality, can be written as

$$\min\ c^\top x \quad \text{subject to} \quad Ax \ge b. \tag{4}$$

The number of variables is the number of columns of $A$; the number of constraints is the number of rows of $A$. The $i$th column of $A$ is denoted $A_i$, the $i$th row $a_i$. Note that the constraints could include constraints of the form $x_i \ge 0$. These non-negativity constraints are often considered separately (for a reason); let's call the other constraints non-trivial constraints. For the discussion of this section, let $n$ be the number of variables and let $m$ denote the number of non-trivial constraints. Thus $A$ has $m + n$ rows and $\mathrm{rank}(A) = n$ (since it contains $I_{n \times n}$ as a submatrix).

A solution $x$ is a feasible solution if it satisfies all the constraints. Some constraints may be satisfied with equality, others as a strict inequality. Let $B$ be the submatrix of rows of $A$ satisfied with equality at $x$, so that $Bx = b_B$, where $b_B$ consists of the entries of $b$ corresponding to the rows of $B$; note that $B$ depends on $x$. A feasible solution $x$ is called basic if the constraints satisfied with equality span the entire space, that is, if $\mathrm{rank}(B) = n$. The following fact is a cornerstone of the theory of linear optimization.

**Fact 1.** There is an optimal solution to every LP which is a basic feasible solution.

Let $x$ be a basic feasible solution, and let $\mathrm{supp}(x) := \{i : x_i > 0\}$. Let $B$ be the basis such that $Bx = b_B$. Note that $B$ contains at most $n - |\mathrm{supp}(x)|$ non-negativity constraints. Therefore, at least $|\mathrm{supp}(x)|$ linearly independent non-trivial constraints are satisfied with equality. This fact will be used crucially a few classes later.

**Fact 2.** There is an optimal solution $x$ to every LP with at least $|\mathrm{supp}(x)|$ linearly independent non-trivial constraints satisfied with equality.

A linear program can be solved in polynomial time, polynomial in the number of variables and constraints, by using the ellipsoid algorithm. Actually, more is true. Suppose there is a polynomial (in $n$) time algorithm which, given a candidate solution $x$, checks whether $Ax \ge b$, and if not returns a violated inequality $a_i^\top x < b_i$. Such an algorithm is called a separation oracle. If a separation oracle exists, then LPs can be solved in time polynomial in only the number of variables. This can possibly be (and will be) used to solve LPs with exponentially (in $n$) many constraints. By the way, note that if $m$ is polynomial in $n$, then clearly there is a polytime separation oracle.
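For a system given explicitly by its rows, a separation oracle is just a linear scan; a minimal sketch (the toy system is invented for illustration):

```python
def separation_oracle(A, b, x):
    """Check whether Ax >= b; return None if feasible, else the index i of a
    violated row, i.e. one with a_i . x < b_i."""
    for i, (row, bi) in enumerate(zip(A, b)):
        if sum(aij * xj for aij, xj in zip(row, x)) < bi:
            return i
    return None

# Toy system: x1 >= 0, x2 >= 0, x1 + x2 >= 1.
A = [[1, 0], [0, 1], [1, 1]]
b = [0, 0, 1]
assert separation_oracle(A, b, [0.5, 0.5]) is None   # feasible point
assert separation_oracle(A, b, [0.2, 0.3]) == 2      # violates x1 + x2 >= 1
```

When there are exponentially many constraints, the scan is replaced by problem-specific structure (a shortest-path or min-cut computation, say); that is what makes the separation-oracle view useful.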

Let's talk a bit more about how the ellipsoid algorithm works. Actually, the ellipsoid method checks feasibility of a system of inequalities in polynomial time. This is equivalent to solving the LP: one guesses the optimum $M$ of the LP and adds the constraint $c^\top x \le M$ to the system $Ax \ge b$. Suppose that the system $Ax \ge b$ is bounded; if not, it can be made bounded by intersecting it with a fairly large cube. In each iteration of the ellipsoid algorithm, a candidate solution $x_t$ is generated. (It is the centre of a carefully constructed ellipsoid containing $\{x : Ax \ge b\}$; hence the name of the algorithm.) Now, the separation oracle either says that $x_t$ is feasible, in which case we are done, or else it returns a constraint $a_t^\top x \ge b_t$ (or $c^\top x \le M$) which is violated by $x_t$. The nontrivial theorem regarding the ellipsoid algorithm is that in polynomially (in $n$) many iterations, we either get a feasible solution or a proof that the system is infeasible. If the system is infeasible, then our guess $M$ was too low and we raise it; if the system is feasible, then we lower our guess and run again. By doing a binary search, we can reach the true optimum $M^*$ in polynomial time.

Consider the run of the ellipsoid algorithm when the guess was $M$, just below $M^*$; that is, the LP cannot take any value between $M$ and $M^*$. Let $(\hat A, \hat b)$ be the collection of polynomially many constraints and right-hand sides returned by the separation oracle in the various iterations, after which the algorithm guarantees infeasibility. Consider the system $\hat A x \ge \hat b$; we claim that $\min\{c^\top x : Ax \ge b\} = \min\{c^\top x : \hat A x \ge \hat b\}$. Surely, the second is no larger, since we have removed constraints. However, the ellipsoid algorithm certifies infeasibility of $\{c^\top x \le M,\ \hat A x \ge \hat b\}$. So $\min\{c^\top x : \hat A x \ge \hat b\} = M^*$ as well. Thus, given a cost vector $c$, the exponentially many constraints of the LP can essentially be reduced to a polynomial-sized LP.

We end our discussion with one final point. We claimed that there exists an optimum solution which is basic feasible. However, the ellipsoid algorithm is not guaranteed to return a basic feasible solution. Can we get an optimal basic feasible solution in polynomial time? More generally, given a point $x \in P := \{x : Ax \ge b\}$ which is guaranteed to minimize $c^\top x$ over all points of $P$, can we move to a basic feasible solution of $P$ of the same value? (We have reverted to the old notation where $A$ includes the non-negativity constraints as well.) Consider the equality submatrix $B$ with $Bx = b_B$. If $B$ is full rank, we are at a basic feasible solution. If not, let $v$ be a nonzero vector in the null space of $B$; that is, $a_i^\top v = 0$ for all rows $a_i$ of $B$. Such a vector can be found by solving a system of linear equations, for instance. Suppose $c^\top v \ge 0$ (otherwise replace $v$ by $-v$). Consider $x' = x - \alpha v$, where $\alpha$ is a positive scalar such that $a_i^\top (x - \alpha v) \ge b_i$ for all $i \notin B$. Since $P$ is bounded, we have $a_i^\top v > 0$ for some $i$. Choose

$$\alpha := \min_{i \notin B,\ a_i^\top v > 0} \frac{a_i^\top x - b_i}{a_i^\top v}.$$

What can we say about $x'$? It is feasible, and $c^\top x' \le c^\top x$ (implying $c^\top v = 0$, since $x$ was optimal). For the index $i$ attaining the minimum, $a_i^\top v > 0$ while $v$ is in the null space of $B$, so $a_i$ together with the rows of $B$ forms a linearly independent system. So, we move to a solution with an extra linearly independent constraint in the equality matrix. Repeat this till one gets a basic feasible solution.

**Fact 3.** An optimum basic feasible solution to an LP can be found in polynomial time.

**Remark 1.** A word of caution: whatever is written above about the ellipsoid method should be taken with a pinch of salt (although the above fact is true). After all, I have not described how the carefully constructed ellipsoids are generated. To make all this work in polynomial time is more non-trivial, and we refer the reader to the book *Geometric Algorithms and Combinatorial Optimization* by Grötschel, Lovász, and Schrijver.

Let's get back to approximation algorithms.
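One step of this move-to-basic procedure can be sketched as follows, assuming the direction `v` (a null-space vector of the tight rows with $c^\top v = 0$) is already in hand; the polyhedron and points below are invented for illustration.

```python
def step_to_tight_constraint(A, b, x, v):
    """From feasible x, move to x' = x - alpha*v with the largest alpha that
    keeps A x' >= b, so that one more constraint becomes tight. Assumes
    a_i . v > 0 for some row i (guaranteed when the polyhedron is bounded)."""
    dot = lambda u, w: sum(ui * wi for ui, wi in zip(u, w))
    alpha = min((dot(row, x) - bi) / dot(row, v)
                for row, bi in zip(A, b) if dot(row, v) > 1e-12)
    return [xj - alpha * vj for xj, vj in zip(x, v)]

# Unit square as {x >= 0, y >= 0, -x >= -1, -y >= -1}; from the interior point
# (0.5, 0.5), moving against v = (1, 0) makes the constraint x >= 0 tight.
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [0, 0, -1, -1]
x_new = step_to_tight_constraint(A, b, [0.5, 0.5], [1.0, 0.0])
assert x_new == [0.0, 0.5]
```

Each call makes one more independent constraint tight, so after at most $n$ such steps the equality submatrix has full rank.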

## 2 Facility Location

LP relaxation:

$$\min\ \sum_{i \in F} f_i\, y_i + \sum_{i \in F,\, j \in C} c(i,j)\, x_{ij} \qquad (x_{ij}, y_i \ge 0) \tag{5}$$

$$\sum_{i \in F} x_{ij} \ge 1 \qquad \forall j \in C \tag{6}$$

$$y_i \ge x_{ij} \qquad \forall i \in F,\ j \in C \tag{7}$$

Let $(x, y)$ be a fractional solution to the above LP. We define some notation. Let $F_{LP} := \sum_{i \in F} f_i\, y_i$. Let $C_j := \sum_{i \in F} c(i,j)\, x_{ij}$, and let $C_{LP} := \sum_{j \in C} C_j$. We now show an algorithm which returns a solution of cost at most $4(F_{LP} + C_{LP}) \le 4\,\mathrm{opt}$. The algorithm proceeds in two stages. The first stage is called filtering, which will take care of the $x_{ij}$'s; the second stage is called clustering, which will take care of the $y_i$'s.

**Filtering.** Given a client $j$, order the facilities in increasing order of $c(i,j)$; that is, $c(1,j) \le c(2,j) \le \cdots \le c(n,j)$. The fractional cost of connecting $j$ to the facilities is $C_j$; our goal is to pay not much more than this cost (say within a factor $\beta > 1$, which we will fix later). So, we look at the set of facilities $N_j(\beta) := \{i : c(i,j) \le \beta C_j\}$. Now it is possible that $x_{ij} > 0$ for some $i \notin N_j(\beta)$. However, we can change the solution so that $j$ is fractionally connected only to facilities in $N_j(\beta)$. This is called filtering the fractional solution. Define $\bar x$ as follows:

$$\bar x_{ij} = \frac{x_{ij}}{1 - 1/\beta} \ \text{ for } i \in N_j(\beta), \qquad \bar x_{ij} = 0 \ \text{ for } i \notin N_j(\beta).$$

**Claim 1.** $\bar x$ satisfies (6).

*Proof.* Consider the sum $C_j = \sum_i c(i,j)\, x_{ij} = \sum_{i \in N_j(\beta)} c(i,j)\, x_{ij} + \sum_{i \notin N_j(\beta)} c(i,j)\, x_{ij}$. Since $c(i,j) > \beta C_j$ for all $i \notin N_j(\beta)$, we get $\sum_{i \notin N_j(\beta)} x_{ij} < 1/\beta$, for otherwise the second term in the RHS above would exceed $C_j$. This implies $\sum_{i \in N_j(\beta)} x_{ij} \ge 1 - 1/\beta$, which implies $\sum_{i \in F} \bar x_{ij} \ge 1$. ∎

Suppose we could guarantee that a client $j$ is always connected to a facility $i$ with $\bar x_{ij} > 0$; then we could argue that the cost paid to connect this pair is at most $\beta C_j$. However, we have to argue about the facility opening costs as well. This is done using a clustering idea.

**Clustering.** Suppose we could find sets of disjoint facilities $F_1, F_2, \ldots$ such that in each $F_k$ we have $\sum_{i \in F_k} y_i \ge 1$, and we open only the minimum-cost facility $i_k$ in each $F_k$. Then our total facility opening cost would be $\sum_k f_{i_k} \le \sum_k \sum_{i \in F_k} f_i\, y_i \le F_{LP}$. In fact, if the 1 is replaced by $\alpha$, then our facility opening cost is at most $F_{LP}/\alpha$.

We now show how to obtain such a clustering. The clustering algorithm proceeds in rounds. In round $l$, the client $j_l$ with minimum $C_j$ among the clients remaining in $C$ is picked. The set $F_l$ is defined as the set of facilities $\{i : \bar x_{i j_l} > 0\}$. All clients $j$ such that there exists an $i \in F_l$ with $\bar x_{ij} > 0$ are deleted from $C$; let all these clients be denoted by the set $N^2(j_l)$. The procedure continues till $C$ becomes empty. Note that the $F_l$'s are disjoint. The algorithm opens the cheapest facility $i_l \in F_l$ and connects all clients $j \in N^2(j_l)$ to $i_l$.

Let's argue about the facility opening cost. Note that for each $F_l$ we have, by (7) and Claim 1,

$$\sum_{i \in F_l} y_i \ \ge\ \sum_{i \in F_l} x_{i j_l} \ \ge\ 1 - \frac{1}{\beta}.$$

Thus, the total facility opening cost is at most $\frac{1}{1 - 1/\beta}\, F_{LP}$.
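The filtering step for a single client can be sketched as follows; the costs and fractions are invented, and $\beta = 2$ is used instead of the lecture's eventual choice of $\beta = 4/3$ only to keep the arithmetic exact.

```python
def filter_client(costs, x, beta):
    """Filtering for one client j: keep only facilities with c(i,j) <= beta*C_j
    and rescale the kept fractions by 1/(1 - 1/beta), as in Claim 1.
    costs[i] = c(i,j); x[i] = x_ij from the fractional solution."""
    Cj = sum(c * xi for c, xi in zip(costs, x))
    near = {i for i, c in enumerate(costs) if c <= beta * Cj}
    scale = 1.0 / (1.0 - 1.0 / beta)
    xbar = [xi * scale if i in near else 0.0 for i, xi in enumerate(x)]
    return Cj, near, xbar

# One hypothetical client, fractionally connected to three facilities.
Cj, near, xbar = filter_client([1.0, 2.0, 9.0], [0.5, 0.25, 0.25], beta=2.0)
assert near == {0, 1}          # facility 2 is too far: 9 > 2 * C_j = 6.5
assert xbar == [1.0, 0.5, 0.0]
assert sum(xbar) >= 1.0        # Claim 1: the filtered solution still covers j
```

The point of the rescaling is exactly Claim 1: at most $1/\beta$ of the fractional connection can live on far facilities, so the near fractions carry at least $1 - 1/\beta$ and scaling them up restores constraint (6).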

Consider a client $j$. Either it is some $j_l$, in which case its assignment cost is at most $\beta C_j$ (it is connected to a facility of $F_l \subseteq N_{j_l}(\beta)$). Otherwise, it is connected to $i_l \in F_l$ for some $l$. However, there is some $i$ (not necessarily $i_l$) in $F_l$ such that $\bar x_{ij} > 0$. Now, by the triangle inequality,

$$c(i_l, j) \ \le\ c(i, j) + c(i, j_l) + c(i_l, j_l) \ \le\ \beta C_j + 2\beta C_{j_l} \ \le\ 3\beta C_j,$$

since $C_{j_l} \le C_j$; the latter holds because the $j_l$'s are considered in increasing order of $C_j$. So,

$$\mathrm{alg} \ \le\ \frac{1}{1 - 1/\beta}\, F_{LP} + 3\beta\, C_{LP}.$$

If $\beta = 4/3$, then the above gives $\mathrm{alg} \le 4(F_{LP} + C_{LP})$.

## 3 Generalized Assignment Problem

In GAP, we are given $m$ items $J$ and $n$ bins $I$. Each bin $i$ can take a maximum load of $B_i$. Each item $j$ weighs $w_{ij}$ in bin $i$, and gives profit $p_{ij}$ when put in it. The goal is to find a maximum-profit feasible allocation.

LP relaxation:

$$\max \sum_{i \in I,\, j \in J} p_{ij}\, x_{ij} \qquad (x_{ij} \ge 0) \tag{8}$$

$$\sum_{j \in J} w_{ij}\, x_{ij} \le B_i \qquad \forall i \in I \tag{9}$$

$$\sum_{i \in I} x_{ij} \le 1 \qquad \forall j \in J \tag{10}$$

Solve the LP to get a fractional solution $x$. Suppose all the weights $w_{ij}$ were equal to $B_i$, say. Then constraint (9) is equivalent to $\sum_{j \in J} x_{ij} \le 1$. Consider the bipartite graph $(I, J, E)$ where we have an edge $(i,j)$ if $x_{ij} > 0$. Then the solution $x$ above is a maximum-weight fractional matching in this bipartite graph. This implies that there is an integral matching of the same weight, and thus there is an allocation equalling the LP value (and hence the optimum value). This is a non-trivial combinatorial optimization fact and is key to the approximation algorithm below. Let's come back to this fact later.

Of course, $w_{ij}$ is not in general equal to $B_i$, although we can assume $w_{ij} \le B_i$, since if not we know $x_{ij}$ has to be 0 for such a pair. (Note this must be put into the above LP explicitly; that is, we must set $x_{ij} = 0$ for such pairs as constraints.) Thus, in general, $\sum_{j \in J} x_{ij}$ can exceed 1. Let $n_i := \lceil \sum_{j \in J} x_{ij} \rceil$. The algorithm proceeds as follows: it uses $x$ to define a fractional matching on a different bipartite graph, uses that to get an integral matching of at least the same weight, and then does a final step of pruning.

**New bipartite graph.** One side of the bipartition is $J$. The other side is $I'$, which consists of $n_i$ copies of each bin $i \in I$. For each bin $i \in I$, consider the items of $J$ in increasing weight order; that is, suppose $w_{i1} \le w_{i2} \le \cdots \le w_{im}$. Consider the fractions $x_{i1}, x_{i2}, \ldots, x_{im}$ in this order. Let $j_1, j_2, \ldots, j_{n_i - 1}$ be the boundary items, defined as follows: $\sum_{j=1}^{j_1} x_{ij} \ge 1$ and $\sum_{j=1}^{j_1 - 1} x_{ij} < 1$; $\sum_{j=1}^{j_2} x_{ij} \ge 2$ and $\sum_{j=1}^{j_2 - 1} x_{ij} < 2$; and so on. Formally, for each $1 \le l \le n_i - 1$,

$$\sum_{j=1}^{j_l} x_{ij} \ge l, \qquad \sum_{j=1}^{j_l - 1} x_{ij} < l.$$

Recall there are $n_i$ copies of the bin $i$ in $I'$. The $l$th copy, call it $i^l$, has an edge to each of the items $j_{l-1}$ through $j_l$ (where $j_0$ is item 1). The $n_i$th copy has an edge to each of the items $j_{n_i - 1}$ through $m$.
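The boundary items for one bin can be computed with a single scan over the weight-sorted fractions; the fractions below are invented for illustration.

```python
import math

def boundary_items(fracs):
    """Given the fractions x_i1, ..., x_im for one bin, listed in increasing
    order of item weight, return n_i and the boundary items j_1, ..., j_{n_i-1}:
    j_l is the first item at which the running sum of fractions reaches l."""
    n_i = math.ceil(sum(fracs))
    boundaries, running, l = [], 0.0, 1
    for j, frac in enumerate(fracs, start=1):   # items numbered 1..m
        running += frac
        while l <= n_i - 1 and running >= l:
            boundaries.append(j)
            l += 1
    return n_i, boundaries

# Invented fractions summing to 2.5, so this bin gets n_i = 3 copies.
n_i, bounds = boundary_items([0.5, 0.75, 0.5, 0.5, 0.25])
assert (n_i, bounds) == (3, [2, 4])   # running sum reaches 1 at item 2, 2 at item 4
```

Copy 1 then covers items 1..2, copy 2 items 2..4, and copy 3 items 4..5; the boundary items are shared by two consecutive copies because their fractions are split between them.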

**New fractional matching.** $x'$ is defined so that for every copy $i^l$, the total fractional weight incident on it is at most 1 (in fact, it will be exactly 1 for all copies but the $n_i$th), and the total fractional weight incident on each item $j$ is the same as that induced by $x$. It's clear how to do this given the way the edges are defined above; formally:

- $x'_{i^l, j} = x_{ij}$ for $j_{l-1} < j < j_l$;
- $x'_{i^l, j_{l-1}} = x_{i, j_{l-1}} - x'_{i^{l-1}, j_{l-1}}$ (if $l = 1$, the second term is 0);
- $x'_{i^l, j_l} = 1 - \sum_{j = j_{l-1}}^{j_l - 1} x'_{i^l, j}$.

**Integral matching and pruning.** Note that $x'$ defines a fractional matching in $(I', J, E')$. Thus, there is an integral allocation of items to $I'$ whose profit is at least the LP profit (and thus at least opt). The assignment to $I'$ implies an assignment to $I$ in the natural way: bin $i$ gets all items allocated to its $n_i$ copies. Is this feasible? Not necessarily. But the total weight allocated to bin $i$ is not too much larger than $B_i$.

**Claim 2.** The total weight of items allocated to bin $i$ by the integral matching above is at most $B_i + \Delta_i$, where $\Delta_i := \max_{j \in J} w_{ij}$.

*Proof.* We use the fact that the items (for bin $i$) were ordered in increasing order of weight. Let $J_l$ be the set of items from $j_{l-1}$ to $j_l$, and let $J_{n_i}$ be the items from $j_{n_i - 1}$ to $m$. Since $x$ is a feasible solution to the LP, we get

$$B_i \ \ge\ \sum_{j \in J} w_{ij}\, x_{ij} \ =\ \sum_{l=1}^{n_i} \sum_{j \in J_l} w_{ij}\, x'_{i^l, j} \ \ge\ \sum_{l=2}^{n_i} w_{i, j_{l-1}} \sum_{j \in J_l} x'_{i^l, j} \ \ge\ w_{i, j_1} + \cdots + w_{i, j_{n_i - 1}}.$$

The second inequality uses the increasing order of weights (every item of $J_l$ weighs at least $w_{i, j_{l-1}}$); the last uses the fact that $x'$ forms a fractional matching. Now, bin $i$ gets at most one item from each $J_l$ (since each copy $i^l$ gets at most one item from its neighbours in $J_l$). The heaviest item in $J_l$ weighs $w_{i, j_l}$, and the heaviest item in $J_{n_i}$ weighs at most $\Delta_i$. Thus, the load on bin $i$ is at most $w_{i, j_1} + \cdots + w_{i, j_{n_i - 1}} + \Delta_i$, which is at most $B_i + \Delta_i$. ∎

To summarize, we have found an allocation whose profit is at least opt and in which each bin $i$ has load at most $B_i + \Delta_i$. For each bin $i$, now consider the heaviest item $j$ allocated to it. We know that $w_{ij} \le B_i$. We also know that the total weight of all the other items allocated to it is at most $B_i$. To make the allocation feasible, keep either $j$ or the rest, whichever gives more profit. In this pruned allocation, each bin thus gets profit at least half of what it got in the earlier infeasible allocation. Thus, the profit of this new feasible allocation is at least opt/2, and the above algorithm is a 1/2-approximation.

### 3.1 Fractional Matching to Integral Matching

Given a fractional matching $x$, construct the bipartite graph $G = (I, J, E_f)$ where there is an edge between $i \in I$ and $j \in J$ whenever $0 < x_{ij} < 1$. We now describe a procedure which takes $x$ and converts it to $x'$ such that two things occur: (a) the number of edges in the corresponding $E_f$ is strictly smaller, and (b) $\sum_{i,j} w_{ij}\, x'_{ij} \ge \sum_{i,j} w_{ij}\, x_{ij}$. The fact follows by repeated application.

**Procedure Rotate.**

1. Pick a maximal path or a cycle in $G = (I, J, E_f)$. Call the set of picked edges $F$. Decompose $F$ into two matchings $M_1$ and $M_2$.
2. Let
$$\varepsilon_1 := \min\Big( \min_{(i,j) \in M_1} x_{ij},\ \min_{(i,j) \in M_2} (1 - x_{ij}) \Big), \qquad \varepsilon_2 := \min\Big( \min_{(i,j) \in M_2} x_{ij},\ \min_{(i,j) \in M_1} (1 - x_{ij}) \Big).$$
3. Define $x^{(1)}$ as follows: for each $(i,j) \in M_1$, $x^{(1)}_{ij} = x_{ij} - \varepsilon_1$; for each $(i,j) \in M_2$, $x^{(1)}_{ij} = x_{ij} + \varepsilon_1$; for all other edges, $x^{(1)}_{ij} = x_{ij}$. Define $x^{(2)}$ similarly, with the roles of $M_1$ and $M_2$ swapped and $\varepsilon_2$ in place of $\varepsilon_1$.
4. Let $x'$ be whichever of $x^{(1)}$ and $x^{(2)}$ has the larger value of $\sum_{i,j} w_{ij}\, x_{ij}$.

It is clear that the number of edges in $E_f$ decreases in this procedure: the edge attaining the minimum in the chosen $\varepsilon$ hits 0 or 1. Also note that $\sum_{i,j} w_{ij}\, x'_{ij} \ge \sum_{i,j} w_{ij}\, x_{ij}$. This is because the change in weight of $x^{(1)}$ over $x$ is precisely $\varepsilon_1 \big( \sum_{(i,j) \in M_2} w_{ij} - \sum_{(i,j) \in M_1} w_{ij} \big)$, and that of $x^{(2)}$ is $\varepsilon_2 \big( \sum_{(i,j) \in M_1} w_{ij} - \sum_{(i,j) \in M_2} w_{ij} \big)$; at least one of the two is non-negative. The following claim ends the analysis of the procedure.

**Claim 3.** $x'$ is a feasible fractional matching.

*Proof.* If $F$ forms a cycle, then it is clear that the fractional load on any vertex remains unchanged. If $F$ forms a path, then we need only worry about the end vertices. Let $i$ be such a vertex. By maximality of the path, there are no edges $(i, j')$ with $x_{ij'} = 1$ incident on it (a vertex with an edge of value 1 has no fractional edge incident, by feasibility), and exactly one edge $(i, j) \in E_f$ incident on it. So, in the end, $x'_{ij} \le 1$. ∎
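One Rotate step on a path can be sketched as follows; the edges, fractions, and weights are invented. Consecutive edges of a simple path share a vertex, so taking them alternately yields the two matchings $M_1$ and $M_2$.

```python
def rotate(edges, x, w):
    """One Rotate step. `edges` lists the path's edges in order (at least two
    edges); x and w map each edge to its fractional value and weight. Returns
    the shifted solution: some value hits 0 or 1, total weight does not drop."""
    M1, M2 = edges[0::2], edges[1::2]   # alternating edges form two matchings

    def shift(dec, inc):
        # Largest eps that keeps all shifted values inside [0, 1].
        eps = min(min(x[e] for e in dec), min(1 - x[e] for e in inc))
        y = dict(x)
        for e in dec:
            y[e] -= eps
        for e in inc:
            y[e] += eps
        return y

    weight = lambda y: sum(w[e] * y[e] for e in edges)
    cand1, cand2 = shift(M1, M2), shift(M2, M1)
    return cand1 if weight(cand1) >= weight(cand2) else cand2

# Path i1 - j1 - i2 - j2 with all fractional values 1/2.
edges = [("i1", "j1"), ("i2", "j1"), ("i2", "j2")]
x = {e: 0.5 for e in edges}
w = {("i1", "j1"): 1.0, ("i2", "j1"): 5.0, ("i2", "j2"): 1.0}
x_new = rotate(edges, x, w)
assert x_new == {("i1", "j1"): 0.0, ("i2", "j1"): 1.0, ("i2", "j2"): 0.0}
```

Here the heavy middle edge sits in $M_2$, so the step rounds it up to 1 and the total weight rises from 3.5 to 5; two of the three edges leave $E_f$ at once.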


### Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

### Optimization in R n Introduction

Optimization in R n Introduction Rudi Pendavingh Eindhoven Technical University Optimization in R n, lecture Rudi Pendavingh (TUE) Optimization in R n Introduction ORN / 4 Some optimization problems designing

### Lecture Topic: Low-Rank Approximations

Lecture Topic: Low-Rank Approximations Low-Rank Approximations We have seen principal component analysis. The extraction of the first principle eigenvalue could be seen as an approximation of the original

### Module1. x 1000. y 800.

Module1 1 Welcome to the first module of the course. It is indeed an exciting event to share with you the subject that has lot to offer both from theoretical side and practical aspects. To begin with,

### Large induced subgraphs with all degrees odd

Large induced subgraphs with all degrees odd A.D. Scott Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, England Abstract: We prove that every connected graph of order

### Duality in General Programs. Ryan Tibshirani Convex Optimization 10-725/36-725

Duality in General Programs Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: duality in linear programs Given c R n, A R m n, b R m, G R r n, h R r : min x R n c T x max u R m, v R r b T

### ONLINE DEGREE-BOUNDED STEINER NETWORK DESIGN. Sina Dehghani Saeed Seddighin Ali Shafahi Fall 2015

ONLINE DEGREE-BOUNDED STEINER NETWORK DESIGN Sina Dehghani Saeed Seddighin Ali Shafahi Fall 2015 ONLINE STEINER FOREST PROBLEM An initially given graph G. s 1 s 2 A sequence of demands (s i, t i ) arriving

### THE SCHEDULING OF MAINTENANCE SERVICE

THE SCHEDULING OF MAINTENANCE SERVICE Shoshana Anily Celia A. Glass Refael Hassin Abstract We study a discrete problem of scheduling activities of several types under the constraint that at most a single

### Social Media Mining. Graph Essentials

Graph Essentials Graph Basics Measures Graph and Essentials Metrics 2 2 Nodes and Edges A network is a graph nodes, actors, or vertices (plural of vertex) Connections, edges or ties Edge Node Measures

### The Goldberg Rao Algorithm for the Maximum Flow Problem

The Goldberg Rao Algorithm for the Maximum Flow Problem COS 528 class notes October 18, 2006 Scribe: Dávid Papp Main idea: use of the blocking flow paradigm to achieve essentially O(min{m 2/3, n 1/2 }

### An interval linear programming contractor

An interval linear programming contractor Introduction Milan Hladík Abstract. We consider linear programming with interval data. One of the most challenging problems in this topic is to determine or tight

### arxiv:1203.1525v1 [math.co] 7 Mar 2012

Constructing subset partition graphs with strong adjacency and end-point count properties Nicolai Hähnle haehnle@math.tu-berlin.de arxiv:1203.1525v1 [math.co] 7 Mar 2012 March 8, 2012 Abstract Kim defined

### ON THE COMPLEXITY OF THE GAME OF SET. {kamalika,pbg,dratajcz,hoeteck}@cs.berkeley.edu

ON THE COMPLEXITY OF THE GAME OF SET KAMALIKA CHAUDHURI, BRIGHTEN GODFREY, DAVID RATAJCZAK, AND HOETECK WEE {kamalika,pbg,dratajcz,hoeteck}@cs.berkeley.edu ABSTRACT. Set R is a card game played with a

### Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

### Existence of pure Nash equilibria (NE): Complexity of computing pure NE: Approximating the social optimum: Empirical results:

Existence Theorems and Approximation Algorithms for Generalized Network Security Games V.S. Anil Kumar, Rajmohan Rajaraman, Zhifeng Sun, Ravi Sundaram, College of Computer & Information Science, Northeastern

### 3. INNER PRODUCT SPACES

. INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

### Why? A central concept in Computer Science. Algorithms are ubiquitous.

Analysis of Algorithms: A Brief Introduction Why? A central concept in Computer Science. Algorithms are ubiquitous. Using the Internet (sending email, transferring files, use of search engines, online

### Completely Positive Cone and its Dual

On the Computational Complexity of Membership Problems for the Completely Positive Cone and its Dual Peter J.C. Dickinson Luuk Gijben July 3, 2012 Abstract Copositive programming has become a useful tool

### Minimal Cost Reconfiguration of Data Placement in a Storage Area Network

Minimal Cost Reconfiguration of Data Placement in a Storage Area Network Hadas Shachnai Gal Tamir Tami Tamir Abstract Video-on-Demand (VoD) services require frequent updates in file configuration on the

### 9.2 Summation Notation

9. Summation Notation 66 9. Summation Notation In the previous section, we introduced sequences and now we shall present notation and theorems concerning the sum of terms of a sequence. We begin with a

### The Conference Call Search Problem in Wireless Networks

The Conference Call Search Problem in Wireless Networks Leah Epstein 1, and Asaf Levin 2 1 Department of Mathematics, University of Haifa, 31905 Haifa, Israel. lea@math.haifa.ac.il 2 Department of Statistics,

### Lecture 4: BK inequality 27th August and 6th September, 2007

CSL866: Percolation and Random Graphs IIT Delhi Amitabha Bagchi Scribe: Arindam Pal Lecture 4: BK inequality 27th August and 6th September, 2007 4. Preliminaries The FKG inequality allows us to lower bound

### Mean Ramsey-Turán numbers

Mean Ramsey-Turán numbers Raphael Yuster Department of Mathematics University of Haifa at Oranim Tivon 36006, Israel Abstract A ρ-mean coloring of a graph is a coloring of the edges such that the average

### Fairness in Routing and Load Balancing

Fairness in Routing and Load Balancing Jon Kleinberg Yuval Rabani Éva Tardos Abstract We consider the issue of network routing subject to explicit fairness conditions. The optimization of fairness criteria

### The positive minimum degree game on sparse graphs

The positive minimum degree game on sparse graphs József Balogh Department of Mathematical Sciences University of Illinois, USA jobal@math.uiuc.edu András Pluhár Department of Computer Science University

### The Equivalence of Linear Programs and Zero-Sum Games

The Equivalence of Linear Programs and Zero-Sum Games Ilan Adler, IEOR Dep, UC Berkeley adler@ieor.berkeley.edu Abstract In 1951, Dantzig showed the equivalence of linear programming problems and two-person

### MATH1231 Algebra, 2015 Chapter 7: Linear maps

MATH1231 Algebra, 2015 Chapter 7: Linear maps A/Prof. Daniel Chan School of Mathematics and Statistics University of New South Wales danielc@unsw.edu.au Daniel Chan (UNSW) MATH1231 Algebra 1 / 43 Chapter

Adaptive Linear Programming Decoding Mohammad H. Taghavi and Paul H. Siegel ECE Department, University of California, San Diego Email: (mtaghavi, psiegel)@ucsd.edu ISIT 2006, Seattle, USA, July 9 14, 2006

### 1 Approximating Set Cover

CS 05: Algorithms (Grad) Feb 2-24, 2005 Approximating Set Cover. Definition An Instance (X, F ) of the set-covering problem consists of a finite set X and a family F of subset of X, such that every elemennt

### 1 Linear Programming. 1.1 Introduction. Problem description: motivate by min-cost flow. bit of history. everything is LP. NP and conp. P breakthrough.

1 Linear Programming 1.1 Introduction Problem description: motivate by min-cost flow bit of history everything is LP NP and conp. P breakthrough. general form: variables constraints: linear equalities

### The Ideal Class Group

Chapter 5 The Ideal Class Group We will use Minkowski theory, which belongs to the general area of geometry of numbers, to gain insight into the ideal class group of a number field. We have already mentioned

### Math 215 HW #6 Solutions

Math 5 HW #6 Solutions Problem 34 Show that x y is orthogonal to x + y if and only if x = y Proof First, suppose x y is orthogonal to x + y Then since x, y = y, x In other words, = x y, x + y = (x y) T

### Codes for Network Switches

Codes for Network Switches Zhiying Wang, Omer Shaked, Yuval Cassuto, and Jehoshua Bruck Electrical Engineering Department, California Institute of Technology, Pasadena, CA 91125, USA Electrical Engineering

### Energy Efficient Monitoring in Sensor Networks

Energy Efficient Monitoring in Sensor Networks Amol Deshpande, Samir Khuller, Azarakhsh Malekian, Mohammed Toossi Computer Science Department, University of Maryland, A.V. Williams Building, College Park,

### A Branch and Bound Algorithm for Solving the Binary Bi-level Linear Programming Problem

A Branch and Bound Algorithm for Solving the Binary Bi-level Linear Programming Problem John Karlof and Peter Hocking Mathematics and Statistics Department University of North Carolina Wilmington Wilmington,

### Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

### Zachary Monaco Georgia College Olympic Coloring: Go For The Gold

Zachary Monaco Georgia College Olympic Coloring: Go For The Gold Coloring the vertices or edges of a graph leads to a variety of interesting applications in graph theory These applications include various

### A Sublinear Bipartiteness Tester for Bounded Degree Graphs

A Sublinear Bipartiteness Tester for Bounded Degree Graphs Oded Goldreich Dana Ron February 5, 1998 Abstract We present a sublinear-time algorithm for testing whether a bounded degree graph is bipartite