Near Optimal Solutions



Near Optimal Solutions Many important optimization problems lack efficient solutions. NP-Complete problems are unlikely to have polynomial time solutions, so good heuristics are important for such problems. NP-Completeness only says that (unless P = NP) no polynomial time algorithm can solve every instance of the problem to optimality. Examples: TSP, Graph coloring, Clique, 3-SAT, Set covering, Knapsack.

Approximation Algorithms When faced with NP-Complete problems, our choices are: 1. Algorithms that compute the optimal solution but may sometimes take exponential time. 2. Algorithms that always run in polynomial time but may not always produce the optimal solution. We focus on the second kind, and study algorithms that always produce solutions of a guaranteed quality. We call them approximation algorithms.

0/1 Knapsack Problem We begin with the 0/1 knapsack problem. The input is a set of items {1, 2,..., n}. Item i has size s_i and value v_i. The knapsack can hold items of total size up to K. The knapsack problem is to choose the subset of items of maximum total value whose total size is at most K. Assume that all sizes are integers. Items cannot be taken partially.

0/1 Knapsack Problem Example: K = 50. Three items:

    Item  Size  Value
      1    10     60
      2    20    100
      3    30    120

The greedy strategy of always picking the next item with the maximum value/size ratio does not give the optimal solution: it takes item 1 (ratio 6) and then item 2 (ratio 5), for a total value of 160, while the optimal choice is items 2 and 3, with value 220.

Dynamic Program List the items in any order, 1, 2,..., n. We consider subproblems of the following form: A[i, s]: maximum value of items in the set {1, 2,..., i} that fit in a knapsack of size s. We want to compute A[n, K]. We develop a recursive formula for A[i, s].

Dynamic Program First, can we compute A[1, s], for all s? Yes! A[1, s] = 0 if s < s_1, and A[1, s] = v_1 if s ≥ s_1. Supposing we know A[i-1, s], for all s, how can we compute A[i, s]?

Knapsack Recurrence A[i, s] = max{ A[i-1, s], A[i-1, s - s_i] + v_i }, where the second term applies only if s_i ≤ s. By the Principle of Optimality, we have two choices: 1. We reject item i. In this case, A[i, s] equals the optimal solution A[i-1, s]. 2. We accept item i (only if s_i ≤ s). In this case, we must optimize the knapsack of size s - s_i over the items {1, 2,..., i-1}, and then add v_i.

Algorithm Knapsack(S, K)

    for s = 0 to s_1 - 1 do A[1, s] ← 0
    for s = s_1 to K do A[1, s] ← v_1
    for i = 2 to n do
      for s = 0 to K do
        if s_i > s then A[i, s] ← A[i-1, s]
        else A[i, s] ← max{ A[i-1, s], A[i-1, s - s_i] + v_i }
    return A[n, K]
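
Here is a minimal Python sketch of the same table-filling dynamic program; the function name knapsack_dp and the 0-indexed table layout are our choices, not part of the slides.

    def knapsack_dp(sizes, values, K):
        n = len(sizes)
        # A[i][s] = maximum value using the first i items with a knapsack of size s
        A = [[0] * (K + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for s in range(K + 1):
                A[i][s] = A[i - 1][s]                  # reject item i
                if sizes[i - 1] <= s:                  # accept item i, if it fits
                    A[i][s] = max(A[i][s], A[i - 1][s - sizes[i - 1]] + values[i - 1])
        return A[n][K]

    # The earlier example: greedy-by-ratio gets only 160, the DP finds the optimum 220.
    assert knapsack_dp([10, 20, 30], [60, 100, 120], 50) == 220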

Analysis The algorithm runs in time O(nK) if S has n items. It produces the optimal solution. But the knapsack problem is NP-Complete! So, what is the catch?

Algorithm The algorithm is not strictly polynomial. Its running time depends on the magnitude of the capacity K and the item sizes, not just on the number of items. A polynomial-time algorithm's running time should be polynomial in the length of the input, i.e., in the number of bits needed to write the numbers, not in their magnitudes. The knapsack algorithm is therefore called pseudo-polynomial.

Poly-time Approximation Sort the items in descending order of value/size. Assume v_1/s_1 ≥ v_2/s_2 ≥ ... ≥ v_n/s_n. Scan the items in this order, taking each item if there is still room in the knapsack. Quit when we can't take an item. Call this algorithm G (greedy), and let A_G(S) denote the value of the solution it produces on input S.

Poly-time Approximation Define the approximation ratio to be: max over all inputs S of A_O(S) / A_G(S), where A_O(S) is the optimal value. That is, over all possible inputs, how large is the ratio between the optimum and greedy's output?

Approx Ratio Unfortunately, this ratio is unbounded. Consider a knapsack of size K and two items, given as (size, value): (1, $1.01) and (K, $K). The greedy will take only item 1, while the optimal will take the second item. The ratio is K/1.01, which can be made arbitrarily large.

Poly-time Approximation Suppose we try to fix the algorithm as follows: Scan the items in sorted order, taking each item if there is still room in the knapsack. When the next item doesn't fit, output either the current knapsack solution or the single item of maximum value in S, whichever is better. This solves the pathological case above. But how good is this in general? Call it Modified Greedy.
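
A minimal Python sketch of Modified Greedy; the function name modified_greedy and the (size, value) pair format are our choices, and ties in the value/size order are broken arbitrarily.

    def modified_greedy(items, K):
        # items is a list of (size, value) pairs, K is the knapsack capacity
        order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)  # by value/size
        room, greedy_value = K, 0
        for size, value in order:
            if size > room:
                break                          # stop at the first item that does not fit
            room -= size
            greedy_value += value
        best_single = max((v for s, v in items if s <= K), default=0)
        return max(greedy_value, best_single)  # better of greedy pack and best single item

    # The pathological instance from the previous slide, with K = 1000: plain greedy
    # gets only 1.01, while Modified Greedy returns 1000, which is the optimum here.
    assert modified_greedy([(1, 1.01), (1000, 1000)], 1000) == 1000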

2 Approximation Theorem: A_O(S) / A_MG(S) ≤ 2 always. Proof. First consider a relaxed version of knapsack, in which items can be taken fractionally, if needed. Then A_O(S) ≤ A_F(S), because the fractional optimum can never be worse than the optimum of the 0/1 knapsack. The fractional optimal solution takes items in the sorted (value/size) order; only the last item taken may be fractional.

Proof If j is the first item in the list that Modified Greedy could not accept, then A_F(S) ≤ A_G(S) + v_j: the fractional optimum takes the same items as greedy plus a fraction of item j, worth at most v_j. Modified Greedy outputs either A_G(S) or the single item of maximum value, which is worth at least v_j. Thus A_O ≤ A_F ≤ A_G + v_j ≤ 2 max{A_G, v_j} ≤ 2 A_MG. So Modified Greedy never returns a solution worse than half of the optimal. The algorithm runs in O(n log n) time.

TSP Approximation Input: A set of n cities, 1, 2,..., n, and a distance d(i, j) between each pair (i, j). Find a minimum length tour that visits each city exactly once and returns to the starting city.

TSP Approximation The problem is a famous NP-Complete problem. The similarity to the MST problem is only superficial: no polynomial time algorithm for TSP is known, and none is likely to exist. If the distances satisfy the triangle inequality, d(i, j) ≤ d(i, k) + d(k, j), then a good approximation is possible.

TSP Approximation Compute an MST of the set. Do a walk around the MST visiting each edge twice. This walk visits each city at least once, and its cost is exactly 2·len(MST). Because len(MST) ≤ len(TSP) (deleting one edge of the optimal tour leaves a spanning tree, which cannot be cheaper than the MST), the cost is also at most 2·len(TSP). We can turn this walk into a tour by short-circuiting past any city that has already been visited.

TSP Approximation Because of the triangle inequality, the length of the new tour does not increase. [Figure: the doubling walk around the MST and the modified (short-circuited) tour, both starting from city s.] This is a 2-approximation for the TSP.
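
Below is a minimal, self-contained Python sketch of this MST-doubling heuristic; the function names prim_mst and tsp_2approx and the distance-matrix input format are our choices, not from the slides. The matrix must be symmetric and satisfy the triangle inequality for the 2-approximation guarantee to apply.

    def prim_mst(dist):
        # Return the MST edges of the complete graph given by the distance matrix.
        n = len(dist)
        in_tree = [False] * n
        best = [float("inf")] * n      # cheapest known edge from each vertex into the tree
        parent = [-1] * n
        best[0] = 0.0
        edges = []
        for _ in range(n):
            u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
            in_tree[u] = True
            if parent[u] != -1:
                edges.append((parent[u], u))
            for v in range(n):
                if not in_tree[v] and dist[u][v] < best[v]:
                    best[v], parent[v] = dist[u][v], u
        return edges

    def tsp_2approx(dist, start=0):
        # Visit the cities in a DFS (preorder) walk of the MST, i.e. the doubling
        # walk with already-visited cities short-circuited, then return to start.
        n = len(dist)
        adj = [[] for _ in range(n)]
        for u, v in prim_mst(dist):
            adj[u].append(v)
            adj[v].append(u)
        tour, seen, stack = [], [False] * n, [start]
        while stack:
            u = stack.pop()
            if seen[u]:
                continue               # short-circuit: skip cities already visited
            seen[u] = True
            tour.append(u)
            stack.extend(adj[u])
        tour.append(start)
        return tour, sum(dist[tour[i]][tour[i + 1]] for i in range(n))

    # Four cities at the corners of a unit square (Euclidean distances obey the
    # triangle inequality); the optimal tour has length 4, so ours is at most 8.
    d = [[0, 1, 1, 2**0.5], [1, 0, 2**0.5, 1], [1, 2**0.5, 0, 1], [2**0.5, 1, 1, 0]]
    tour, length = tsp_2approx(d)
    assert length <= 2 * 4.0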

A Game: Fair Division A cake needs to be divided among n players. We want a protocol for cutting the cake so that each player gets at least 1/n of the cake by his own measure. Questions: Can it be done? How many cuts are needed?

A Game: Fair Division Cake cutting is modeled as a 1-dimensional problem: the cake is an interval and a cut is a point in it. Players are selfish, non-cooperative, and care only about their own piece.

2 Players Folk Theorem: You cut, I choose! Player A cuts the cake into two parts. Player B then chooses the piece he wants.

A Game: Fair Division What's the best strategy for Player A? To cut the cake fairly into 2 parts, equal by his own measure. A gets at least 1/2 of the cake, because he makes the fair cut. B gets at least 1/2, because he can choose the piece that is bigger by his own measure.
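
A small simulation sketch of cut-and-choose, assuming each player's measure is given by a piecewise-constant density over equal sub-intervals of [0, 1]; this valuation model and the function names are ours, for illustration only.

    def measure(weights, a, b):
        # Value of the interval [a, b] when weights[j] is the player's value for
        # the sub-interval [j/m, (j+1)/m) and sum(weights) == 1.
        m = len(weights)
        total = 0.0
        for j, w in enumerate(weights):
            lo, hi = j / m, (j + 1) / m
            overlap = max(0.0, min(b, hi) - max(a, lo))
            total += w * overlap * m          # w times the fraction of the sub-interval covered
        return total

    def cut_point(weights, target):
        # Leftmost x with measure(weights, 0, x) == target (invert the player's CDF).
        m = len(weights)
        acc = 0.0
        for j, w in enumerate(weights):
            if acc + w >= target:
                lo = j / m
                return lo + (target - acc) / (w * m) if w > 0 else lo + 1.0 / m
            acc += w
        return 1.0

    def cut_and_choose(val_a, val_b):
        # 'You cut, I choose': returns (A's value, B's value), each by his own measure.
        x = cut_point(val_a, 0.5)               # A cuts into two pieces worth 1/2 to A
        left_b = measure(val_b, 0.0, x)
        if left_b >= 0.5:                        # B takes whichever piece he considers bigger
            return measure(val_a, x, 1.0), left_b
        return measure(val_a, 0.0, x), measure(val_b, x, 1.0)

    # Example: A values the left end more, B values the right end more.
    a_val, b_val = cut_and_choose([0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4])
    assert a_val >= 0.5 - 1e-9 and b_val >= 0.5 - 1e-9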

Modeling n Player Game A protocol is a program: an interactive procedure with instructions like "cut a piece of the cake to these specifications" or "give piece X to player A", etc. The strategy of a player is an adaptive sequence of moves compatible with the protocol. A winning strategy for a player is one that ensures him at least 1/n of the cake (irrespective of the other players' strategies). A protocol is fair if every participant has a winning strategy.

The n Player Game We saw that there is a fair protocol for 2 players, which requires only 1 cut. Is there a fair division protocol for n players? How many cuts are needed?

Fair Protocol for n Players

    Fair(n):
      If n = 1, give the cake to player 1.
      Else, let the players be 1,..., n.
      Ask player 1 to cut a piece α that he is willing to take. Set i ← 1.
      for j = 2 to n-1 do
        1. Ask player j if i should get α.
        2. If j disagrees, tell j to cut a smaller piece out of α that he is willing to take.
        3. Call the new piece α. Set i ← j.
      Ask player n if he agrees that i should get α.
      If n agrees, give α to i; else give α to n.
      Call Fair(n-1) on the remaining cake, with the n-1 players who did not receive a piece.

Analysis In the protocol Fair(n), every player has a winning strategy: if asked to make a cut, make an honest cut (a piece worth exactly 1/n by your own measure), and disagree only if another player is about to be allotted a piece bigger than 1/n by your measure. Every player who follows this strategy ends up with at least 1/n of the cake.

Analysis How about the number of cuts made? With i players remaining, at most i - 1 cuts are made before one player gets a piece. So, the worst-case number of cuts is 1 + 2 + ... + (n - 1) = n(n - 1)/2 = Θ(n²). Fair, but an unattractive protocol: too many cuts. How about a fair protocol with n - 1 cuts?

Negative Result No protocol exists that is fair and makes exactly n - 1 cuts. The proof is by contradiction: suppose such a protocol exists. The proof uses two facts. Fact 1: All cuts divide the cake into pieces whose sizes are multiples of 1/n. This is true because n - 1 cuts divide the cake into exactly n pieces, and all players' measures are the same.

Negative Result Fact 2: If player B makes a cut, then the winning strategy of any other player A must allow A to avoid either of those pieces, if he so desires. Without this, A could be forced to take a piece < 1/n.

The Proof Consider the first time when all pieces become of size ≤ 2/n. Suppose A made the last cut. He either cut a 4/n-size piece in the middle (into two 2/n pieces), or a 3/n-size piece into a 1/n piece and a 2/n piece. We now argue that even if all players follow their winning strategies, someone will get a piece < 1/n. So, the protocol is not fair!

The Proof [Case 1: A cut a 3/n piece.] A decides to make an unfair cut: a (1 + ε)/n piece and a (2 - ε)/n piece, where 0 < ε < 1. The other players' winning strategies will force A to get his share out of this 3/n piece he cut (by Fact 2, they can avoid the pieces he created). We will argue that at least one of the players other than A will get < 1/n.

The Proof If A cuts again, he can cut either the (1 + ε)/n piece or the (2 - ε)/n piece into two. In either case at least one of the resulting pieces is < 1/n, so some other player gets a piece smaller than 1/n. If some B ≠ A makes the additional cut, then A can avoid the smaller pieces created by B (Fact 2), so again some other player gets < 1/n.

The Proof [Case 2: A cut a 4/n piece.] A makes a (2 + ε)/n piece and a (2 - ε)/n piece. The other players force A to get his share out of this 4/n piece he cut. Suppose A cuts again. [A cuts the (2 - ε)/n piece.] Even if he cuts it fairly, some player gets < 1/n of the cake. [A cuts the (2 + ε)/n piece.] A gets a 1/n piece himself, but he recreates Case 1: 3 players must share the (1 + ε)/n and (2 - ε)/n pieces.

The Proof Some B ≠ A makes the cut. A can ensure he doesn't get the < 1/n piece created by B (Fact 2). If B cuts the (2 - ε)/n piece, at least one player other than A gets a piece < 1/n. If B cuts the (2 + ε)/n piece, then A rejects the smaller piece, and again someone other than A gets a piece < 1/n.

O(n log n) Cut Fair Protocol Think of the cake as the [0, 1] interval. Ask each player to cut the cake into a left part and a right part in the ratio ⌊n/2⌋ : ⌈n/2⌉ by his own measure. Let c_i be the size of the left part created by player i. Sort the players so that c_1 ≤ c_2 ≤ ... ≤ c_n < 1.

O(n log n) Cut Fair Protocol The game is divided into two separate games. Players 1, 2,..., ⌊n/2⌋ play on the cake [0, c_⌊n/2⌋]. Players ⌊n/2⌋ + 1, ⌊n/2⌋ + 2,..., n play on the cake [c_⌊n/2⌋, 1]. The game is played recursively until one player remains; he gets the remaining piece.

Correctness of the Protocol Consider a player A. If A participates in a game division only once before he gets his share, then either n = 3 and A is player 1 (the player with the leftmost cut), or n = 2. In the first case, if he cuts in the ratio 1 : 2, then he gets [0, c_1], which is worth exactly 1/3 of the cake to him. In the latter case, if he is player 1, he gets [0, c_1], which is worth exactly 1/2 to him; otherwise, he is player 2 and he gets [c_1, 1], which is worth at least 1/2 by his measure.

Correctness of the Protocol If A participates in more than one game division, consider the first division. If, in the new sorted order, he is among players 1, 2,..., ⌊n/2⌋, then he plays in a game with ⌊n/2⌋ players on a piece of cake worth at least ⌊n/2⌋/n by his own measure (his own cut lies at or to the left of c_⌊n/2⌋). By induction, he gets at least a 1/⌊n/2⌋ fraction of that piece's value, i.e., at least 1/n. The same reasoning holds if he is among the second group of players. So, the protocol is fair.

Number of Cuts Let f(n) denote the number of cuts made by the protocol to divide the cake among n players. This satisfies the recurrence f(n) = n + f(⌊n/2⌋) + f(⌈n/2⌉), with f(1) = 0. This solves to f(n) ≤ 2n log n.
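
A quick numeric sanity check of this recurrence, as a small Python sketch (the helper name cuts is ours, written only to verify the bound):

    from functools import lru_cache
    from math import log2

    @lru_cache(maxsize=None)
    def cuts(n):
        # f(1) = 0, f(n) = n + f(floor(n/2)) + f(ceil(n/2))
        if n == 1:
            return 0
        return n + cuts(n // 2) + cuts(n - n // 2)

    # Far fewer cuts than the Theta(n^2) of Fair(n), and within the 2n log n bound.
    for n in (2, 10, 100, 1000):
        assert cuts(n) <= 2 * n * log2(n)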