Lecture 14: Greedy Algorithms

Outline of this Lecture (CLRS Section 16)

We have already seen two general problem-solving techniques: divide-and-conquer and dynamic programming. In this section we introduce a third basic technique: the greedy paradigm. A greedy algorithm for an optimization problem always makes the choice that looks best at the moment and adds it to the current subsolution. Examples already seen are Dijkstra's shortest-path algorithm and Prim's and Kruskal's MST algorithms. Greedy algorithms don't always yield optimal solutions, but when they do, they are usually the simplest and most efficient algorithms available.

The Knapsack Problem

We review the knapsack problem and see a greedy algorithm for the fractional knapsack. We also see that greedy does not work for the 0-1 knapsack, which must be solved using dynamic programming. A thief enters a store and sees the following items:

    Item:      A    B    C
    Cost:    100   10  120
    Weight:    2    2    3

His knapsack holds 4 pounds. What should he steal to maximize profit?

1. Fractional Knapsack Problem: the thief can take a fraction of an item.

   Solution: 2 pounds of item A (value 100) + 2 pounds of item C (value (2/3) * 120 = 80), for a total value of 180.

2. 0-1 Knapsack Problem: the thief can only take or leave an item; he cannot take a fraction.

   Solution: item C alone (3 pounds, value 120).

Greedy solution for the Fractional Knapsack

Sort the items by decreasing cost per pound:

    Item:          D    A    B    C
    Cost:        200  240  140  150
    Weight:        1    3    2    5
    Cost/weight: 200   80   70   30

If the knapsack holds 5 pounds, the greedy solution is: 1 pound of item D, 3 pounds of item A, and 1 pound of item B.

" Greedy solution for Fractional Knapsack General Algorithm. Given a set of item : weight cost Let be the problem of selecting items from, with weight limit, such that the resulting cost (value) is maximum. 1. Calculate! for " #%'&(. 2. Sort the items by decreasing. Let the sorted item sequence be #%'&) *, and the corresponding and weight be + and, respectively. 5

Greedy solution for the Fractional Knapsack

3. Let L be the remaining weight limit (initially L = W). In each iteration, we consider the item i at the head of the unselected list. If w_i <= L, we take all of item i, set L = L - w_i, and then consider the next unselected item. If w_i > L, we take only a fraction L/w_i (< 1) of item i, which weighs exactly L; then the algorithm is finished.

Observe that the algorithm may take a fraction of an item, and this can only be the last selected item. We claim that the total cost of this set of items is an optimal cost.
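To make steps 1-3 concrete, here is a minimal sketch of the greedy algorithm in Python. The function name and the representation of items as (cost, weight) pairs are illustrative assumptions, not part of the original slides; the example run uses the store instance from earlier.

    def fractional_knapsack(items, capacity):
        """Greedy fractional knapsack.

        items: list of (cost, weight) pairs.
        capacity: weight limit W.
        Returns (total_value, amounts), where amounts[i] is the
        weight of item i that was taken.
        """
        # Steps 1-2: order item indices by decreasing cost/weight ratio.
        order = sorted(range(len(items)),
                       key=lambda i: items[i][0] / items[i][1],
                       reverse=True)
        remaining = capacity          # L in the slides
        total_value = 0.0
        amounts = [0.0] * len(items)
        # Step 3: take whole items while they fit; split the last one.
        for i in order:
            cost, weight = items[i]
            if weight <= remaining:   # take all of item i
                amounts[i] = weight
                total_value += cost
                remaining -= weight
            else:                     # take the fraction remaining/weight
                amounts[i] = remaining
                total_value += cost * remaining / weight
                break                 # knapsack is full
        return total_value, amounts

    # Store example: A = (100, 2), B = (10, 2), C = (120, 3), capacity 4.
    value, amounts = fractional_knapsack([(100, 2), (10, 2), (120, 3)], 4)
    print(value, amounts)   # 180.0 [2.0, 0.0, 2.0]: 2 lbs of A, 2 lbs of C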

Correctness

Let O be an optimal solution of the problem P, and let G be the greedy solution, where the items are ordered according to the sequence of greedy choices. The trick of the proof is to show that there exists an optimal solution that also takes the greedy choice in each iteration.

The first step is to show that there exists an optimal solution that selects (a fraction of, or 1 unit of) item 1, our first greedy choice. There are two cases to consider.

Suppose G takes 1 unit of item 1 (which implies w_1 <= W). If O also takes 1 unit of item 1, then we are done. Suppose O does not take 1 unit of item 1. Then we take away weight w_1 of other items from O and put 1 unit of item 1 into it, yielding a new solution O'. Observe that O' has weight at most W (it satisfies the weight constraint). Moreover, since item 1 has the maximum ratio v_1 = c_1/w_1, O' is as good as O (or else there is a contradiction with O's optimality). Hence O' is also an optimal solution for P.

Correctness

On the other hand, suppose G takes a fraction W/w_1 of item 1 (which implies w_1 > W). If O also takes W/w_1 units of item 1, then we are done. Suppose O takes less than W/w_1 units of item 1 (O cannot take more, since W/w_1 units of item 1 already fill the knapsack). Then we take away weight of other items from O and put more of item 1 into it, yielding a new solution O'. Similarly, O' is as good as O (or else there is a contradiction). Hence there is an optimal solution for P that takes W/w_1 units of item 1.

Observe that once we have shown an optimal solution selects a fraction of item 1, we are done with the proof, because the greedy algorithm stops after taking a fractional item.
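The "as good as" claim in both cases is an exchange argument. A hedged rendering of the key inequality in LaTeX (the symbols delta and delta_i are introduced here for clarity and are not in the original slides):

\[
\mathrm{value}(O') - \mathrm{value}(O)
  = \frac{c_1}{w_1}\,\delta \;-\; \sum_{i \ge 2} \frac{c_i}{w_i}\,\delta_i
  \;\ge\; \frac{c_1}{w_1}\,\delta \;-\; \frac{c_1}{w_1}\sum_{i \ge 2}\delta_i
  \;=\; 0,
\]

where $\delta_i$ is the weight of item $i$ removed from $O$, $\delta = \sum_{i \ge 2} \delta_i$ is the weight of item 1 added in its place, and the inequality uses $c_1/w_1 \ge c_i/w_i$ for all $i$.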

Correctness

The second step is to show that the problem exhibits the optimal substructure property. A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.

In the fractional knapsack problem, we have shown that there is an optimal solution O that selects 1 unit of item 1. After we select item 1, the weight constraint decreases to W - w_1 and the item set becomes {2, ..., n}. Let P' be the fractional knapsack problem whose weight constraint is W - w_1 and whose item set is {2, ..., n}. Let O' = O - {item 1}. To prove the optimal substructure property, we need to show that O' is an optimal solution of P' (an optimal solution to the problem contains within it an optimal solution of the subproblem).

Correctness

Suppose, on the contrary, that O' is not an optimal solution of P'. Let Q' be an optimal solution of P', which is more valuable than O'. Let Q = Q' + {item 1}; observe that Q is a feasible selection for P. On the other hand, the value of O (an optimal solution for P) equals the value of O' plus c_1, which is less than the value of Q' plus c_1, i.e., the value of Q. Hence we have found a selection Q that is more valuable than the optimal solution O. A contradiction! Hence O' is an optimal solution for P'.

Therefore, after each greedy choice is made, we are left with a problem of the same form as the original problem. Since P' must be solved optimally, the same argument shows there exists an optimal solution for P' that selects the next greedy choice. Inductively, the greedy solution is an optimal solution.
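In symbols, the contradiction is the one-line chain (notation as above):

\[
\mathrm{value}(Q) = \mathrm{value}(Q') + c_1 > \mathrm{value}(O') + c_1 = \mathrm{value}(O),
\]

which contradicts the optimality of $O$ for $P$.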

Greedy solution for the 0-1 Knapsack Problem?

The 0-1 Knapsack Problem does not have a greedy solution! Example (knapsack holds 4 pounds):

    Item:          A    B    C
    Cost:        300  190  180
    Weight:        3    2    2
    Cost/weight: 100   95   90

Greedy takes item A (3 pounds, value 300) and then has no room for B or C. The solution is item B + item C (4 pounds, value 370).

Question: Suppose we try to prove that the greedy algorithm for the 0-1 knapsack problem is correct, following exactly the same line of argument as for the fractional knapsack problem. Of course, it must fail. Where is the problem in the proof?
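As a check, here is a small sketch contrasting the greedy choice with a dynamic-programming solution on this instance. The function names and the DP-over-weight formulation are illustrative assumptions; the slides only assert that the 0-1 knapsack must be solved by DP.

    def greedy_01(items, capacity):
        # Greedily take whole items by decreasing cost/weight ratio.
        order = sorted(range(len(items)),
                       key=lambda i: items[i][0] / items[i][1],
                       reverse=True)
        value = 0
        for i in order:
            cost, weight = items[i]
            if weight <= capacity:
                value += cost
                capacity -= weight
        return value

    def dp_01(items, capacity):
        # Classic 0-1 knapsack DP: best[w] = best value with weight limit w.
        best = [0] * (capacity + 1)
        for cost, weight in items:
            for w in range(capacity, weight - 1, -1):
                best[w] = max(best[w], best[w - weight] + cost)
        return best[capacity]

    items = [(300, 3), (190, 2), (180, 2)]   # (cost, weight) for A, B, C
    print(greedy_01(items, 4))  # 300: greedy takes only item A
    print(dp_01(items, 4))      # 370: optimal is B + C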