The Knapsack Problem. Decisions and State. Dynamic Programming Solution.
Introduction to Algorithms, Recitation 19. November 23, 2011


The Knapsack Problem

You find yourself in a vault chock full of valuable items. However, you only brought a knapsack of capacity S pounds, which means the knapsack will break down if you try to carry more than S pounds in it. The vault has n items, where item i weighs s_i pounds and can be sold for v_i dollars. You must choose which items to take in your knapsack so that you'll make as much money as possible after selling the items. The weights s_i are all integers.

Decisions and State

A solution to an instance of the Knapsack problem indicates which items should be added to the knapsack. The solution can be broken into n true/false decisions d_0 ... d_{n-1}: for 0 <= i <= n-1, d_i indicates whether item i will be taken into the knapsack.

In order to decide whether to add an item to the knapsack or not, we need to know if we have enough capacity left over. So the current state when making a decision must include the available capacity or, equivalently, the weight of the items that we have already added to the knapsack.

Dynamic Programming Solution

dp[i][j] is the maximum value that can be obtained by using a subset of the items i ... n-1 (the last n-i items) which weighs at most j pounds. When computing dp[i][j], we need to consider all the possible values of d_i (the decision at step i):

1. Add item i to the knapsack. In this case, we need to choose a subset of the items i+1 ... n-1 that weighs at most j - s_i pounds. Assuming we do that optimally, we'll obtain dp[i+1][j - s_i] value out of items i+1 ... n-1, so the total value will be v_i + dp[i+1][j - s_i].

2. Don't add item i to the knapsack, so we'll re-use the optimal solution for items i+1 ... n-1 that weighs at most j pounds. That answer is in dp[i+1][j].

We want to maximize our profits, so we'll choose the best possible outcome:

    dp[i][j] = max( dp[i+1][j],
                    dp[i+1][j - s_i] + v_i   if j >= s_i )

To wrap up the loose ends, we notice that dp[n][j] = 0 for 0 <= j <= S is a good base case, as the interval n ... n-1 contains no items, so there's nothing to add to the knapsack, which means the total sale value will be 0. The answer to our original problem can be found in dp[0][S].

The value in dp[i][j] depends on values dp[i+1][k] where k <= j, so a good topological sort is:

    dp[n][0], dp[n][1], ..., dp[n][S],
    dp[n-1][0], dp[n-1][1], ..., dp[n-1][S],
    ...,
    dp[0][0], dp[0][1], ..., dp[0][S]

This topological sort can be produced by the pseudo-code below.

KNAPSACKDPTOPSORT(n, S)
  for i in {n, n-1, ..., 0}
    for j in {0, 1, ..., S}
      print (i, j)

The full pseudo-code is straightforward to write, as it closely follows the topological sort and the dynamic programming recurrence.

KNAPSACK(n, S, s, v)
  for i in {n, n-1, ..., 0}
    for j in {0, 1, ..., S}
      if i == n
        dp[i][j] = 0            // base case
      else
        choices = []
        APPEND(choices, dp[i+1][j])
        if j >= s_i
          APPEND(choices, dp[i+1][j - s_i] + v_i)
        dp[i][j] = MAX(choices)
  return dp[0][S]

DAG Shortest-Path Solution

The Knapsack problem can be reduced to the single-source shortest paths problem on a DAG (directed acyclic graph). This formulation can help build the intuition for the dynamic programming solution. The state associated with each vertex is similar to the dynamic programming formulation: vertex (i, j) represents the state where we have already considered items 0 ... i-1, and the selected items weigh exactly j pounds. The DAG has n+1 layers (one layer for each value of i, 0 <= i <= n), and each layer consists of S+1 vertices (the possible total weights are 0 ... S). As shown in Figure 1, each vertex (i, j) has the following outgoing edges:

  Item i is not taken: an edge from (i, j) to (i+1, j) with weight 0.
  Item i is taken: an edge from (i, j) to (i+1, j + s_i) with weight -v_i, if j + s_i <= S.

The source vertex is (0, 0), representing an initial state where we haven't considered any item yet and our knapsack is empty. The destination is the vertex with the shortest path out of all the vertices (n, j), for 0 <= j <= S.

Alternatively, we can create a virtual source vertex s and connect it to all the vertices (0, j) for 0 <= j <= S, meaning that we can leave j pounds of capacity unused (the knapsack will end up weighing at most S - j pounds). In this case, the destination is the vertex (n, S). This approach is less intuitive, but matches the dynamic programming solution better.
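For concreteness, the KNAPSACK pseudo-code above can be transcribed into Python. This is a sketch; the function name and argument order are our own.

```python
def knapsack(n, S, s, v):
    """Maximum total value of a subset of items 0..n-1 with total weight <= S.

    s[i] is the (integer) weight of item i, and v[i] is its value.
    dp is filled in the topological order described above: layer i
    depends only on layer i + 1, so we iterate i from n - 1 down to 0.
    """
    dp = [[0] * (S + 1) for _ in range(n + 1)]  # dp[n][j] = 0 is the base case
    for i in range(n - 1, -1, -1):
        for j in range(S + 1):
            best = dp[i + 1][j]            # decision: don't take item i
            if j >= s[i]:                  # decision: take item i, if it fits
                best = max(best, dp[i + 1][j - s[i]] + v[i])
            dp[i][j] = best
    return dp[0][S]
```

For example, with capacity 4, weights [3, 2, 2], and values [5, 3, 3], the best choice is the two 2-pound items, for a total value of 6.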

Figure 1: Edges coming out of the vertex (2, 1) in a Knapsack problem instance where item 2 has weight 3 and value 5: a 0-weight edge to (3, 1) and a (-5)-weight edge to (3, 4). If item 2 is selected, the new total weight will be 1 + 3 = 4, and the total value will increase by 5. Edge weights are negative so that the shortest path will yield the maximum knapsack value.

Running Time

The dynamic programming solution to the Knapsack problem requires solving O(nS) sub-problems. The solution of one sub-problem depends on two other sub-problems, so it can be computed in O(1) time. Therefore, the solution's total running time is O(nS).

The DAG shortest-path solution creates a graph with O(nS) vertices, where each vertex has an out-degree of O(1), so there are O(nS) edges. The DAG shortest-path algorithm runs in O(V + E) time, so the solution's total running time is also O(nS). This is reassuring, given that we're performing the same computation.

The solution's running time is not polynomial in the input size. The next section explains the subtle difference between polynomial and pseudo-polynomial running times, and why it matters.
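The DAG formulation can also be sketched directly in Python. Because every edge goes from layer i to layer i+1, relaxing the vertices layer by layer is a valid topological order, which is all the DAG shortest-path algorithm needs. The function name and the array-based graph representation are our own.

```python
def knapsack_via_dag(n, S, s, v):
    """Knapsack as single-source shortest paths on a DAG (a sketch).

    Vertex (i, j): items 0..i-1 decided, selected items weigh exactly j.
    Edge (i, j) -> (i + 1, j) has weight 0 (skip item i);
    edge (i, j) -> (i + 1, j + s[i]) has weight -v[i] (take item i).
    Relaxing layer by layer follows a topological order of the DAG.
    """
    INF = float("inf")
    dist = [[INF] * (S + 1) for _ in range(n + 1)]
    dist[0][0] = 0                      # source: nothing considered, knapsack empty
    for i in range(n):
        for j in range(S + 1):
            if dist[i][j] == INF:
                continue                # vertex unreachable, nothing to relax
            # skip item i: 0-weight edge
            dist[i + 1][j] = min(dist[i + 1][j], dist[i][j])
            # take item i: negative weight turns max-value into min-distance
            if j + s[i] <= S:
                dist[i + 1][j + s[i]] = min(dist[i + 1][j + s[i]],
                                            dist[i][j] - v[i])
    # destination: the shortest path among all vertices (n, j)
    return -min(dist[n][j] for j in range(S + 1))
```

On the same instance as before (capacity 4, weights [3, 2, 2], values [5, 3, 3]), the shortest path has length -6, so the maximum value is 6, matching the dynamic programming solution.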

Polynomial Time vs. Pseudo-Polynomial Time

To further understand the difference between algorithms with polynomial and pseudo-polynomial running times, let's compare the performance of the dynamic programming solution to the Knapsack problem with the performance of Dijkstra's algorithm for solving the single-source shortest paths problem.

Dynamic Programming for Knapsack

The input for an instance of the Knapsack problem can be represented in a reasonably compact form as follows (see Figure 2):

  The number of items n, which can be represented using O(log n) bits.
  n item weights. We notice that item weights should be between 0 ... S, because we can ignore any item whose weight exceeds the knapsack capacity. This means that each weight can be represented using O(log S) bits, and all the weights will take up O(n log S) bits.
  n item values. Let V be the maximum value, so we can represent each value using O(log V) bits, and all the values will take up O(n log V) bits.

Figure 2: A compact representation of an instance of the Knapsack problem: the field n (log n bits), followed by the weights s_1 ... s_n (log S bits each) and the values v_1 ... v_n (log V bits each). The item weights s_i should be between 0 and S; an item whose weight exceeds the knapsack capacity (s_i > S) can be immediately discarded from the input, as it wouldn't fit the knapsack.

The total input size is O(log(n) + n(log S + log V)) = O(n(log S + log V)). Let b = log S and v = log V, so the input size is O(n(v + b)). The running time for the dynamic programming solution is O(nS) = O(n * 2^b). Does this make you uneasy? It should, and the next paragraph explains why.

In this course, we use asymptotic analysis to understand the changes in an algorithm's performance as its input size increases. The corresponding buzzword is "scale", as in "does algorithm X scale?". So, how does our Knapsack solution's runtime change if we double the input size? We can double the input size by doubling the number of items, so n' = 2n.
The running time is O(nS), so we can expect the running time to double. Linear scaling isn't too bad! However, we can also double the input size by doubling v and b, the number of bits required to represent the item values and weights. v doesn't show up in the running time, so let's study the impact of doubling the input size by doubling b. If we set b' = 2b, the O(n * 2^b) result of our algorithm analysis suggests that the running time will be squared: n * 2^{2b} = n * (2^b)^2. To drive this point home, Table 1 compares the solution's running time for two worst-case inputs of 6,400 bits against a baseline worst-case input of 3,200 bits.
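The scaling claim can be sanity-checked with a bit of arithmetic: a worst-case capacity is S = 2^b, and the operation count is roughly n * S. A short illustration of our own, using the same baseline of n = 100 items and b = 32 bits:

```python
def worst_case_ops(n, b):
    """Approximate operation count n * S for a worst-case capacity S = 2**b."""
    return n * 2 ** b

baseline = worst_case_ops(100, 32)   # 100 items, 32-bit weights
double_n = worst_case_ops(200, 32)   # double the input size via n
double_b = worst_case_ops(100, 64)   # double the input size via b

print(double_n // baseline)   # doubling n doubles the work: 2
print(double_b // baseline)   # doubling b multiplies the work by 2**32
```

Doubling n scales the work linearly, while doubling b multiplies it by a factor of 2^32, roughly four billion.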

Metric         Baseline         Double n         Double b
n              100 items        200 items        100 items
b              32 bits          32 bits          64 bits
Input size     3,200 bits       6,400 bits       6,400 bits
Worst-case S   2^32             2^32             2^64
Running time   100 * 2^32 ops   200 * 2^32 ops   100 * 2^64 ops
Input size     1x               2x               2x
Time           1x               2x               2^32 x (~4 * 10^9 x)

Table 1: The amounts of time required to solve some worst-case inputs to the Knapsack problem.

The dynamic programming solution to the Knapsack problem is a pseudo-polynomial algorithm, because the running time will not always scale linearly if the input size is doubled. Let's look at Dijkstra's algorithm, for comparison.

Dijkstra for Shortest Paths

Given a graph G with V vertices and E edges, Dijkstra's algorithm implemented with binary heaps can compute single-source shortest paths in O(E log V) time.(1) The input graph for an instance of the single-source shortest-paths problem can be represented in a reasonably compact form as follows (see Figure 3):

  The number of vertices V, which can be represented using O(log V) bits.
  The number of edges E, which can be represented using O(log E) bits.
  E edges, represented as a list of tuples (s_i, t_i, w_i): directed edge i goes from vertex s_i to vertex t_i and has weight w_i. We notice that s_i and t_i can be represented using O(log V) bits each, since they must address V vertices. Let the maximum weight be W, so that each w_i takes up O(log W) bits. It follows that all the edges take up O(E(log V + log W)) bits.

Figure 3: A compact representation of an instance of the single-source shortest-paths problem: the fields V (log V bits) and E (log E bits), followed by a tuple (s_i, t_i, w_i) for each edge i, where s_i and t_i are between 0 and V-1 (log V bits each) and w_i is the edge's weight (log W bits).

The total input size is O(log(V) + log(E) + E(2 log V + log W)) = O(E(log V + log W)). Let b = log W, so the input size is O(E(log V + b)).

(1) A Fibonacci heap-based implementation would have a running time of O(E + V log V).
Using that bound would make our analysis a bit more difficult. Furthermore, Fibonacci heaps are mostly of interest to theorists, as their high constant factors make them slow in practice.

Note that the graph representation above, though intuitive, is not suitable for running Dijkstra. However, the edge list representation can be used to build an adjacency list representation in O(V + E) time, so the representation choice does not impact the total running time of Dijkstra's algorithm, which is O(E log V).

Compare this result to the Knapsack solution's running time. In this case, the running time is polynomial (actually linear) in the number of bits required to represent the input. If we double the input size by doubling E, the running time will double. If we double the input size by doubling the width of the numbers used to represent V (in effect, we'd be squaring V, but not changing E), the running time will also double. If we double the input size by doubling b, the width of the numbers used to represent edge weights, the running time doesn't change. No matter how the input size doubles, there is no explosion in the running time.

Computation Model

Did that last paragraph make you uneasy? We just said that doubling the width of the numbers used to represent the edge weights doesn't change the running time at all. How can that be?! We're assuming the RAM computation model, where we can perform arithmetic operations on word-sized numbers in O(1) time. This simplifies our analysis, but the main result above (Dijkstra is polynomial, the Knapsack solution is pseudo-polynomial) holds in any reasonable (deterministic) computational model. You can convince yourself that, even if arithmetic operations take time proportional to their inputs' sizes (O(b), O(n), or O(log V), depending on the case), Dijkstra's running time is still polynomial in the input size, whereas the dynamic programming solution to the Knapsack problem takes an amount of time that is exponential in b.

Subset Sum

In the Subset Sum variant of the problem, we must decide whether some subset of the n gold bars, of weights s_0 ... s_{n-1}, weighs exactly S pounds. The decisions are the same as in Knapsack: either bar i goes into the knapsack, or it doesn't, and dp[i][j] is TRUE (1) if some subset of the bars i ... n-1 weighs exactly j pounds.

Either decision possibility can yield a solution, so dp[i][j] is TRUE if at least one of them results in a solution. Recall that OR is implemented using max in our Boolean logic. The recurrence (below) ends up being simpler than the one in the Knapsack solution!

    dp[i][j] = max( dp[i+1][j],
                    dp[i+1][j - s_i]   if j >= s_i )

The initial conditions for this problem are dp[n][0] = 1 (TRUE) and dp[n][j] = 0 (FALSE) for 1 <= j <= S. The interval n ... n-1 contains no items, so the corresponding knapsack is empty, which means the only achievable weight is 0. Just like in the Knapsack problem, the answer to the original problem is in dp[0][S]. The topological sort is identical to Knapsack's, and the pseudo-code has only a few small differences.

SUBSETSUM(n, S, s)
  for i in {n, n-1, ..., 0}
    for j in {0, 1, ..., S}
      if i == n                // initial conditions
        if j == 0
          dp[i][j] = 1
        else
          dp[i][j] = 0
      else
        choices = []
        APPEND(choices, dp[i+1][j])
        if j >= s_i
          APPEND(choices, dp[i+1][j - s_i])
        dp[i][j] = MAX(choices)
  return dp[0][S]

DAG Shortest-Path Solution

The DAG representation of the problem is also very similar to the Knapsack representation, and slightly simpler. A vertex (i, j) represents the possibility of finding a subset of the first i bars that weighs exactly j pounds. So, if there is a path to vertex (i, j) (the shortest path length is less than infinity), then the answer to sub-problem (i, j) is TRUE. The starting vertex is (0, 0), because we can only achieve a weight of 0 pounds using 0 items. The destination vertex is (n, S), because it maps to the goal of finding a subset of all the items that weighs exactly S pounds.

We only care about the existence of a path, so we can use BFS or DFS on the graph. Alternatively, we can assign the same weight of 1 to all edges and run the DAG shortest-paths algorithm. Both approaches yield the same running time, but the latter approach maps better to the dynamic programming solution.
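The SUBSETSUM pseudo-code above can be transcribed into Python as follows (a sketch; the function name is our own):

```python
def subset_sum(n, S, s):
    """True if some subset of bars 0..n-1 weighs exactly S pounds.

    Mirrors the SUBSETSUM pseudo-code: dp[i][j] is 1 exactly when some
    subset of bars i..n-1 weighs exactly j pounds; OR is computed via max.
    """
    dp = [[0] * (S + 1) for _ in range(n + 1)]
    dp[n][0] = 1                       # the empty suffix achieves only weight 0
    for i in range(n - 1, -1, -1):
        for j in range(S + 1):
            dp[i][j] = dp[i + 1][j]    # decision: skip bar i
            if j >= s[i]:              # decision: take bar i
                dp[i][j] = max(dp[i][j], dp[i + 1][j - s[i]])
    return dp[0][S] == 1
```

For example, with bars of weights [2, 3, 7], a total of 5 pounds is achievable (2 + 3), but 6 pounds is not.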

K-Sum

A further restriction of the Subset Sum problem is that the knapsack has K pockets, and exactly one gold bar fits in a pocket. You must fill up all the pockets, otherwise the gold bars will move around as you walk, and the noises will trigger the vault's alarm. The problem becomes: given n gold bars of weights s_0 ... s_{n-1}, can you select exactly K bars whose weights add up to exactly S?

Decisions and State

A solution to a problem instance looks the same as for the Knapsack problem, so we still have n Boolean decisions d_0 ... d_{n-1}. However, the knapsack structure has changed. In the original problem, the knapsack capacity placed a limit on the total weight of the items, so we needed to keep track of the total weight of the items that we had already added; that was our state. In the K-Sum problem, the knapsack also has slots. When deciding whether to add a gold bar to the knapsack or not, we need to know if there are any slots available, which is equivalent to knowing how many slots we have used up. So the problem state needs to track both the total weight of the bars in the knapsack and how many slots they take up. The number of slots used equals the number of bars in the knapsack.

Dynamic Programming Solution

dp[i][j][k] is TRUE (1) if there is a k-element subset of the gold bars i ... n-1 that weighs exactly j pounds. dp[i][j][k] is computed by considering all the possible values of d_i (the decision at step i, which is whether or not to include bar i):

1. Add bar i to the knapsack. In this case, we need to choose a (k-1)-bar subset of the bars i+1 ... n-1 that weighs exactly j - s_i pounds. dp[i+1][j - s_i][k-1] indicates whether such a subset exists.

2. Don't add bar i to the knapsack. In this case, the solution rests on a k-bar subset of the bars i+1 ... n-1 that weighs exactly j pounds. The answer to whether such a subset exists is in dp[i+1][j][k].
Either of the above avenues yields a solution, so dp[i][j][k] is TRUE if at least one of the decision possibilities results in a solution. The recurrence is below.

    dp[i][j][k] = max( dp[i+1][j][k],
                       dp[i+1][j - s_i][k-1]   if j >= s_i and k > 0 )

The initial conditions for this problem are dp[n][0][0] = 1 (TRUE) and dp[n][j][k] = 0 (FALSE) for every other pair (j, k) with 0 <= j <= S and 0 <= k <= K. The interval n ... n-1 contains no items, so the corresponding knapsack is empty, which means the only achievable state is a weight of 0 using 0 bars. The answer to the original problem is in dp[0][S][K]. A topological sort can be produced by the following pseudo-code.

DPTOPSORT(n, S, K)
  for i in {n, n-1, ..., 0}
    for j in {0, 1, ..., S}
      for k in {0, 1, ..., K}
        print (i, j, k)

Once again, the topological sort can be combined with the dynamic programming recurrence to obtain the full solution pseudo-code in a straightforward manner.

KSUM(n, K, S, s)
  for i in {n, n-1, ..., 0}
    for j in {0, 1, ..., S}
      for k in {0, 1, ..., K}
        if i == n                // initial conditions
          if j == 0 and k == 0
            dp[i][j][k] = 1
          else
            dp[i][j][k] = 0
        else
          choices = []
          APPEND(choices, dp[i+1][j][k])
          if j >= s_i and k > 0
            APPEND(choices, dp[i+1][j - s_i][k-1])
          dp[i][j][k] = MAX(choices)
  return dp[0][S][K]

DAG Shortest-Path Solution

The reduction to shortest paths in a DAG offers a different avenue that leads to the same solution, and helps build intuition. Let's start with the reduction for Subset Sum. We have a graph of vertices (i, j), and want a path from (0, 0) to (n, S) that picks up exactly K bars. Let's attempt to use edge weights to count the number of bars. Edges from (i, j) to (i+1, j) have a cost of 0, because they mean that bar i is not added to the knapsack. Edges from (i, j) to (i+1, j + s_i) have a cost of 1, as they imply that one bar is added to the knapsack, and therefore one more slot is occupied. Figure 4 shows an example.

Given the formulation above, we want to find a path from (0, 0) to (n, S) whose cost is exactly K. We can draw upon the intuition built from previous graph transformation problems and realize that we can represent the K-cost requirement by making K+1 copies of the graph, where each copy k is a layer that captures the paths whose cost is exactly k. Therefore, in the new graph, a vertex (i, j, k) indicates whether there is a k-cost path to vertex (i, j) in the Subset Sum problem graph. Each 0-cost edge from (i, j) to (i+1, j) in the old graph

maps to an edge from (i, j, k) to (i+1, j, k) in the new graph, one for each layer k. Each 1-cost edge from (i, j) to (i+1, j + s_i) maps to edges from (i, j, k) to (i+1, j + s_i, k+1). A comparison between Figures 4 and 5 illustrates the transformation.

Figure 4: Edges coming out of the vertex (2, 1) in a K-Sum problem instance where gold bar 2 has weight 3: a 0-cost edge to (3, 1) and a 1-cost edge to (3, 4). If bar 2 is added to the knapsack, the new total weight will be 1 + 3 = 4, and the number of occupied slots increases by 1.

Figure 5: Edges coming out of the vertex (2, 1, 1) in a K-Sum problem instance where gold bar 2 has weight 3: an edge to (3, 1, 1) and an edge to (3, 4, 2). If bar 2 is added to the knapsack, the new total weight will be 1 + 3 = 4, and the number of occupied slots increases by 1.

In the new graph, we want to find a path from vertex (0, 0, 0) to vertex (n, S, K). All edges have the same weight. Convince yourself that the DAG described above is equivalent to the dynamic programming formulation in the previous section.

Running Time

The dynamic programming solution solves O(KnS) sub-problems, and each sub-problem takes O(1) time to solve. The solution's total running time is O(KnS). The DAG has K+1 layers of O(nS) vertices (vertex count borrowed from the Knapsack problem), and K copies of the O(nS) edges in the Knapsack graph. Therefore, V = O(KnS) and E = O(KnS), and the solution's running time is O(V + E) = O(KnS).
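The KSUM pseudo-code can be transcribed into Python as follows (a sketch; the function name is our own):

```python
def k_sum(n, K, S, s):
    """True if exactly K of the bars 0..n-1 weigh exactly S pounds in total.

    A direct transcription of the KSUM pseudo-code: dp[i][j][k] is 1 when
    some k-bar subset of the bars i..n-1 weighs exactly j pounds.
    """
    dp = [[[0] * (K + 1) for _ in range(S + 1)] for _ in range(n + 1)]
    dp[n][0][0] = 1                    # empty suffix: weight 0 with 0 bars
    for i in range(n - 1, -1, -1):
        for j in range(S + 1):
            for k in range(K + 1):
                dp[i][j][k] = dp[i + 1][j][k]      # decision: skip bar i
                if j >= s[i] and k > 0:            # decision: take bar i
                    dp[i][j][k] = max(dp[i][j][k],
                                      dp[i + 1][j - s[i]][k - 1])
    return dp[0][S][K] == 1
```

For example, with bars of weights [2, 3, 7], exactly two bars can weigh 5 pounds (2 + 3), but no single bar weighs 5 pounds.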

On your own: Multiset K-Sum

How does the previous problem change if there is an endless supply of copies of each bar, so that, effectively, each bar can be added to the knapsack more than once? Compare the O(nKS) solution to the O(nk^2) solution in Quiz 1 (Problem 6: Shopping Madness).


9.3 Solving Quadratic Equations by Using the Quadratic Formula 9.3 OBJECTIVES 1. Solve a quadratic equation by using the quadratic formula 2. Determine the nature of the solutions of a quadratic equation

1 Representing Graphs With Arrays

Lecture 0 Shortest Weighted Paths I Parallel and Sequential Data Structures and Algorithms, 5-20 (Spring 202) Lectured by Margaret Reid-Miller 6 February 202 Today: - Representing Graphs with Arrays -

Persistent Data Structures

6.854 Advanced Algorithms Lecture 2: September 9, 2005 Scribes: Sommer Gentry, Eddie Kohler Lecturer: David Karger Persistent Data Structures 2.1 Introduction and motivation So far, we ve seen only ephemeral

Homework 15 Solutions

PROBLEM ONE (Trees) Homework 15 Solutions 1. Recall the definition of a tree: a tree is a connected, undirected graph which has no cycles. Which of the following definitions are equivalent to this definition

NUMBERING SYSTEMS C HAPTER 1.0 INTRODUCTION 1.1 A REVIEW OF THE DECIMAL SYSTEM 1.2 BINARY NUMBERING SYSTEM

12 Digital Principles Switching Theory C HAPTER 1 NUMBERING SYSTEMS 1.0 INTRODUCTION Inside today s computers, data is represented as 1 s and 0 s. These 1 s and 0 s might be stored magnetically on a disk,

5.1 Bipartite Matching

CS787: Advanced Algorithms Lecture 5: Applications of Network Flow In the last lecture, we looked at the problem of finding the maximum flow in a graph, and how it can be efficiently solved using the Ford-Fulkerson

Final Exam, Spring 2007

10-701 Final Exam, Spring 2007 1. Personal info: Name: Andrew account: E-mail address: 2. There should be 16 numbered pages in this exam (including this cover sheet). 3. You can use any material you brought:

Page 1. CSCE 310J Data Structures & Algorithms. CSCE 310J Data Structures & Algorithms. P, NP, and NP-Complete. Polynomial-Time Algorithms

CSCE 310J Data Structures & Algorithms P, NP, and NP-Complete Dr. Steve Goddard goddard@cse.unl.edu CSCE 310J Data Structures & Algorithms Giving credit where credit is due:» Most of the lecture notes

Memoization/Dynamic Programming. The String reconstruction problem. CS125 Lecture 5 Fall 2016

CS125 Lecture 5 Fall 2016 Memoization/Dynamic Programming Today s lecture discusses memoization, which is a method for speeding up algorithms based on recursion, by using additional memory to remember

CSE 100: HASHING, BOGGLE

CSE 100: HASHING, BOGGLE Probability of Collisions If you have a hash table with M slots and N keys to insert in it, then the probability of at least 1 collision is: 2 Hashtable collisions and the "birthday

2.3 Scheduling jobs on identical parallel machines

2.3 Scheduling jobs on identical parallel machines There are jobs to be processed, and there are identical machines (running in parallel) to which each job may be assigned Each job = 1,,, must be processed

GRAPH THEORY and APPLICATIONS. Trees

GRAPH THEORY and APPLICATIONS Trees Properties Tree: a connected graph with no cycle (acyclic) Forest: a graph with no cycle Paths are trees. Star: A tree consisting of one vertex adjacent to all the others.

Analysis of Computer Algorithms. Algorithm. Algorithm, Data Structure, Program

Analysis of Computer Algorithms Hiroaki Kobayashi Input Algorithm Output 12/13/02 Algorithm Theory 1 Algorithm, Data Structure, Program Algorithm Well-defined, a finite step-by-step computational procedure

Bicolored Shortest Paths in Graphs with Applications to Network Overlay Design

Bicolored Shortest Paths in Graphs with Applications to Network Overlay Design Hongsik Choi and Hyeong-Ah Choi Department of Electrical Engineering and Computer Science George Washington University Washington,

Discrete Mathematics, Chapter 3: Algorithms

Discrete Mathematics, Chapter 3: Algorithms Richard Mayr University of Edinburgh, UK Richard Mayr (University of Edinburgh, UK) Discrete Mathematics. Chapter 3 1 / 28 Outline 1 Properties of Algorithms

Lecture 3: Linear Programming Relaxations and Rounding

Lecture 3: Linear Programming Relaxations and Rounding 1 Approximation Algorithms and Linear Relaxations For the time being, suppose we have a minimization problem. Many times, the problem at hand can

Data Structures Fibonacci Heaps, Amortized Analysis

Chapter 4 Data Structures Fibonacci Heaps, Amortized Analysis Algorithm Theory WS 2012/13 Fabian Kuhn Fibonacci Heaps Lacy merge variant of binomial heaps: Do not merge trees as long as possible Structure:

Lecture 7: Approximation via Randomized Rounding

Lecture 7: Approximation via Randomized Rounding Often LPs return a fractional solution where the solution x, which is supposed to be in {0, } n, is in [0, ] n instead. There is a generic way of obtaining

Computer Algorithms. NP-Complete Problems. CISC 4080 Yanjun Li

Computer Algorithms NP-Complete Problems NP-completeness The quest for efficient algorithms is about finding clever ways to bypass the process of exhaustive search, using clues from the input in order

Sample Problems in Discrete Mathematics

Sample Problems in Discrete Mathematics This handout lists some sample problems that you should be able to solve as a pre-requisite to Computer Algorithms Try to solve all of them You should also read

Chapter 15: Dynamic Programming

Chapter 15: Dynamic Programming Dynamic programming is a general approach to making a sequence of interrelated decisions in an optimum way. While we can describe the general characteristics, the details

1.1 Solving Inequalities

2 1.1. Solving Inequalities www.ck12.org 2.1 1.1 Solving Inequalities Learning Objectives Write and graph inequalities in one variable on a number line. Solve inequalities using addition and subtraction.

Lecture 1: Course overview, circuits, and formulas

Lecture 1: Course overview, circuits, and formulas Topics in Complexity Theory and Pseudorandomness (Spring 2013) Rutgers University Swastik Kopparty Scribes: John Kim, Ben Lund 1 Course Information Swastik

Question 1: How do you maximize the revenue or profit for a business?

Question 1: How do you maximize the revenue or profit for a business? For a business to optimize its total revenue, we need to find the production level or unit price that maximizes the total revenue function.

Dynamic Programming. Applies when the following Principle of Optimality

Dynamic Programming Applies when the following Principle of Optimality holds: In an optimal sequence of decisions or choices, each subsequence must be optimal. Translation: There s a recursive solution.

Graphs and Network Flows IE411 Lecture 1

Graphs and Network Flows IE411 Lecture 1 Dr. Ted Ralphs IE411 Lecture 1 1 References for Today s Lecture Required reading Sections 17.1, 19.1 References AMO Chapter 1 and Section 2.1 and 2.2 IE411 Lecture

Secondly, you may have noticed that the vertex is directly in the equation written in this new form.

5.5 Completing the Square for the Vertex Having the zeros is great, but the other key piece of a quadratic function is the vertex. We can find the vertex in a couple of ways, but one method we ll explore

The Taxman Game. Robert K. Moniot September 5, 2003

The Taxman Game Robert K. Moniot September 5, 2003 1 Introduction Want to know how to beat the taxman? Legally, that is? Read on, and we will explore this cute little mathematical game. The taxman game

Introduction to LAN/WAN. Network Layer

Introduction to LAN/WAN Network Layer Topics Introduction (5-5.1) Routing (5.2) (The core) Internetworking (5.5) Congestion Control (5.3) Network Layer Design Isues Store-and-Forward Packet Switching Services

Linear Programming Notes VII Sensitivity Analysis

Linear Programming Notes VII Sensitivity Analysis 1 Introduction When you use a mathematical model to describe reality you must make approximations. The world is more complicated than the kinds of optimization

Analysis of Algorithms, I

Analysis of Algorithms, I CSOR W4231.002 Eleni Drinea Computer Science Department Columbia University Thursday, February 26, 2015 Outline 1 Recap 2 Representing graphs 3 Breadth-first search (BFS) 4 Applications

For the common dynamic programming problems, there is a systematic way of searching for a solution that consists of repeating the following steps:

Lecture 8 Solving Dynamic Programming Problems For the common dynamic programming problems, there is a systematic way of searching for a solution that consists of repeating the following steps: 1. Suggest

Discrete Optimization

Discrete Optimization [Chen, Batson, Dang: Applied integer Programming] Chapter 3 and 4.1-4.3 by Johan Högdahl and Victoria Svedberg Seminar 2, 2015-03-31 Todays presentation Chapter 3 Transforms using

Simplified External memory Algorithms for Planar DAGs. July 2004

Simplified External Memory Algorithms for Planar DAGs Lars Arge Duke University Laura Toma Bowdoin College July 2004 Graph Problems Graph G = (V, E) with V vertices and E edges DAG: directed acyclic graph

Algorithms. Theresa Migler-VonDollen CMPS 5P

Algorithms Theresa Migler-VonDollen CMPS 5P 1 / 32 Algorithms Write a Python function that accepts a list of numbers and a number, x. If x is in the list, the function returns the position in the list

CHAPTER 3 Numbers and Numeral Systems

CHAPTER 3 Numbers and Numeral Systems Numbers play an important role in almost all areas of mathematics, not least in calculus. Virtually all calculus books contain a thorough description of the natural,

5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1

5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 General Integer Linear Program: (ILP) min c T x Ax b x 0 integer Assumption: A, b integer The integrality condition

Lecture 3: Summations and Analyzing Programs with Loops

high school algebra If c is a constant (does not depend on the summation index i) then ca i = c a i and (a i + b i )= a i + b i There are some particularly important summations, which you should probably

Discuss the size of the instance for the minimum spanning tree problem.

3.1 Algorithm complexity The algorithms A, B are given. The former has complexity O(n 2 ), the latter O(2 n ), where n is the size of the instance. Let n A 0 be the size of the largest instance that can

Algorithms and Data Structures

Algorithm Analysis Page 1 BFH-TI: Softwareschule Schweiz Algorithm Analysis Dr. CAS SD01 Algorithm Analysis Page 2 Outline Course and Textbook Overview Analysis of Algorithm Pseudo-Code and Primitive Operations

Chapter There are non-isomorphic rooted trees with four vertices. Ans: 4.

Use the following to answer questions 1-26: In the questions below fill in the blanks. Chapter 10 1. If T is a tree with 999 vertices, then T has edges. 998. 2. There are non-isomorphic trees with four

COMP 250 Fall 2012 lecture 2 binary representations Sept. 11, 2012

Binary numbers The reason humans represent numbers using decimal (the ten digits from 0,1,... 9) is that we have ten fingers. There is no other reason than that. There is nothing special otherwise about

The Graphical Method: An Example

The Graphical Method: An Example Consider the following linear program: Maximize 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2 0, where, for ease of reference,

Helvetic Coding Contest 2016

Helvetic Coding Contest 6 Solution Sketches July, 6 A Collective Mindsets Author: Christian Kauth A Strategy: In a bottom-up approach, we can determine how many brains a zombie of a given rank N needs

CMPS 102 Solutions to Homework 1

CMPS 0 Solutions to Homework Lindsay Brown, lbrown@soe.ucsc.edu September 9, 005 Problem..- p. 3 For inputs of size n insertion sort runs in 8n steps, while merge sort runs in 64n lg n steps. For which