CSCE 310J Data Structures & Algorithms: Dynamic Programming and the 0-1 Knapsack Problem (Review)



CSCE 310J Data Structures & Algorithms
Dynamic Programming: the 0-1 Knapsack Problem
Dr. Steve Goddard
goddard@cse.unl.edu
http://www.cse.unl.edu/~goddard/courses/csce310j

Giving credit where credit is due:
» Most of the slides for this lecture are based on slides created by Dr. David Luebke, University of Virginia.
» Some slides are based on lecture notes created by Dr. Chuck Cusack, UNL.
» I have modified them and added new slides.

Dynamic Programming
Another strategy for designing algorithms is dynamic programming:
» A metatechnique, not an algorithm (like divide & conquer)
» The word "programming" is historical and predates computer programming
Use it when the problem breaks down into recurring small subproblems.

Dynamic programming is used when the solution can be recursively described in terms of solutions to subproblems (optimal substructure). The algorithm finds solutions to subproblems and stores them in memory for later use. This is more efficient than brute-force methods, which solve the same subproblems over and over again.

Summarizing the Concept of Dynamic Programming
Basic idea:
» Optimal substructure: the optimal solution to the problem consists of optimal solutions to subproblems
» Overlapping subproblems: few subproblems in total, many recurring instances of each
» Solve bottom-up, building a table of solved subproblems that are used to solve larger ones
Variations:
» The table could be 3-dimensional, triangular, a tree, etc.

Knapsack Problem (Review)
Given some items, pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that we can carry is no more than some fixed number W, so we must consider the weights of items as well as their values.
[Slide table: example items with their weights and values.]

Knapsack Problem
There are two versions of the problem:
1. The 0-1 knapsack problem. Items are indivisible; you either take an item or not. Solved with dynamic programming.
2. The fractional knapsack problem. Items are divisible: you can take any fraction of an item. Solved with a greedy algorithm. We have already seen this version.

0-1 Knapsack Problem
Given a knapsack with maximum capacity W, and a set S consisting of n items. Each item i has some weight w_i and benefit value b_i (all w_i, b_i, and W are integer values).
Problem: how do we pack the knapsack to achieve the maximum total value of packed items?

The problem, in other words, is to find

    max Σ_{i ∈ T} b_i   subject to   Σ_{i ∈ T} w_i ≤ W

The problem is called a "0-1" problem because each item must be entirely accepted or rejected. In the fractional knapsack problem, by contrast, we can take fractions of items.

0-1 Knapsack Problem: Brute-Force Approach
Let's first solve this problem with a straightforward algorithm. Since there are n items, there are 2^n possible combinations of items. We go through all combinations and find the one with maximum value and total weight less than or equal to W. The running time will be O(2^n).

Can we do better? Yes, with an algorithm based on dynamic programming. We need to carefully identify the subproblems. Let's try this: if the items are labeled 1..n, then a subproblem would be to find an optimal solution for S_k = {items labeled 1, 2, .., k}.
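The brute-force approach described above can be sketched directly: enumerate all 2^n subsets and keep the best one that fits. This is a minimal sketch, not part of the original slides; the item data matches the example used later in the lecture.

```python
from itertools import combinations

def knapsack_brute_force(items, W):
    """Try all 2^n subsets of (weight, benefit) pairs; return the best value."""
    n = len(items)
    best = 0
    for r in range(n + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, b in subset)
            value = sum(b for w, b in subset)
            if weight <= W and value > best:
                best = value
    return best

# Items as (weight, benefit) pairs, capacity W = 5
print(knapsack_brute_force([(2, 3), (3, 4), (4, 5), (5, 6)], 5))  # -> 7
```

Even for modest n this enumeration is hopeless (n = 40 already means about a trillion subsets), which is what motivates the dynamic programming formulation.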

Defining a Subproblem
If the items are labeled 1..n, then a subproblem would be to find an optimal solution for S_k = {items labeled 1, 2, .., k}. This is a reasonable subproblem definition. The question is: can we describe the final solution (S_n) in terms of the subproblems (S_k)? Unfortunately, we can't do that.

[Slide example: a small table of item weights and benefits shows that the optimal subset for S_k is not necessarily contained in the optimal subset for S_{k+1}.]

Defining a Subproblem (continued)
As we have seen, the solution for one S_k is not always part of the solution for the next. So our definition of a subproblem is flawed and we need another one! Let's add another parameter: w, which will represent the exact weight for each subset of items. The subproblem then will be to compute B[k, w].

Recursive Formula for Subproblems
    B[k, w] = B[k-1, w]                                  if w_k > w
    B[k, w] = max{ B[k-1, w], B[k-1, w - w_k] + b_k }    otherwise

It means that the best subset of S_k that has total weight w is either:
1) the best subset of S_{k-1} that has total weight w, or
2) the best subset of S_{k-1} that has total weight w - w_k, plus item k.

The best subset of S_k that has total weight w either contains item k or not.
First case: w_k > w. Item k can't be part of the solution, since if it were, the total weight would be greater than w, which is unacceptable.
Second case: w_k ≤ w. Then item k can be in the solution, and we choose the case with the greater value.

0-1 Knapsack Algorithm
    for w = 0 to W
        B[0, w] = 0
    for i = 1 to n
        B[i, 0] = 0
    for i = 1 to n
        for w = 1 to W
            if w_i ≤ w                               // item i can fit
                if b_i + B[i-1, w - w_i] > B[i-1, w]
                    B[i, w] = b_i + B[i-1, w - w_i]
                else
                    B[i, w] = B[i-1, w]
            else
                B[i, w] = B[i-1, w]                  // w_i > w
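The pseudocode above translates almost line for line into runnable code. The following sketch stores the items in ordinary 0-indexed lists while keeping the slides' 1-indexed B[i, w] table:

```python
def knapsack_dp(weights, benefits, W):
    """Fill B[i][w] = best value using the first i items with capacity w."""
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]  # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        wi, bi = weights[i - 1], benefits[i - 1]
        for w in range(1, W + 1):
            if wi <= w and bi + B[i - 1][w - wi] > B[i - 1][w]:
                B[i][w] = bi + B[i - 1][w - wi]   # item i is in the best subset
            else:
                B[i][w] = B[i - 1][w]             # item i is left out
    return B

B = knapsack_dp([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(B[4][5])  # -> 7
```

Returning the whole table rather than just B[n][W] keeps the information needed later to recover which items were chosen.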

Running Time
    for w = 0 to W
        B[0, w] = 0                  // O(W)
    for i = 1 to n
        B[i, 0] = 0                  // O(n)
    for i = 1 to n                   // repeat n times
        for w = 1 to W               // O(W)
            < the rest of the code >

What is the running time of this algorithm? O(n*W). Remember that the brute-force algorithm takes O(2^n).

Example
Let's run our algorithm on the following data:
n = 4 (# of elements)
W = 5 (max weight)
Elements (weight, benefit): (2,3), (3,4), (4,5), (5,6)

Example (continued)
Row 0 and column 0 of the table B are initialized to 0. Then row i = 1 is filled using item 1 (weight 2, benefit 3): the item does not fit at w = 1, and it contributes benefit 3 for every w from 2 to 5, so row 1 becomes 0, 0, 3, 3, 3, 3.

Example (continued)
Row i = 2 is filled using item 2 (weight 3, benefit 4). For w = 1 and 2 the item does not fit, so the values are copied from row 1. For w = 3 and 4, taking item 2 (benefit 4) beats the previous best of 3. For w = 5, item 2 plus item 1 gives 4 + 3 = 7. Row 2 becomes 0, 0, 3, 4, 4, 7.

Example (continued)
Rows i = 3 and i = 4 are filled the same way. With item 3 (weight 4, benefit 5), the only improvement is at w = 4, where 5 beats 4, so row 3 becomes 0, 0, 3, 4, 5, 7. With item 4 (weight 5, benefit 6), no entry improves: at w = 5, taking item 4 alone gives only 6 < 7.

Example (continued)
The completed table:

    i\w   0   1   2   3   4   5
    0     0   0   0   0   0   0
    1     0   0   3   3   3   3
    2     0   0   3   4   4   7
    3     0   0   3   4   5   7
    4     0   0   3   4   5   7

The maximal value is B[4, 5] = 7.

Comments
This algorithm only finds the maximum possible value that can be carried in the knapsack,
» i.e., the value in B[n, W].
To know the items that make up this maximum value, an addition to this algorithm is necessary.

How to Find the Actual Knapsack Items
All of the information we need is in the table: B[n, W] is the maximal value of items that can be placed in the knapsack. Let i = n and k = W:

    while i, k > 0
        if B[i, k] ≠ B[i-1, k] then
            mark the i-th item as in the knapsack
            i = i-1, k = k - w_i
        else
            i = i-1       // assume the i-th item is not in the knapsack

Finding the Items
Start at i = 4, k = 5. Since B[4, 5] = B[3, 5] = 7, item 4 is not in the knapsack; set i = 3. Since B[3, 5] = B[2, 5] = 7, item 3 is not in the knapsack either; set i = 2.
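The traceback loop above can be sketched as a function over the filled table. The table below is the completed example table; find_items is a hypothetical helper name, not from the original slides.

```python
def find_items(B, weights):
    """Walk back from B[n][W]; a changed row value means that item was taken."""
    i, k = len(weights), len(B[0]) - 1
    taken = []
    while i > 0 and k > 0:
        if B[i][k] != B[i - 1][k]:   # item i is in the knapsack
            taken.append(i)          # items are numbered 1..n as in the slides
            k -= weights[i - 1]
        i -= 1
    return sorted(taken)

# Completed table for items (2,3), (3,4), (4,5), (5,6) with W = 5
B = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 3, 3, 3, 3],
    [0, 0, 3, 4, 4, 7],
    [0, 0, 3, 4, 5, 7],
    [0, 0, 3, 4, 5, 7],
]
print(find_items(B, [2, 3, 4, 5]))  # -> [1, 2]
```

The traceback visits at most one cell per row, so recovering the items adds only O(n) work on top of the O(n*W) table fill.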

Finding the Items (continued)
Since B[2, 5] = 7 ≠ B[1, 5] = 3, item 2 is marked as in the knapsack; set i = 1 and k = 5 - w_2 = 2. Since B[1, 2] = 3 ≠ B[0, 2] = 0, item 1 is marked as in the knapsack as well. The optimal knapsack should contain {1, 2}.

Review: The Knapsack Problem and Optimal Substructure
Both variations exhibit optimal substructure. To show this for the 0-1 problem, consider the most valuable load weighing at most W pounds.
» If we remove item j from the load, what do we know about the remaining load?
» A: the remainder must be the most valuable load weighing at most W - w_j that the thief could take, excluding item j.

Solving the Knapsack Problem
The optimal solution to the fractional knapsack problem can be found with a greedy algorithm.
» Do you recall how?
» Greedy strategy: take items in order of dollars/pound.
The optimal solution to the 0-1 problem cannot be found with the same greedy strategy.
» Example: items weighing 10, 20, and 30 pounds, and a knapsack that can hold 50 pounds. Suppose item 2 is worth $100. Assign values to the other items so that the greedy strategy will fail.
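One value assignment that completes the exercise above is $60 and $120 for the 10- and 30-pound items (illustrative figures, not given in the slides). The sketch below shows the greedy strategy losing to the optimal 0-1 load:

```python
# Capacity 50; items as (weight, value). Greedy by dollars/pound fails here.
items = [(10, 60), (20, 100), (30, 120)]
W = 50

# Greedy: take whole items in order of decreasing value density.
remaining, greedy_value = W, 0
for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
    if w <= remaining:
        remaining -= w
        greedy_value += v

print(greedy_value)  # -> 160 (items 1 and 2)
# The optimal 0-1 load is items 2 and 3: 100 + 120 = 220 > 160.
```

Greedy grabs the densest item (6 dollars/pound) first, which blocks the heavier but jointly more valuable pair; with fractions allowed, the same densities would lead greedy to the true optimum.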

The Knapsack Problem: Greedy vs. Dynamic
The fractional problem can be solved greedily. The 0-1 problem can be solved with a dynamic programming approach.

Memoization
Memoization is another way to deal with overlapping subproblems in dynamic programming.
» After computing the solution to a subproblem, store it in a table.
» Subsequent calls just do a table lookup.
With memoization, we implement the algorithm recursively:
» If we encounter a subproblem we have seen, we look up the answer.
» If not, we compute the solution and add it to the list of subproblems we have seen.
Memoization is most useful when the algorithm is easiest to implement recursively, especially if we do not need solutions to all subproblems.

Conclusion
Dynamic programming is a useful technique for solving certain kinds of problems. When the solution can be recursively described in terms of partial solutions, we can store these partial solutions and re-use them as necessary (memoization). Running time of the dynamic programming algorithm vs. the naïve algorithm:
» 0-1 Knapsack problem: O(W*n) vs. O(2^n)
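The memoized, top-down version described above can be sketched with the same recurrence as the table version; here Python's functools.lru_cache plays the role of the lookup table (a sketch, not the slides' own code):

```python
from functools import lru_cache

def knapsack_memo(weights, benefits, W):
    """Top-down 0-1 knapsack: compute B(k, w) only for subproblems we reach."""
    @lru_cache(maxsize=None)
    def B(k, w):
        if k == 0 or w == 0:
            return 0
        if weights[k - 1] > w:                  # item k can't fit
            return B(k - 1, w)
        return max(B(k - 1, w),                 # leave item k out
                   B(k - 1, w - weights[k - 1]) + benefits[k - 1])
    return B(len(weights), W)

print(knapsack_memo([2, 3, 4, 5], [3, 4, 5, 6], 5))  # -> 7
```

Unlike the bottom-up loop, this only fills the (k, w) cells that the recursion actually visits, which is exactly the advantage memoization offers when not all subproblems are needed.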