Fundamental Techniques: The Greedy Method. Fractional Knapsack Problem


Fundamental Techniques

There are some algorithmic tools that are quite specialised: they are good for the problems they are intended to solve, but they are not very versatile. There are also more fundamental (general) algorithmic tools that can be applied to a wide variety of data structure and algorithm design problems.

The Greedy Method

An optimisation problem (OP) is a problem that involves searching through a set of configurations to find one that minimises or maximises an objective function defined on these configurations. The greedy method solves a given OP by going through a sequence of (feasible) choices. The sequence starts from a well-understood starting configuration, and then iteratively makes the decision that seems best from all those that are currently possible.

The greedy approach does not always lead to an optimal solution. The problems that do have a greedy solution are said to possess the greedy-choice property. The greedy approach is also used in the context of hard (difficult to solve) problems in order to generate approximate solutions.

Fractional Knapsack Problem

In the fractional knapsack problem we are given a set S of n items, s.t. each item i has a positive benefit b_i and a positive weight w_i, and we wish to find the maximum-benefit subset that doesn't exceed a given weight W. We are also allowed to take arbitrary fractions of each item.

I.e., we can take an amount x_i of each item i such that 0 ≤ x_i ≤ w_i and Σ_{i∈S} x_i ≤ W. The total benefit of the items taken is determined by the objective function Σ_{i∈S} b_i (x_i / w_i).

In the solution we use a heap-based priority queue (PQ) to store the items of S, where the key of each item is its value index b_i / w_i. With the PQ, each greedy choice, which removes an item with the greatest value index, takes O(log n) time, so the fractional knapsack algorithm can be implemented in O(n log n) time. The fractional knapsack problem satisfies the greedy-choice property, hence:

Thm: Given an instance of a fractional knapsack problem with a set S of n items, we can construct a maximum-benefit subset of S, allowing for fractional amounts, that has a total weight W in O(n log n) time.
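As a concrete illustration, here is a minimal Python sketch of this greedy algorithm (the function name and the (benefit, weight) item representation are ours, not from the slides); it uses a heap keyed on the negated value index so that the item with the greatest benefit per unit weight is popped first:

```python
import heapq

def fractional_knapsack(items, W):
    """items: list of (benefit, weight) pairs; W: knapsack capacity.
    Returns the maximum total benefit, taking fractions where needed."""
    # heapq is a min-heap, so negate the value index b/w to pop
    # the item with the greatest benefit-per-weight first.
    heap = [(-b / w, b, w) for (b, w) in items]
    heapq.heapify(heap)                      # O(n)
    total_benefit = 0.0
    remaining = W
    while heap and remaining > 0:
        _, b, w = heapq.heappop(heap)        # O(log n) per greedy choice
        amount = min(w, remaining)           # take as much of the item as fits
        total_benefit += b * (amount / w)    # proportional share of the benefit
        remaining -= amount
    return total_benefit

# Example: capacity 10; items are taken in order of value index b/w.
print(fractional_knapsack([(60, 5), (50, 10), (140, 7)], 10))  # -> 176.0
```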

Task Scheduling

Suppose we are given a set T of n tasks, s.t. each task i has a start time s_i and a completion time f_i. Each task has to be performed on a machine, and each machine can execute only one task at a time. Two tasks i and j are non-conflicting if f_i ≤ s_j or f_j ≤ s_i. Two tasks can be executed on the same machine only if they are non-conflicting.

The task scheduling problem is to schedule all the tasks in T on the fewest machines possible in a non-conflicting way.

Task Scheduling (algorithm)

In the algorithm TaskSchedule, we begin with no machines and consider the tasks in a greedy fashion, ordered by their start times. For each task i, if we have a machine that can handle task i, then we schedule i on that machine. Otherwise, we allocate a new machine, schedule i on it, and repeat this greedy selection process until we have considered all the tasks in T.
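A minimal Python sketch of this strategy (the slides' TaskSchedule pseudocode is not in the transcription, so names are illustrative). It keeps a min-heap of machine finish times: a task is placed on an existing machine whenever that machine's last task finishes by the new task's start time, which is exactly the non-conflict condition f_i ≤ s_j:

```python
import heapq

def task_schedule(tasks):
    """tasks: list of (start, finish) pairs. Returns the number of
    machines used by the greedy schedule (which is optimal)."""
    machines = []                              # heap of finish times, one per machine
    for s, f in sorted(tasks):                 # consider tasks ordered by start time
        if machines and machines[0] <= s:      # some machine is free by time s
            heapq.heapreplace(machines, f)     # reuse it; update its finish time
        else:
            heapq.heappush(machines, f)        # otherwise allocate a new machine
    return len(machines)

# Three tasks, two of which overlap: two machines suffice.
print(task_schedule([(1, 4), (2, 5), (4, 7)]))  # -> 2
```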

Task Scheduling (analysis)

The task scheduling problem satisfies the greedy-choice property, hence:

Thm: Given an instance of a task scheduling problem with a set T of n tasks, the algorithm TaskSchedule produces a schedule of the tasks with the minimum number of machines in O(n log n) time.

Divide and Conquer

Divide: if the input size is small, then solve the problem directly; otherwise divide the input data into two or more disjoint subsets.
Recur: recursively solve the sub-problems associated with the subsets.
Conquer: take the solutions to the sub-problems and merge them into a solution to the original problem.

To analyse the running time of a divide-and-conquer algorithm we utilise a recurrence equation, where T(n) denotes the running time of the algorithm on an input of size n. We characterise T(n) using an equation that relates T(n) to values of the function T for problem sizes smaller than n, e.g., T(n) = 2T(n/2) + bn for n ≥ 2, with T(n) = b for n < 2.

Substitution Method

One way to solve a divide-and-conquer recurrence equation is the iterative substitution method, a.k.a. the plug-and-chug method. E.g., having T(n) = 2T(n/2) + bn, we substitute the equation into itself: we get T(n) = 4T(n/4) + 2bn, and after i-1 substitutions we have T(n) = 2^i T(n/2^i) + i·bn. For i = log n we get T(n) = bn + bn log n = O(n log n); the full chain is written out below.
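In full (a reconstruction of the slide's algebra, assuming the base case T(n) = b for n < 2):

```latex
\begin{align*}
T(n) &= 2T(n/2) + bn \\
     &= 2\bigl(2T(n/4) + bn/2\bigr) + bn = 4T(n/4) + 2bn \\
     &= 2^i\, T(n/2^i) + i\,bn   && \text{after } i-1 \text{ substitutions} \\
     &= n\,T(1) + bn\log n       && \text{for } i = \log n \\
     &= bn + bn\log n = O(n\log n).
\end{align*}
```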

Recursion Tree (visual approach)

In the recursion tree method, some overhead (forming a part of the recurrence equation) is associated with every node of the tree. E.g., having T(n) = 2T(n/2) + bn, the overhead corresponds to the summand +bn. The value of T(n) corresponds to the sum of all overheads: in this example, the depth of the tree times the overhead at each level, which is O(n log n).

Guess-and-Prove

In the guess-and-prove method the solution to a recurrence equation is guessed and then proved by mathematical induction. We guess that T(n) = O(n log n), so we have to prove that T(n) < Cn log n for some constant C and large enough n. We use the inductive assumption that T(n/2) < C(n/2) log(n/2) = C(n/2)(log n − 1) = (Cn log n)/2 − Cn/2. Then T(n) = 2T(n/2) + bn < 2((Cn log n)/2 − Cn/2) + bn = Cn log n + (−Cn + bn) < Cn log n, for any C > b.

The Master Method

The master method applies to recurrences of the form T(n) = aT(n/b) + f(n), for constants a ≥ 1 and b > 1, by comparing f(n) with n^{log_b a}: if f(n) = O(n^{log_b a − ε}) then T(n) = Θ(n^{log_b a}); if f(n) = Θ(n^{log_b a} log^k n) then T(n) = Θ(n^{log_b a} log^{k+1} n); and if f(n) = Ω(n^{log_b a + ε}) (with a regularity condition on f) then T(n) = Θ(f(n)).

Matrix Multiplication

Suppose we are given two n × n matrices X and Y, and we wish to compute their product Z = XY, which is defined so that Z[i,j] = Σ_k X[i,k]·Y[k,j]. This naturally leads to a simple O(n^3)-time algorithm.
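A direct rendering of this definition in Python (a sketch of the simple cubic algorithm; names are illustrative):

```python
def matrix_multiply(X, Y):
    """Naive O(n^3) product of two n x n matrices given as lists of rows."""
    n = len(X)
    Z = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Z[i][j] is the dot product of row i of X with column j of Y.
            for k in range(n):
                Z[i][j] += X[i][k] * Y[k][j]
    return Z

# 2 x 2 example: multiplying by the identity leaves Y unchanged.
print(matrix_multiply([[1, 0], [0, 1]], [[2, 3], [4, 5]]))  # -> [[2, 3], [4, 5]]
```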

Matrix Multiplication

Another way of viewing this product is in terms of sub-matrices: writing X = [A B; C D], Y = [E F; G H] and Z = [I J; K L], where each block is an (n/2) × (n/2) sub-matrix, we have I = AE + BG, J = AF + BH, K = CE + DG, and L = CF + DH. However, this gives a divide-and-conquer algorithm with running time T(n), s.t. T(n) = 8T(n/2) + bn^2 = O(n^3).

Strassen's Algorithm

Define seven matrix products:

S1 = A(F − H)
S2 = (A + B)H
S3 = (C + D)E
S4 = D(G − E)
S5 = (A + D)(E + H)
S6 = (B − D)(G + H)
S7 = (A − C)(E + F)

Having the S_i's we can represent I, J, K, L:

I = S4 + S5 + S6 − S2
J = S1 + S2
K = S3 + S4
L = S1 + S5 − S3 − S7

Thus, we can compute Z = XY using seven recursive multiplications of matrices of size (n/2) × (n/2), where T(n) = 7T(n/2) + bn^2. One can prove, e.g., using the Master Theorem, that:

Thm: We can multiply two n × n matrices in O(n^{log 7}) = O(n^{2.808}) time.
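As a sanity check, here is the master-method calculation behind both running-time claims (a worked application, not shown on the slides). In both recurrences b = 2 and f(n) = bn^2, so we compare f(n) with n^{log_2 a}:

```latex
\begin{align*}
T(n) &= 8T(n/2) + bn^2: & a=8,\ f(n) &= O(n^{\log_2 8 - \varepsilon})
  \ \Rightarrow\ T(n) = \Theta(n^{\log_2 8}) = \Theta(n^3);\\
T(n) &= 7T(n/2) + bn^2: & a=7,\ f(n) &= O(n^{\log_2 7 - \varepsilon})
  \ \Rightarrow\ T(n) = \Theta(n^{\log_2 7}) = O(n^{2.808}).
\end{align*}
```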

Dynamic Programming

The dynamic programming (DP) algorithm-design technique is similar to the divide-and-conquer technique. The main difference is in replacing (possibly) repetitive recursive calls by references to already-computed values stored in a special table. The DP technique is used primarily for optimisation problems. We very often apply DP where a brute-force search for the best solution is infeasible. However, DP is efficient only if the problem has a certain amount of structure that we can exploit:

Simple sub-problems: there must be a way of breaking the whole optimisation problem into smaller pieces sharing a similar structure.
Sub-problem optimality: an optimal solution to the global problem must be a composition of optimal sub-problem solutions.
Sub-problem overlap: optimal solutions to unrelated sub-problems can contain sub-problems in common.

0-1 Knapsack Problem

The 0-1 knapsack problem is the knapsack problem where taking fractions of items is not allowed, i.e., each item s_i ∈ S, for 1 ≤ i ≤ n, must be entirely accepted or rejected. Item s_i has a benefit b_i (s.t. b_1 ≤ b_2 ≤ … ≤ b_n) and an integer weight w_i. We have the following objective: maximise Σ_{s_i ∈ T} b_i subject to Σ_{s_i ∈ T} w_i ≤ W, where T ⊆ S.
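Equivalently, the objective can be stated as a 0-1 program (a standard reformulation, not from the slides):

```latex
% Choosing T \subseteq S is encoded by x_i = 1 iff s_i \in T.
\[
  \max \sum_{i=1}^{n} b_i x_i
  \quad\text{subject to}\quad
  \sum_{i=1}^{n} w_i x_i \le W,
  \qquad x_i \in \{0,1\} \text{ for } 1 \le i \le n.
\]
```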

Exponential solution: we can easily solve the 0-1 knapsack problem in O(2^n) time by testing all possible subsets of items. Unfortunately, exponential complexity is not acceptable for large n, and we rather have to find a nice characterisation of sub-problems in order to use the DP approach.

Let S_k = {s_i : i = 1, 2, …, k}, and let B[k,w] be the maximum total benefit of a subset of S_k from among all those subsets having total weight at most w. We have B[0,w] = 0 for each w ≤ W, and

B[k,w] = B[k−1,w], if w_k > w
B[k,w] = max{B[k−1,w], B[k−1, w−w_k] + b_k}, otherwise.

The running time of the 01Knapsack algorithm is dominated by the two nested for-loops, where the outer one iterates n times and the inner one iterates at most W times.

Thm: The 01Knapsack algorithm finds the highest-benefit subset of S with total weight at most W in O(nW) time.
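A minimal Python rendering of this recurrence (the slides' 01Knapsack pseudocode is not in the transcription, so names are illustrative):

```python
def knapsack_01(items, W):
    """items: list of (benefit, weight) pairs; W: integer capacity.
    Returns the maximum total benefit using each item at most once."""
    n = len(items)
    # B[k][w] = max benefit of a subset of the first k items
    # with total weight at most w.
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):                # outer loop: n iterations
        b_k, w_k = items[k - 1]
        for w in range(W + 1):               # inner loop: at most W+1 iterations
            if w_k > w:
                B[k][w] = B[k - 1][w]        # item k cannot fit
            else:
                # either skip item k, or take it and add its benefit
                B[k][w] = max(B[k - 1][w], B[k - 1][w - w_k] + b_k)
    return B[n][W]

# Capacity 5: taking the items of weights 2 and 3 gives benefit 7.
print(knapsack_01([(3, 2), (4, 3), (5, 4)], 5))  # -> 7
```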