Unit 4: Dynamic Programming




Course contents: Assembly-line scheduling; Matrix-chain multiplication; Longest common subsequence; Optimal binary search trees. Applications: Rod cutting, optimal polygon triangulation, flip-chip routing, technology mapping for logic synthesis. Reading: Chapter 15.

Dynamic Programming (DP) vs. Divide-and-Conquer
Both solve problems by combining the solutions to subproblems.
Divide-and-conquer algorithms partition a problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. They are inefficient if they solve the same subproblem more than once.
Dynamic programming (DP) is applicable when the subproblems are not independent. DP solves each subproblem just once.
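The contrast can be seen on a toy recurrence. Fibonacci is not in the slides; it is used here only as a minimal illustration of "solve each subproblem just once":

```python
from functools import lru_cache

def fib_naive(n):
    """Divide-and-conquer style: the same subproblems are recomputed many times."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    """DP (memoized): each subproblem is solved exactly once and stored."""
    return n if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)
```

`fib_naive` takes exponential time because its subproblems overlap; `fib_dp` takes linear time because the cache makes every overlapping subproblem a table lookup.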

Assembly-line Scheduling
[Figure: two assembly lines, with stations S_1,1 ... S_1,n on line 1 and S_2,1 ... S_2,n on line 2, entry times e_1, e_2, exit times x_1, x_2, and transfer times t_i,j between the lines.]
An auto chassis enters an assembly line, has parts added at stations, and a finished auto exits at the end of the line.
S_i,j: the j-th station on line i
a_i,j: the assembly time required at station S_i,j
t_i,j: the transfer time from station S_i,j to station j+1 of the other line
e_i (x_i): the time to enter (exit) line i

Optimal Substructure
Objective: Determine which stations to choose to minimize the total manufacturing time for one auto.
Brute force: Ω(2^n). Why? Each of the n stations can be taken on either line, giving 2^n candidate ways through the factory.
The problem is linearly ordered and cannot be rearranged => dynamic programming?
Optimal substructure: If the fastest way through station S_1,j goes through S_1,j-1, then the chassis must have taken a fastest way from the starting point through S_1,j-1.

Overlapping Subproblem: Recurrence
Overlapping subproblems: The fastest way through station S_1,j is either through S_1,j-1 and then S_1,j, or through S_2,j-1, then a transfer to line 1, and then S_1,j.
f_i[j]: fastest time from the starting point through S_i,j
f_1[j] = e_1 + a_1,1                                              if j = 1
f_1[j] = min(f_1[j-1] + a_1,j, f_2[j-1] + t_2,j-1 + a_1,j)        if j >= 2
(and symmetrically for f_2[j]). The fastest time all the way through the factory:
f* = min(f_1[n] + x_1, f_2[n] + x_2)

Computing the Fastest Time
Fastest-Way(a, t, e, x, n)
1. f_1[1] = e_1 + a_1,1
2. f_2[1] = e_2 + a_2,1
3. for j = 2 to n
4.   if f_1[j-1] + a_1,j <= f_2[j-1] + t_2,j-1 + a_1,j
5.     f_1[j] = f_1[j-1] + a_1,j
6.     l_1[j] = 1
7.   else f_1[j] = f_2[j-1] + t_2,j-1 + a_1,j
8.     l_1[j] = 2
9.   if f_2[j-1] + a_2,j <= f_1[j-1] + t_1,j-1 + a_2,j
10.    f_2[j] = f_2[j-1] + a_2,j
11.    l_2[j] = 2
12.  else f_2[j] = f_1[j-1] + t_1,j-1 + a_2,j
13.    l_2[j] = 1
14. if f_1[n] + x_1 <= f_2[n] + x_2
15.   f* = f_1[n] + x_1
16.   l* = 1
17. else f* = f_2[n] + x_2
18.   l* = 2
l_i[j]: the line number whose station j-1 is used in a fastest way through S_i,j.
Running time? Linear time!

Constructing the Fastest Way
Print-Station(l, n)
1. i = l*
2. print "line i, station n"
3. for j = n downto 2
4.   i = l_i[j]
5.   print "line i, station j-1"

An Example
Input: n = 6; e_1 = 2, e_2 = 4; x_1 = 3, x_2 = 2;
line 1 assembly times a_1,j = 7, 9, 3, 4, 8, 4; line 2 assembly times a_2,j = 8, 5, 6, 4, 5, 7;
transfer times t_1,j = 2, 3, 1, 3, 4 and t_2,j = 2, 1, 2, 2, 1.

j        1   2   3   4   5   6
f_1[j]   9  18  20  24  32  35
f_2[j]  12  16  22  25  30  37
l_1[j]       1   2   1   1   2
l_2[j]       1   2   1   2   2

f* = 38, l* = 1

Output of Print-Station: line 1, station 6; line 2, station 5; line 2, station 4; line 1, station 3; line 2, station 2; line 1, station 1.
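The Fastest-Way and Print-Station procedures above can be sketched in Python. This version uses 0-based lists internally and reports 1-based (line, station) pairs in forward order; the traversal order is a presentation choice, not part of the slides:

```python
def fastest_way(a, t, e, x):
    """Assembly-line scheduling DP.

    a[i][j]: assembly time at station j of line i (i = 0, 1)
    t[i][j]: transfer time from station j of line i to station j+1 of the other line
    e[i], x[i]: entry/exit times of line i
    Returns (f*, list of (line, station) pairs, 1-based, from station 1 to n).
    """
    n = len(a[0])
    f = [[0] * n for _ in range(2)]   # f[i][j]: fastest time through station j of line i
    l = [[0] * n for _ in range(2)]   # l[i][j]: line used at station j-1 on that fastest way
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in range(2):
            other = 1 - i
            stay = f[i][j - 1] + a[i][j]
            switch = f[other][j - 1] + t[other][j - 1] + a[i][j]
            if stay <= switch:
                f[i][j], l[i][j] = stay, i
            else:
                f[i][j], l[i][j] = switch, other
    best = 0 if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1] else 1
    fstar = f[best][n - 1] + x[best]
    # Trace the chosen line for each station back from the end.
    path, i = [], best
    for j in range(n - 1, -1, -1):
        path.append((i + 1, j + 1))
        i = l[i][j]
    return fstar, list(reversed(path))
```

On the example data this returns f* = 38 with the same station choices as Print-Station.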

Dynamic Programming (DP)
DP typically applies to optimization problems. Generic approach:
- Calculate the solutions to all subproblems.
- Proceed from the small subproblems to the larger subproblems.
- Compute a subproblem based on previously computed results for smaller subproblems.
- Store the solution to a subproblem in a table and never recompute it.
Development of a DP algorithm:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution bottom-up.
4. Construct an optimal solution from computed information (omitted if only the optimal value is required).

When to Use Dynamic Programming (DP)
DP computes a recurrence efficiently by storing partial results, so it is efficient only when the number of distinct partial results is small.
Hopeless configurations: the n! permutations of an n-element set, the 2^n subsets of an n-element set, etc.
Promising configurations: the sum_{i=1}^{n} i = n(n+1)/2 contiguous substrings of an n-character string, the n(n+1)/2 contiguous key ranges for subtrees of a binary search tree, etc.
DP works best on objects that are linearly ordered and cannot be rearranged: linear assembly lines, matrices in a chain, characters in a string, points around the boundary of a polygon, points on a line/circle, the left-to-right order of leaves in a search tree, etc.
Objects are ordered left-to-right? Smell DP?

DP Example: Matrix-Chain Multiplication
If A is a p x q matrix and B is a q x r matrix, then C = AB is a p x r matrix with
C[i, j] = sum_{k=1}^{q} A[i, k] B[k, j],
so computing C takes O(pqr) time.
Matrix-Multiply(A, B)
1. if A.columns != B.rows
2.   error "incompatible dimensions"
3. else let C be a new A.rows x B.columns matrix
4.   for i = 1 to A.rows
5.     for j = 1 to B.columns
6.       c_ij = 0
7.       for k = 1 to A.columns
8.         c_ij = c_ij + a_ik * b_kj
9. return C

DP Example: Matrix-Chain Multiplication
The matrix-chain multiplication problem
Input: Given a chain <A_1, A_2, ..., A_n> of n matrices, where matrix A_i has dimension p_{i-1} x p_i.
Objective: Parenthesize the product A_1 A_2 ... A_n to minimize the number of scalar multiplications.
Example: dimensions A_1: 4 x 2; A_2: 2 x 5; A_3: 5 x 1.
(A_1 A_2) A_3: total multiplications = 4 x 2 x 5 + 4 x 5 x 1 = 60.
A_1 (A_2 A_3): total multiplications = 2 x 5 x 1 + 4 x 2 x 1 = 18.
So the order of multiplications can make a big difference!
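The two costs in the example can be checked directly. Here p = [4, 2, 5, 1] encodes the chain dimensions A_1: 4x2, A_2: 2x5, A_3: 5x1 (the list encoding is a convention, not from the slides):

```python
p = [4, 2, 5, 1]  # A_i is p[i-1] x p[i]

# (A1 A2) A3: first a 4x2 by 2x5 product, then a 4x5 by 5x1 product.
cost_left = p[0] * p[1] * p[2] + p[0] * p[2] * p[3]

# A1 (A2 A3): first a 2x5 by 5x1 product, then a 4x2 by 2x1 product.
cost_right = p[1] * p[2] * p[3] + p[0] * p[1] * p[3]
```

`cost_left` evaluates to 60 and `cost_right` to 18, matching the slide.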

Matrix-Chain Multiplication: Brute Force
A = A_1 A_2 ... A_n: how to evaluate A using the minimum number of multiplications?
Brute force: check all possible orders? P(n), the number of ways to parenthesize a chain of n matrices, satisfies the Catalan recurrence P(n) = sum_{k=1}^{n-1} P(k) P(n-k), which grows exponentially in n. Any efficient solution?
The matrix chain is linearly ordered and cannot be rearranged. Smell dynamic programming?

Matrix-Chain Multiplication
m[i, j]: minimum number of multiplications to compute the matrix A_{i..j} = A_i A_{i+1} ... A_j, 1 <= i <= j <= n.
m[1, n]: the cheapest cost to compute A_{1..n}.
(Matrix A_i has dimension p_{i-1} x p_i.)
Applicability of dynamic programming:
- Optimal substructure: an optimal solution contains within it optimal solutions to subproblems.
- Overlapping subproblems: a recursive algorithm revisits the same problem over and over again; there are only Θ(n^2) distinct subproblems.

Bottom-Up DP Matrix-Chain Order
(A_i has dimension p_{i-1} x p_i.)
Matrix-Chain-Order(p)
1. n = p.length - 1
2. let m[1..n, 1..n] and s[1..n-1, 2..n] be new tables
3. for i = 1 to n
4.   m[i, i] = 0
5. for l = 2 to n  // l is the chain length
6.   for i = 1 to n - l + 1
7.     j = i + l - 1
8.     m[i, j] = ∞
9.     for k = i to j-1
10.      q = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
11.      if q < m[i, j]
12.        m[i, j] = q
13.        s[i, j] = k
14. return m and s

Constructing an Optimal Solution
s[i, j]: the value of k such that an optimal parenthesization of A_i A_{i+1} ... A_j splits between A_k and A_{k+1}.
Optimal A_{1..n} multiplication: A_{1..s[1, n]} A_{s[1, n]+1..n}.
Example: the call Print-Optimal-Parens(s, 1, 6) prints ((A_1 (A_2 A_3)) ((A_4 A_5) A_6)).
Print-Optimal-Parens(s, i, j)
1. if i == j
2.   print "A_i"
3. else print "("
4.   Print-Optimal-Parens(s, i, s[i, j])
5.   Print-Optimal-Parens(s, s[i, j] + 1, j)
6.   print ")"
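Matrix-Chain-Order and Print-Optimal-Parens translate to Python almost line for line; this sketch keeps the pseudocode's 1-based indexing by padding row/column 0:

```python
import math

def matrix_chain_order(p):
    """Bottom-up matrix-chain DP. A_i has dimension p[i-1] x p[i]."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # l is the chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k            # optimal split between A_k and A_{k+1}
    return m, s

def parens(s, i, j):
    """Return the optimal parenthesization of A_i .. A_j as a string."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + parens(s, i, k) + " " + parens(s, k + 1, j) + ")"
```

For the earlier three-matrix example, p = [4, 2, 5, 1] gives m[1][3] = 18 with parenthesization (A1 (A2 A3)).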

Top-Down, Recursive Matrix-Chain Order
Recursive-Matrix-Chain(p, i, j)
1. if i == j
2.   return 0
3. m[i, j] = ∞
4. for k = i to j-1
5.   q = Recursive-Matrix-Chain(p, i, k) + Recursive-Matrix-Chain(p, k+1, j) + p_{i-1} p_k p_j
6.   if q < m[i, j]
7.     m[i, j] = q
8. return m[i, j]
Time complexity: Ω(2^n), since T(n) >= 1 + sum_{k=1}^{n-1} (T(k) + T(n-k) + 1).

Top-Down DP Matrix-Chain Order (Memoization)
Complexity: O(n^2) space for the m[] table and O(n^3) time to fill in the O(n^2) entries (each takes O(n) time).
Memoized-Matrix-Chain(p)
1. n = p.length - 1
2. let m[1..n, 1..n] be a new table
3. for i = 1 to n
4.   for j = i to n
5.     m[i, j] = ∞
6. return Lookup-Chain(m, p, 1, n)
Lookup-Chain(m, p, i, j)
1. if m[i, j] < ∞
2.   return m[i, j]
3. if i == j
4.   m[i, j] = 0
5. else for k = i to j-1
6.   q = Lookup-Chain(m, p, i, k) + Lookup-Chain(m, p, k+1, j) + p_{i-1} p_k p_j
7.   if q < m[i, j]
8.     m[i, j] = q
9. return m[i, j]

Two Approaches to DP
1. Bottom-up iterative approach: start with a recursive divide-and-conquer algorithm, find the dependencies between the subproblems (whose solutions are needed for computing a subproblem), and solve the subproblems in the correct order.
2. Top-down recursive approach (memoization): start with a recursive divide-and-conquer algorithm and keep its top-down structure; save solutions to subproblems in a table (possibly a lot of storage); recurse on a subproblem only if its solution is not already available in the table.
If all subproblems must be solved at least once, bottom-up DP is better due to less overhead for recursion and for maintaining tables. If many subproblems need not be solved, top-down DP is better, since it computes only those that are required.

Longest Common Subsequence
Problem: Given X = <x_1, x_2, ..., x_m> and Y = <y_1, y_2, ..., y_n>, find a longest common subsequence (LCS) of X and Y.
Example: X = <a, b, c, b, d, a, b> and Y = <b, d, c, a, b, a>; LCS = <b, c, b, a> (also, LCS = <b, d, a, b>).
Example: DNA sequencing: S1 = ACCGGTCGAGATGCAG; S2 = GTCGTTCGGAATGCAT; an LCS is S3 = CGTCGGATGCA.
Brute-force method: enumerate all subsequences of X and check whether each appears in Y. Each subsequence of X corresponds to a subset of the indices {1, 2, ..., m} of the elements of X, so there are 2^m subsequences of X.

Optimal Substructure for LCS
Let X = <x_1, x_2, ..., x_m> and Y = <y_1, y_2, ..., y_n> be sequences, and let Z = <z_1, z_2, ..., z_k> be an LCS of X and Y.
1. If x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
2. If x_m != y_n, then z_k != x_m implies Z is an LCS of X_{m-1} and Y.
3. If x_m != y_n, then z_k != y_n implies Z is an LCS of X and Y_{n-1}.
c[i, j]: length of an LCS of the prefixes X_i and Y_j; c[m, n]: length of an LCS of X and Y.
Basis: c[0, j] = 0 and c[i, 0] = 0.

Top-Down DP for LCS
c[i, j]: length of an LCS of X_i = <x_1, x_2, ..., x_i> and Y_j = <y_1, y_2, ..., y_j>.
Top-down dynamic programming: initialize c[i, 0] = c[0, j] = 0 and c[i, j] = NIL otherwise.
TD-LCS(i, j)
1. if c[i, j] == NIL
2.   if x_i == y_j
3.     c[i, j] = TD-LCS(i-1, j-1) + 1
4.   else c[i, j] = max(TD-LCS(i, j-1), TD-LCS(i-1, j))
5. return c[i, j]

Bottom-Up DP for LCS
Find the right order in which to solve the subproblems: to compute c[i, j], we need c[i-1, j-1], c[i-1, j], and c[i, j-1].
b[i, j]: points to the table entry corresponding to the optimal subproblem solution chosen when computing c[i, j].
LCS-Length(X, Y)
1. m = X.length
2. n = Y.length
3. let b[1..m, 1..n] and c[0..m, 0..n] be new tables
4. for i = 1 to m
5.   c[i, 0] = 0
6. for j = 0 to n
7.   c[0, j] = 0
8. for i = 1 to m
9.   for j = 1 to n
10.    if x_i == y_j
11.      c[i, j] = c[i-1, j-1] + 1
12.      b[i, j] = "↖"
13.    elseif c[i-1, j] >= c[i, j-1]
14.      c[i, j] = c[i-1, j]
15.      b[i, j] = "↑"
16.    else c[i, j] = c[i, j-1]
17.      b[i, j] = "←"
18. return c and b

Example of LCS
LCS time and space complexity: O(mn).
X = <A, B, C, B, D, A, B> and Y = <B, D, C, A, B, A>; an LCS is <B, C, B, A>.
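LCS-Length can be sketched in Python. As a design choice, this version drops the explicit b table and re-derives the arrows during the trace-back from the c values, which saves the b storage at no asymptotic cost:

```python
def lcs(X, Y):
    """Bottom-up LCS; returns one longest common subsequence as a string."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1       # the "↖" case
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Trace back from c[m][n]; the arrows are implicit in the comparisons.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1    # follow "↖"
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1                                  # follow "↑"
        else:
            j -= 1                                  # follow "←"
    return "".join(reversed(out))
```

On the slide's example it recovers the LCS <B, C, B, A>.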

Constructing an LCS
Trace back from b[m, n] to b[1, 1], following the arrows: O(m+n) time.
Print-LCS(b, X, i, j)
1. if i == 0 or j == 0
2.   return
3. if b[i, j] == "↖"
4.   Print-LCS(b, X, i-1, j-1)
5.   print x_i
6. elseif b[i, j] == "↑"
7.   Print-LCS(b, X, i-1, j)
8. else Print-LCS(b, X, i, j-1)

Optimal Binary Search Tree
Given a sequence K = <k_1, k_2, ..., k_n> of n distinct keys in sorted order (k_1 < k_2 < ... < k_n), a set of probabilities P = <p_1, p_2, ..., p_n> for searching the keys in K, and Q = <q_0, q_1, q_2, ..., q_n> for unsuccessful searches (corresponding to D = <d_0, d_1, d_2, ..., d_n> of n+1 distinct dummy keys, with d_i representing all values between k_i and k_{i+1}), construct a binary search tree whose expected search cost is smallest.
[Figure: two binary search trees over keys k_1 .. k_5, with dummy keys d_0 .. d_5 as leaves.]

An Example
i     0     1     2     3     4     5
p_i        0.15  0.10  0.05  0.10  0.20
q_i  0.05  0.10  0.05  0.05  0.05  0.10
Note that sum_{i=1}^{n} p_i + sum_{i=0}^{n} q_i = 1.
E[search cost in T] = sum_{i=1}^{n} (depth_T(k_i) + 1) p_i + sum_{i=0}^{n} (depth_T(d_i) + 1) q_i
                    = 1 + sum_{i=1}^{n} depth_T(k_i) p_i + sum_{i=0}^{n} depth_T(d_i) q_i
[Figure: tree (a) with root k_2, children k_1 and k_4, and k_4's children k_3 and k_5: cost = 2.80. Tree (b) with root k_2, children k_1 and k_5, k_5's left child k_4, and k_4's left child k_3: cost = 2.75. Optimal!]

Optimal Substructure
If an optimal binary search tree T has a subtree T' containing keys k_i, ..., k_j, then this subtree T' must be optimal as well for the subproblem with keys k_i, ..., k_j and dummy keys d_{i-1}, ..., d_j.
Given keys k_i, ..., k_j with k_r (i <= r <= j) as the root, the left subtree contains the keys k_i, ..., k_{r-1} (and dummy keys d_{i-1}, ..., d_{r-1}), and the right subtree contains the keys k_{r+1}, ..., k_j (and dummy keys d_r, ..., d_j).
For the subtree with keys k_i, ..., k_j and root k_i, the left subtree contains keys k_i, ..., k_{i-1} (no key) with the dummy key d_{i-1}.
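The two expected costs can be verified from the cost formula above. The node depths below are read off the two trees in the figure as described here (an assumption about the figure's exact shapes):

```python
# Key probabilities p[1..5] and dummy probabilities q[0..5] from the example.
p = [None, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]

def expected_cost(key_depth, dummy_depth):
    """E[search cost] = sum over nodes of (depth + 1) * probability."""
    return (sum((key_depth[i] + 1) * p[i] for i in range(1, 6)) +
            sum((dummy_depth[i] + 1) * q[i] for i in range(6)))

# Tree (a): root k2, children k1 and k4; k4's children are k3 and k5.
cost_a = expected_cost({1: 1, 2: 0, 3: 2, 4: 1, 5: 2},
                       {0: 2, 1: 2, 2: 3, 3: 3, 4: 3, 5: 3})

# Tree (b): root k2; right chain k5 -> k4 -> k3 (the optimal tree).
cost_b = expected_cost({1: 1, 2: 0, 3: 3, 4: 2, 5: 1},
                       {0: 2, 1: 2, 2: 4, 3: 4, 4: 3, 5: 2})
```

`cost_a` works out to 2.80 and `cost_b` to 2.75, matching the slide.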

Overlapping Subproblem: Recurrence
e[i, j]: expected cost of searching an optimal binary search tree containing the keys k_i, ..., k_j. We want e[1, n].
e[i, i-1] = q_{i-1} (only the dummy key d_{i-1}).
If k_r (i <= r <= j) is the root of an optimal subtree containing keys k_i, ..., k_j, and we let
w(i, j) = sum_{l=i}^{j} p_l + sum_{l=i-1}^{j} q_l,
then
e[i, j] = p_r + (e[i, r-1] + w(i, r-1)) + (e[r+1, j] + w(r+1, j)) = e[i, r-1] + e[r+1, j] + w(i, j).
Recurrence:
e[i, j] = q_{i-1}                                              if j = i-1
e[i, j] = min_{i <= r <= j} { e[i, r-1] + e[r+1, j] + w(i, j) }  if i <= j
Node depths increase by 1 after merging two subtrees, and so do the costs.

Computing the Optimal Cost
We need a table e[1..n+1, 0..n] for e[i, j] (why e[1, 0] and e[n+1, n]? They are the empty subtrees at the two ends). Apply the following recurrence to compute w(i, j) incrementally (to avoid re-summing the probabilities):
w[i, j] = q_{i-1}                     if j = i-1
w[i, j] = w[i, j-1] + p_j + q_j       if i <= j
root[i, j]: the index r for which k_r is the root of an optimal search tree containing keys k_i, ..., k_j.
Optimal-BST(p, q, n)
1. let e[1..n+1, 0..n], w[1..n+1, 0..n], and root[1..n, 1..n] be new tables
2. for i = 1 to n+1
3.   e[i, i-1] = q_{i-1}
4.   w[i, i-1] = q_{i-1}
5. for l = 1 to n
6.   for i = 1 to n - l + 1
7.     j = i + l - 1
8.     e[i, j] = ∞
9.     w[i, j] = w[i, j-1] + p_j + q_j
10.    for r = i to j
11.      t = e[i, r-1] + e[r+1, j] + w[i, j]
12.      if t < e[i, j]
13.        e[i, j] = t
14.        root[i, j] = r
15. return e and root
The e table for the example, listed by diagonals j - i = -1, 0, 1, ...:
e[i, i-1]: 0.05  0.10  0.05  0.05  0.05  0.10
e[i, i]:   0.45  0.40  0.25  0.30  0.50
e[i, i+1]: 0.90  0.70  0.60  0.90
e[i, i+2]: 1.25  1.20  1.30
e[i, i+3]: 1.75  2.00
e[1, 5]:   2.75
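The Optimal-BST procedure above can be sketched in Python, keeping the pseudocode's 1-based indexing:

```python
import math

def optimal_bst(p, q, n):
    """Optimal binary search tree DP.

    p[1..n]: key search probabilities (p[0] unused)
    q[0..n]: dummy-key (unsuccessful search) probabilities
    Returns (e, root): e[i][j] is the optimal expected cost for keys k_i..k_j,
    root[i][j] the index r of the optimal root.
    """
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]            # empty subtree: only dummy key d_{i-1}
        w[i][i - 1] = q[i - 1]
    for l in range(1, n + 1):             # l = number of keys in the subtree
        for i in range(1, n - l + 2):
            j = i + l - 1
            e[i][j] = math.inf
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):     # try each key k_r as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root
```

For the running example this yields e[1][5] = 2.75 with k_2 as the overall root.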

Example
e[1, 1] = e[1, 0] + e[2, 1] + w(1, 1) = 0.05 + 0.10 + 0.30 = 0.45
e[1, 5] = e[1, 1] + e[3, 5] + w(1, 5) = 0.45 + 1.30 + 1.00 = 2.75 (with root r = 2; the other choices of r give larger costs)
The w table for the example (same diagonal layout as the e table):
w[i, i-1]: 0.05  0.10  0.05  0.05  0.05  0.10
w[i, i]:   0.30  0.25  0.15  0.20  0.35
w[i, i+1]: 0.45  0.35  0.30  0.50
w[i, i+2]: 0.55  0.50  0.60
w[i, i+3]: 0.70  0.80
w[1, 5]:   1.00
The root table:
root[i, i]:   1  2  3  4  5
root[i, i+1]: 1  2  4  5
root[i, i+2]: 2  2  5
root[i, i+3]: 2  4
root[1, 5]:   2
[Figure: the optimal tree with root k_2, left child k_1 (dummies d_0, d_1), and right child k_5, whose left child is k_4, whose left child is k_3.]

Appendix A: Rod Cutting
Cut steel rods into pieces to maximize the revenue.
Assumptions: each cut is free; rod lengths are integral numbers.
Input: a length n and a table of prices p_i for i = 1, 2, ..., n.
Output: the maximum revenue obtainable for rods whose lengths sum to n, computed as the sum of the prices for the individual rods.
length i   1  2  3  4  5   6   7   8   9   10
price p_i  1  5  8  9  10  17  17  20  24  30
Example: a rod of length i = 4 can be cut in 8 ways (4; 1+3; 2+2; 3+1; 1+1+2; 1+2+1; 2+1+1; 1+1+1+1); the maximum revenue is r_4 = 5 + 5 = 10.
Objects are linearly ordered (and cannot be rearranged)?

Optimal Rod Cutting
length i        1  2  3  4   5   6   7   8   9   10
price p_i       1  5  8  9   10  17  17  20  24  30
max revenue r_i 1  5  8  10  13  17  18  22  25  30
If p_n is large enough, an optimal solution might require no cuts, i.e., just leave the rod n units long.
Solution for the maximum revenue r_n of length n:
r_n = max(p_n, r_1 + r_{n-1}, r_2 + r_{n-2}, ..., r_{n-1} + r_1)
r_1 = 1 from solution 1 = 1 (no cuts)
r_2 = 5 from solution 2 = 2 (no cuts)
r_3 = 8 from solution 3 = 3 (no cuts)
r_4 = 10 from solution 4 = 2 + 2
r_5 = 13 from solution 5 = 2 + 3
r_6 = 17 from solution 6 = 6 (no cuts)
r_7 = 18 from solution 7 = 1 + 6 or 7 = 2 + 2 + 3
r_8 = 22 from solution 8 = 2 + 6

Optimal Substructure
Optimal substructure: to solve the original problem, solve subproblems on smaller sizes. The optimal solution to the original problem incorporates optimal solutions to the subproblems, and we may solve the subproblems independently. After making a cut, we have two subproblems.
Example: max revenue r_7 = 18 = r_4 + r_3 = (r_2 + r_2) + r_3, or r_1 + r_6.
Decomposition with only one subproblem: some cut gives a first piece of length i on the left and a remaining piece of length n - i on the right:
r_n = max_{1 <= i <= n} (p_i + r_{n-i})

Recursive Top-Down Solution
r_n = max_{1 <= i <= n} (p_i + r_{n-i})
Cut-Rod(p, n)
1. if n == 0
2.   return 0
3. q = -∞
4. for i = 1 to n
5.   q = max(q, p[i] + Cut-Rod(p, n-i))
6. return q
Inefficient solution: Cut-Rod calls itself repeatedly, even on subproblems it has already solved! For example, Cut-Rod(p, 4) calls Cut-Rod(p, 3), Cut-Rod(p, 2), Cut-Rod(p, 1), and Cut-Rod(p, 0), each of which recurses in turn.
T(n) = 1 if n = 0; T(n) = 1 + sum_{j=0}^{n-1} T(j) if n >= 1. Hence T(n) = 2^n.
Overlapping subproblems?

Top-Down DP Cut-Rod with Memoization
Complexity: O(n^2) time. Each subproblem is solved just once, for sizes 0, 1, ..., n; to solve a subproblem of size n, the for loop iterates n times.
Memoized-Cut-Rod(p, n)
1. let r[0..n] be a new array
2. for i = 0 to n
3.   r[i] = -∞
4. return Memoized-Cut-Rod-Aux(p, n, r)
Memoized-Cut-Rod-Aux(p, n, r)
1. if r[n] >= 0
2.   return r[n]
3. if n == 0
4.   q = 0
5. else q = -∞
6.   for i = 1 to n  // each overlapping subproblem is solved just once
7.     q = max(q, p[i] + Memoized-Cut-Rod-Aux(p, n-i, r))
8. r[n] = q
9. return q

Bottom-Up DP Cut-Rod
Complexity: O(n^2) time. Sort the subproblems by size and solve the smaller ones first; when solving a subproblem, we have already solved the smaller subproblems it needs.
Bottom-Up-Cut-Rod(p, n)
1. let r[0..n] be a new array
2. r[0] = 0
3. for j = 1 to n
4.   q = -∞
5.   for i = 1 to j
6.     q = max(q, p[i] + r[j-i])
7.   r[j] = q
8. return r[n]

Bottom-Up DP with Solution Construction
Extend the bottom-up approach to record not just optimal values but optimal choices: save the first cut made in an optimal solution for a problem of size j in s[j].
Extended-Bottom-Up-Cut-Rod(p, n)
1. let r[0..n] and s[0..n] be new arrays
2. r[0] = 0
3. for j = 1 to n
4.   q = -∞
5.   for i = 1 to j
6.     if q < p[i] + r[j-i]
7.       q = p[i] + r[j-i]
8.       s[j] = i
9.   r[j] = q
10. return r and s
Print-Cut-Rod-Solution(p, n)
1. (r, s) = Extended-Bottom-Up-Cut-Rod(p, n)
2. while n > 0
3.   print s[n]
4.   n = n - s[n]
i    0  1  2  3  4   5   6   7   8   9   10
r[i] 0  1  5  8  10  13  17  18  22  25  30
s[i] 0  1  2  3  2   2   6   1   2   3   10
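Extended-Bottom-Up-Cut-Rod and the solution printer can be sketched in Python; this version returns the cut list instead of printing it, which is a small departure from the pseudocode:

```python
def extended_bottom_up_cut_rod(p, n):
    """Bottom-up rod cutting. p[i] = price of a rod of length i (p[0] unused).
    Returns (r, s): r[j] = max revenue for length j, s[j] = first cut."""
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            if q < p[i] + r[j - i]:
                q = p[i] + r[j - i]
                s[j] = i
        r[j] = q
    return r, s

def cut_rod_solution(p, n):
    """Return (max revenue, list of piece lengths) for a rod of length n."""
    r, s = extended_bottom_up_cut_rod(p, n)
    best, cuts = r[n], []
    while n > 0:
        cuts.append(s[n])
        n -= s[n]
    return best, cuts
```

With the slide's price table, length 7 yields revenue 18 via the cuts 1 + 6, and length 10 yields 30 with no cuts.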

Appendix B: Optimal Polygon Triangulation
Terminology: polygon, interior, exterior, boundary, convex polygon, triangulation.
The optimal polygon triangulation problem: given a convex polygon P = <v_0, v_1, ..., v_{n-1}> and a weight function w defined on triangles, find a triangulation that minimizes the sum of w(Δ) over the triangles Δ of the triangulation.
One possible weight function on a triangle: w(Δ v_i v_j v_k) = |v_i v_j| + |v_j v_k| + |v_k v_i|, where |v_i v_j| is the Euclidean distance from v_i to v_j.

Optimal Polygon Triangulation (cont'd)
Correspondence between a full parenthesization, a full binary tree (parse tree), and a triangulation: a full binary tree with n-1 leaves corresponds to a triangulation of a polygon with n sides.
t[i, j]: weight of an optimal triangulation of the polygon <v_{i-1}, v_i, ..., v_j>.

Pseudocode: Optimal Polygon Triangulation
Matrix-Chain-Order is a special case of the optimal polygon triangulation problem: only Line 9 of Matrix-Chain-Order needs to be modified.
Complexity: runs in Θ(n^3) time and uses Θ(n^2) space.
Optimal-Polygon-Triangulation(P)
1. n = P.length
2. for i = 1 to n
3.   t[i, i] = 0
4. for l = 2 to n
5.   for i = 1 to n - l + 1
6.     j = i + l - 1
7.     t[i, j] = ∞
8.     for k = i to j-1
9.       q = t[i, k] + t[k+1, j] + w(Δ v_{i-1} v_k v_j)
10.      if q < t[i, j]
11.        t[i, j] = q
12.        s[i, j] = k
13. return t and s

Appendix C: LCS for Flip-Chip Routing
(Lee, Lin, and Chang, ICCAD-09.)
Given: a set of driver pads on driver pad rings, a set of bump pads on bump pad rings, and a set of nets/connections.
Objective: connect driver pads and bump pads according to a predefined netlist such that the total wirelength is minimized.
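The triangulation DP above can be sketched in Python using the perimeter weight function mentioned earlier; the quadrilateral in the usage note is a made-up example, not from the slides:

```python
import math

def perimeter(a, b, c):
    """Weight of triangle Δabc: the sum of its three side lengths."""
    return math.dist(a, b) + math.dist(b, c) + math.dist(c, a)

def optimal_triangulation(v):
    """Minimum total triangle weight over triangulations of the convex
    polygon v[0..n-1]. t[i][j]: optimal weight of sub-polygon <v[i-1..j]>."""
    n = len(v) - 1                      # sides v[i-1]..v[i], i = 1..n
    t = [[0.0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            t[i][j] = min(t[i][k] + t[k + 1][j] + perimeter(v[i - 1], v[k], v[j])
                          for k in range(i, j))
    return t[1][n]
```

For the quadrilateral (0,0), (2,0), (3,2), (0,1), the DP picks the shorter diagonal (from (2,0) to (0,1)), since the total weight equals the polygon's perimeter plus twice the chosen diagonals.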

Minimize # of Detoured Nets by LCS
Cut the rings into lines/segments (to form linear orders).
[Figure: driver pads d_1 .. d_4 and bump pads b_1 .. b_4 connected by nets n_1 .. n_4; cutting the rings gives the driver-pad net sequence S_d = [1, 2, 1, 3, 4, 3] and the bump-pad net sequence S_b = [3, 2, 1, 4]; only net n_3 needs a detour.]

#Detours Minimization by Dynamic Programming
Longest common subsequence (LCS) computation:
[Figure: seq. 1 = <1, 2, 3> and seq. 2 = <3, 1, 2>; <3> is a common subsequence, and the LCS is <1, 2>.]
Maximum planar subset of chords (MPSC) computation:
[Figure: for the chord set {3, 4, 5}, the subset {3} is planar; the MPSC is {4, 5}.]
Y.-W. Chang

Supowit's Algorithm for Finding MPSC Supowit, Finding a maximum planar subset of a set of nets in a channel, IEEE TCAD, 1987. Problem: Given a set of chords, find a maximum planar subset of chords. Label the vertices on the circle 0 to n-1. Compute MIS(i, j): size of maximum independent set between vertices i and j, i < j. Answer = MIS(0, n-1). Vertices on the circle Unit 7 Y.-W. Chang 45 Dynamic Programming in Supowit's Algorithm Apply dynamic programming to compute MIS(i, j ). Unit 7 Y.-W. Chang 46 3

Ring-by-Ring Routing
- Decompose the chip into rings of pads.
- Initialize the I/O-pad sequences.
- Route from the inner rings to the outer rings: exchange the net order on the current ring to be consistent with the I/O pads.
- Keep applying the LCS and MPSC algorithms between two adjacent rings to minimize #detours.
- Over 100X speedups over ILP.

Global Routing Results
The global routing result of circuit fc64: a routing path for each net is guided by a set of segments.

Detailed Routing: Phase 1
Segment-by-segment routing in counter-clockwise order, as compacted as possible.

Detailed Routing: Phase 2
Net-by-net re-routing in clockwise order, for wirelength and number-of-bends minimization.

Appendix D: Standard-Cell Based VLSI Design Style

Pattern Graphs for an Example Library

Technology Mapping
Technology mapping: the optimization problem of finding a minimum-cost covering of the subject graph by choosing from the collection of pattern graphs for all gates in the library.
A cover is a collection of pattern graphs such that every node of the subject graph is contained in one (or more) of the pattern graphs. The cover is further constrained so that each input required by a pattern graph is actually an output of some other pattern graph.

Trivial Covering
The example subject graph mapped into 2-input NANDs and 1-input inverters: 8 2-input NAND gates and 7 inverters, for an area cost of 23. Best covering?

Optimal Tree Covering by Dynamic Programming
If the subject directed acyclic graph (DAG) is a tree, then a polynomial-time algorithm to find the minimum cover exists, based on dynamic programming (optimal substructure? overlapping subproblems?).
Given: subject trees (the networks to be mapped) and library cells.
Consider a node n of the subject tree. Recursive assumption: for all children of n, a best match which implements that node is known. The cost of a leaf is 0. Consider each pattern tree which matches at n, and compute its cost as the cost of implementing each node which the pattern requires as an input, plus the cost of the pattern itself. Choose the lowest-cost matching pattern to implement n.

Best Covering
A best covering, with an area of 15, obtained by the dynamic programming approach.