Dynamic Programming (26 January 2000)


Dynamic Programming

Applies when the following Principle of Optimality holds: in an optimal sequence of decisions or choices, each subsequence must be optimal. Translation: there's a recursive solution.

Steps in designing a dynamic programming algorithm:
1. Characterize the solution structure.
2. Recursively define the solution.
3. Solve sub-problems in bottom-up fashion.
4. Construct the final solution from the sub-problem solutions.

Subproblem results are usually stored in an array.

Examples of Dynamic Programming

Fibonacci Numbers:
    F(n) = F(n - 1) + F(n - 2)
    F(0) = F(1) = 1

Binomial Coefficients (writing C(n, k) for "n choose k"):
    C(n, k) = C(n - 1, k - 1) + C(n - 1, k)
    C(n, 0) = C(n, n) = 1
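The slides give no code for these two examples; a minimal bottom-up C sketch of both recurrences (the MAXN bound and the sample inputs n = 10, k = 4 are assumptions for illustration, not from the slides) might look like this:

    #include <stdio.h>

    #define MAXN 50   /* assumed upper bound for this sketch */

    long long fib[MAXN + 1];
    long long C[MAXN + 1][MAXN + 1];

    int main (void)
    {
        int n = 10, k = 4;   /* example inputs, chosen arbitrarily */
        int i, j;

        /* Fibonacci: fill the table upward from the base cases F(0) = F(1) = 1. */
        fib[0] = fib[1] = 1;
        for (i = 2; i <= n; i ++)
            fib[i] = fib[i - 1] + fib[i - 2];

        /* Binomial coefficients: C(i, 0) = C(i, i) = 1, then Pascal's rule. */
        for (i = 0; i <= n; i ++) {
            C[i][0] = C[i][i] = 1;
            for (j = 1; j < i; j ++)
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j];
        }

        printf ("F(%d) = %lld, C(%d,%d) = %lld\n", n, fib[n], n, k, C[n][k]);
        return 0;
    }

In both cases each table entry is computed once from previously filled entries, which is exactly the bottom-up step described on the first slide.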

Dynamic Programming

The Subset-Sum problem is the following:

Input: An array A[1 ... n] of positive integers, and a target integer T > 0.
Output: True iff there is a subset of the A values that sums to T. If so, we would also like to know what the subset is.

Can solve as follows: Let S[i, j] be defined as true iff there is a subset of the elements A[1 ... i] that sums to j. Then S[n, T] is the solution to our problem. In general:

    S[i, j] = S[i - 1, j - A[i]] OR S[i - 1, j]

where the first term applies only when A[i] <= j, as in the code on the next slide. Initial conditions are S[i, 0] = True, and S[0, j] = False for j > 0.

Subset-Sum Problem

Translating to C/C++ code, it looks like:

    for (i = 0; i <= n; i ++)
        S [i] [0] = TRUE;
    for (j = 1; j <= T; j ++)
        S [0] [j] = FALSE;
    for (i = 1; i <= n; i ++)
        for (j = 1; j <= T; j ++)
            S [i] [j] = S [i - 1] [j] || (A [i] <= j && S [i - 1] [j - A [i]]);

Time complexity is Θ(nT).

Can find the actual values by stepping back through the array as follows:

    j = T;
    for (i = n; i >= 1; i --)
        if (! S [i - 1] [j]) {
            printf ("Use item %d = %d\n", i, A [i]);
            j -= A [i];
        }

All-Pairs Shortest Paths

Let D[i, j, k] be the shortest distance from vertex i to vertex j without passing through a vertex numbered higher than k.

Floyd-Warshall Algorithm: Set D[i, j, 0] to the adjacency matrix of G. Then:

    for (k = 1 to n)
        for (i = 1 to n)
            for (j = 1 to n)
                D[i, j, k] = min { D[i, j, k - 1], D[i, k, k - 1] + D[k, j, k - 1] }

Time complexity is Θ(n^3). The third subscript of D can be dropped, so a single two-dimensional array is updated in place; this is safe because row k and column k of the table are not changed during iteration k.
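A minimal C sketch of this two-dimensional, in-place version (the MAXV bound, the INF "no edge" sentinel, and the dist array name are assumptions, not from the slides):

    #define MAXV 100            /* assumed maximum number of vertices */
    #define INF  1000000000     /* assumed sentinel meaning "no edge" */

    int dist[MAXV + 1][MAXV + 1];   /* dist[i][j] starts out as the adjacency matrix */

    void floyd_warshall (int n)
    {
        int i, j, k;

        /* Third subscript dropped: the same 2-D array is updated in place. */
        for (k = 1; k <= n; k ++)
            for (i = 1; i <= n; i ++)
                for (j = 1; j <= n; j ++)
                    if (dist[i][k] < INF && dist[k][j] < INF &&
                        dist[i][k] + dist[k][j] < dist[i][j])
                        dist[i][j] = dist[i][k] + dist[k][j];
    }

After the call, dist[i][j] holds the length of a shortest path from i to j, assuming the graph has no negative cycles.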

Knapsack Problem

The Knapsack problem is the following:

Input: An array S[1 ... n] of positive integers, an array V[1 ... n] of positive integers, and an integer K > 0.
Output: The maximum sum of a subset of the V[i]'s, subject to the constraint that the sum of the corresponding S[i]'s is not larger than K. Would also like to know what elements are in the subset.

Can solve in a similar manner to the Subset-Sum problem: Let A[i, j] be defined as the maximum sum of a subset of the elements V[1 ... i] whose corresponding S[i]'s sum to exactly j. Then max { A[n, j] : 1 <= j <= K } is the solution to our problem. In general:

    A[i, j] =

Knapsack Problem

Initial conditions? Code? Time complexity?

What happens if we allow each object to be used more than once? I.e., we wish to maximize

    c_1 V[1] + c_2 V[2] + ... + c_n V[n]

where each c_i is a non-negative integer, subject to the constraint that

    c_1 S[1] + c_2 S[2] + ... + c_n S[n] <= K
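The two preceding slides leave the recurrence and these questions open. What follows is a sketch of one standard way to fill them in, consistent with the "sums to exactly j" definition, and not necessarily the formulation the lecture intended; NEG_INF is an assumed sentinel meaning "no subset has size exactly j". Take A[0, 0] = 0, A[0, j] = NEG_INF for j > 0, and A[i, j] = max { A[i - 1, j], V[i] + A[i - 1, j - S[i]] }, where the second option applies only when S[i] <= j and A[i - 1, j - S[i]] is not NEG_INF. In C:

    #define NEG_INF (-1000000000)   /* assumed sentinel: no subset of size exactly j */

    A [0] [0] = 0;
    for (j = 1; j <= K; j ++)
        A [0] [j] = NEG_INF;
    for (i = 1; i <= n; i ++)
        for (j = 0; j <= K; j ++) {
            A [i] [j] = A [i - 1] [j];                       /* skip item i */
            if (S [i] <= j && A [i - 1] [j - S [i]] != NEG_INF
                && A [i - 1] [j - S [i]] + V [i] > A [i] [j])
                A [i] [j] = A [i - 1] [j - S [i]] + V [i];   /* take item i */
        }
    /* Answer: the maximum of A[n][j] over 1 <= j <= K, as on the slide. */

This fills Θ(nK) table entries at constant cost each, so it runs in Θ(nK) time, just like Subset-Sum. For the variant where each object may be used more than once, one possible change is to reuse row i instead of row i - 1 when item i is taken, so the item can be chosen repeatedly (same initial conditions as above):

    for (i = 1; i <= n; i ++)
        for (j = 0; j <= K; j ++) {
            A [i] [j] = A [i - 1] [j];                       /* use no (more) copies of item i */
            if (S [i] <= j && A [i] [j - S [i]] != NEG_INF
                && A [i] [j - S [i]] + V [i] > A [i] [j])
                A [i] [j] = A [i] [j - S [i]] + V [i];       /* use another copy of item i */
        }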

Matrix-Chain Multiplication

Input: Dimensions d_0, ..., d_n of matrices M_1, ..., M_n to be multiplied, where M_i is d_{i-1} x d_i.
Output: The best way to parenthesize the product so as to minimize the cost of the multiplications, where the cost of multiplying a p x q matrix by a q x r matrix is pqr.

Matrix-Chain Multiplication

m[i, j] := the minimum cost of multiplying matrices M_i ... M_j
s[i, j] := the last multiplication used to achieve m[i, j]

In other words, s[i, j] = k iff the min-cost parenthesization of M_i ... M_j has the form (M_i ... M_k) (M_{k+1} ... M_j).

m[i, i] = 0
m[i, j] = min over i <= k < j of { m[i, k] + m[k + 1, j] + d_{i-1} d_k d_j }
s[i, j] = the k that achieves the above min

Matrix-Chain Multiplication

    for (i = 1; i <= n; i ++)
        m [i] [i] = 0;
    for (len = 1; len < n; len ++)       /* len = j - i; named to avoid clashing with the dimension array d[] */
        for (i = 1; i <= n - len; i ++) {
            j = i + len;
            m [i] [j] = m [i] [i] + m [i + 1] [j] + d [i - 1] * d [i] * d [j];   /* k = i */
            s [i] [j] = i;
            for (k = i + 1; k < j; k ++) {
                Cost = m [i] [k] + m [k + 1] [j] + d [i - 1] * d [k] * d [j];
                if (Cost < m [i] [j]) {
                    m [i] [j] = Cost;
                    s [i] [j] = k;
                }
            }
        }
    printf ("Min Cost = %d\n", m [1] [n]);

Complexity is Θ(n^3).

Matrix-Chain Multiplication

    int Print_Expression (s, i, j)
    // Print the optimal parenthesization for
    // M [i] ... M [j] based on the information
    // in s.
    {
        int k = s [i] [j];
        if (i == j) {
            printf ("M%d", i);
            return 0;
        }
        if (k != i) printf ("(");
        Print_Expression (s, i, k);
        if (k != i) printf (")");
        printf (" * ");
        if (k + 1 != j) printf ("(");
        Print_Expression (s, k + 1, j);
        if (k + 1 != j) printf (")");
        return 0;
    }

Minimum Edit Distance

Input: Two strings s[1 ... m] and t[1 ... n].
Output: The minimum number of operations to transform s into t. An operation is: insert a character, delete a character, or substitute a character.

c[i, j] := the minimum cost to transform s[1 ... i] into t[1 ... j].

c[0, 0] = 0
c[i, j] = the minimum of:
    c[i - 1, j - 1]        if s[i] = t[j]
    1 + c[i - 1, j - 1]    otherwise (substitute)
    1 + c[i - 1, j]        (delete)
    1 + c[i, j - 1]        (insert)

Minimum Edit Distance

Example: Here is the c array for the strings s = ababbab and t = babbbaaba:

          t[j]:       b   a   b   b   b   a   a   b   a
      s[i]   j:   0   1   2   3   4   5   6   7   8   9
          i = 0   0   1   2   3   4   5   6   7   8   9
      a   i = 1   1   1   1   2   3   4   5   6   7   8
      b   i = 2   2   1   2   1   2   3   4   5   6   7
      a   i = 3   3   2   1   2   2   3   3   4   5   6
      b   i = 4   4   3   2   1   2   2   3   4   4   5
      b   i = 5   5   4   3   2   1   2   3   4   4   5
      a   i = 6   6   5   4   3   2   2   2   3   4   4
      b   i = 7   7   6   5   4   3   2   3   3   3   4

The final entry c[7, 9] = 4 is the minimum edit distance from s to t.

Minimum Edit Distance

The code looks like this:

    for (i = 0; i <= m; i ++)
        c [i] [0] = i;
    for (j = 1; j <= n; j ++)
        c [0] [j] = j;
    for (i = 1; i <= m; i ++)
        for (j = 1; j <= n; j ++) {
            Best = c [i - 1] [j - 1];
            if (s [i] != t [j])
                Best ++;
            if (1 + c [i - 1] [j] < Best)
                Best = 1 + c [i - 1] [j];
            if (1 + c [i] [j - 1] < Best)
                Best = 1 + c [i] [j - 1];
            c [i] [j] = Best;
        }

Can get the minimum sequence of edit operations by tracing back through the c array.
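The slides do not show the traceback itself; a sketch of one way to do it, reporting the operations from the ends of the strings back toward the start (it reuses the c, s, t, m, n names from the code above), might look like this:

    /* Sketch only: walk from c[m][n] back to c[0][0], reporting one operation per step. */
    i = m; j = n;
    while (i > 0 || j > 0) {
        if (i > 0 && j > 0 && c [i] [j] == c [i - 1] [j - 1] && s [i] == t [j]) {
            i --; j --;                                   /* characters match: no operation */
        } else if (i > 0 && j > 0 && c [i] [j] == 1 + c [i - 1] [j - 1]) {
            printf ("Substitute %c -> %c at s[%d]\n", s [i], t [j], i);
            i --; j --;
        } else if (i > 0 && c [i] [j] == 1 + c [i - 1] [j]) {
            printf ("Delete %c at s[%d]\n", s [i], i);
            i --;
        } else {
            printf ("Insert %c before s[%d]\n", t [j], i + 1);
            j --;
        }
    }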

Travelling Salesperson Problem

Input: A complete directed graph G = (V, E) with non-negative edge weights.
Output: A directed cycle with the smallest possible sum of edge weights that passes through all vertices of G.

For u ∈ V and S ⊆ V with u, v_1 ∉ S, define D(u, S) to be the minimum weight of a path that starts at vertex u, passes through each vertex in S exactly once, and ends at vertex v_1. The solution to the entire problem is then D(v_1, V \ {v_1}).

Let w_{u,v} denote the weight of edge u -> v. Then in general we have

    D(u, S) = min over v ∈ S of { w_{u,v} + D(v, S \ {v}) }
    D(u, ∅) = w_{u, v_1}
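The slides stop at the recurrence; a minimal bitmask sketch of it in C might look like the following. The names and conventions here are assumptions for illustration: at most MAXV vertices, weights in w[][], vertices numbered 1 .. n with v_1 = vertex 1, and subsets of {2, ..., n} encoded as bits 0 .. n - 2 of an int.

    #define MAXV 16                      /* assumed bound so subsets fit in an int bitmask */
    #define INF  1000000000              /* assumed "infinity" */

    int n;                               /* number of vertices, 2 <= n <= MAXV */
    int w[MAXV + 1][MAXV + 1];           /* w[u][v] = weight of edge u -> v */
    int D[MAXV + 1][1 << (MAXV - 1)];    /* D[u][S]: best path u -> (all of S) -> vertex 1 */

    /* Bit b of S (0 <= b <= n - 2) represents vertex b + 2. */
    int tsp (void)
    {
        int u, v, S, best, full = (1 << (n - 1)) - 1;

        for (u = 1; u <= n; u ++)
            D[u][0] = w[u][1];                       /* D(u, empty set) = w(u, v1) */

        for (S = 1; S <= full; S ++)                 /* increasing order: S \ {v} < S is already done */
            for (u = 1; u <= n; u ++) {
                if (u >= 2 && (S & (1 << (u - 2))))  /* D(u, S) is defined only for u not in S */
                    continue;
                best = INF;
                for (v = 2; v <= n; v ++)
                    if (S & (1 << (v - 2)))
                        if (w[u][v] + D[v][S ^ (1 << (v - 2))] < best)
                            best = w[u][v] + D[v][S ^ (1 << (v - 2))];
                D[u][S] = best;
            }

        return D[1][full];                           /* D(v1, V \ {v1}) */
    }

Each D(u, S) is computed once, so this runs in Θ(n^2 2^n) time, compared with the Θ(n!) cost of trying every cycle.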

Other Dynamic Programming Problems

Longest Common Subsequence: Given two sequences S = s_1, s_2, ..., s_m and T = t_1, t_2, ..., t_n, find the longest sequence that is a subsequence of both. (A sketch of a recurrence for this one appears below.)

Longest Ascending Subsequence: Given a sequence S = s_1, s_2, ..., s_n of numbers, find the longest subsequence of it in which every number is at least as great as its predecessor.

Bitonic Travelling Salesperson: Given a set of points in the plane, find the minimum-distance cycle that hits every point and has the property that, if we start at the leftmost point, the cycle goes strictly to the right until it hits the rightmost point, and then it goes strictly to the left back to the leftmost point.
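As a sketch of how the first of these fits the same pattern (the array name L and the 1-based s, t, m, n conventions are assumptions matching the edit-distance slides), the length of a longest common subsequence satisfies L[i, 0] = L[0, j] = 0 and L[i, j] = 1 + L[i - 1, j - 1] if s[i] = t[j], and otherwise max { L[i - 1, j], L[i, j - 1] }. In C:

    for (i = 0; i <= m; i ++)
        L [i] [0] = 0;
    for (j = 0; j <= n; j ++)
        L [0] [j] = 0;
    for (i = 1; i <= m; i ++)
        for (j = 1; j <= n; j ++) {
            if (s [i] == t [j])
                L [i] [j] = 1 + L [i - 1] [j - 1];
            else
                L [i] [j] = (L [i - 1] [j] > L [i] [j - 1]) ? L [i - 1] [j] : L [i] [j - 1];
        }
    /* L[m][n] is the length of the longest common subsequence; the subsequence
       itself can be recovered by tracing back through L, as with edit distance. */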