Solutions to Homework 6




Debasish Das, EECS Department, Northwestern University, ddas@northwestern.edu

1 Problem 5.24

We want to find light spanning trees with certain special properties.

Input: Undirected graph G = (V,E); edge weights w_e; subset of vertices U ⊆ V
Output: The lightest spanning tree in which the nodes of U are leaves (there might be other leaves in this tree as well)

Consider a minimum spanning tree T = (V,Ê) of G and denote the set of leaves of T by L(T). Three situations are possible:

1. U ⊆ L(T)
2. U = U_1 ∪ U_2, where U_1 ⊆ L(T) and U_2 ⊆ V − L(T)
3. U ⊆ V − L(T)

Note that when U satisfies case 1, the algorithm we are seeking is the same as Kruskal's algorithm. Define the graph G̃ = (V − U, {(u,v) : u,v ∈ V − U and (u,v) ∈ E}).

Lemma 1. If the graph G̃ has no spanning tree, then a lightest spanning tree with the vertices of U as leaves is not feasible.

Proof: Suppose that no spanning tree of G̃ exists. Without any loss of generality, assume that the spanning forest of G̃ consists of two trees. The lightest spanning tree of G must be a single tree, so some node u ∈ U must connect the two trees of the spanning forest of G̃. But then u is no longer a leaf node, which is a contradiction.

Property 1. The minimum spanning tree T̃ of G̃ yields the lightest spanning tree with all vertices of U as leaves if we greedily select and add the remaining edges.

Proof: The minimum spanning tree T̃ is well defined on the graph G̃. Consider each node u ∈ U. We greedily select the lightest edge (u,v) with v ∈ T̃ and add it to T̃. Note that for any edge (u,v) we can choose, v ∈ T̃ must hold: if v ∉ T̃ then v ∈ U, and we cannot choose such an edge, because to form a spanning tree either u or v would have to be connected to T̃, and connecting one of them (say u) to T̃ would mean it is no longer a leaf. The cost of the new tree is cost(T̃) + w_(u,v).
Since cost(T̃) is optimal and we cannot select any edge other than the lightest (u,v) (because of the constraint that the vertices of U be leaves), the cost of the new tree is the lightest possible (it cannot be decreased any further).

Based on Lemma 1 and Property 1 we present the following algorithm.

procedure lightest-spanning-tree(G, w, U)
Input: Graph G = (V,E); edge weights w_e; U ⊆ V
Output: Lightest spanning tree with the vertices of U as leaves, if it exists
Construct graph G̃ = (Ṽ, Ẽ):

    Ṽ = V − U
    Ẽ = {(u,v) : u,v ∈ V − U and (u,v) ∈ E}
Apply Kruskal to get MST(G̃) = T̃
if T̃ does not exist:
    return "lightest spanning tree infeasible"
Construct edge set Ē = {(u,v) ∈ E : u ∈ U, v ∉ U}
for each u ∈ U: makeset(u)
sort the edges of Ē by w_e
for each edge (u,v) ∈ Ē, in increasing order of weight:
    if find(u) ≠ find(v):
        add edge (u,v) to T̃
        union(u,v)
return T̃

Complexity analysis: Apart from the construction of the edge set Ē, the analysis is analogous to Kruskal's algorithm, which runs in O(|E| log |V|) time. Constructing Ē takes O(|E| log |U|) time, since for each edge e = (u,v) we keep the edge in Ē if find(u) ≠ find(v). Hence the total complexity of the algorithm is still bounded by O(|E| log |V|).

2 Problem 5.32

A server has n customers waiting to be served. For each customer i the service time required is t_i. The cost function for this problem is

    T = Σ_{i=1}^{n} (time spent waiting by customer i)    (1)

Let the time spent waiting by customer i be w_i, so the cost is T = Σ_{i=1}^{n} w_i. The waiting times can be defined recursively as w_i = w_{i-1} + t_{i-1}. Unrolling the recurrence,

    w_i = w_1 + Σ_{j=1}^{i-1} t_j    (2)

Without any loss of generality the waiting time of customer 1 can be taken to be 0, so w_i = Σ_{j=1}^{i-1} t_j. The total cost can therefore be summarized as

    T = Σ_{i=2}^{n} Σ_{j=1}^{i-1} t_j    (3)

Equation 3 can be rewritten as

    T = Σ_{i=1}^{n} f_i t_i    (4)

where f_i denotes the number of times (the frequency with which) t_i appears in the cost function T. Looking at Equation 3, t_i has a frequency of n − i; in particular, the frequency of t_1 is n − 1.

Lemma 2. To minimize the cost we choose t_1 to be the minimum service time.

Proof: Assume that t_1 is the minimum service time and consider some other t_i with t_i > t_1. Our claim is that scheduling customer i first in place of customer 1 can only increase the cost function. Note that only two terms of the cost function change: previously these terms contributed T_old = (n−1)t_1 + (n−i)t_i, and with the proposed change they contribute T_new = (n−1)t_i + (n−i)t_1. Since T_new − T_old = (i−1)(t_i − t_1) > 0, we have T_new > T_old, so t_1 must be the minimum service time.
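Equation 4 and Lemma 2 can be sanity-checked with a short Python sketch (not part of the original solution; the service times below are a made-up instance). Brute force over all orderings confirms that the sorted order achieves the minimum cost.

```python
from itertools import permutations

def total_waiting_cost(times):
    # Equation 4: the i-th customer served (0-indexed) contributes
    # (n - 1 - i) copies of its service time to the total cost.
    n = len(times)
    return sum((n - 1 - i) * t for i, t in enumerate(times))

service = [4, 1, 3, 2]  # hypothetical service times
best = min(permutations(service), key=total_waiting_cost)
assert total_waiting_cost(sorted(service)) == total_waiting_cost(best)
print(total_waiting_cost(sorted(service)))  # cost of the greedy (sorted) ordering
```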
Now, as we can see, if at each step i we choose t_i to be the i-th minimum service time, our problem reduces to a subproblem of smaller size. Based on our discussion, the greedy algorithm is as follows.

procedure optimal-ordering(N, T)
Input: Number of customers N; set of service times T = {t_i : i ∈ (1,...,N)}
Output: Optimal ordering for processing the customers
Sort the set T in increasing order of t_i
return T

3 Problem 6.3

Yuckdonald's is considering opening a series of restaurants along QVH. The n possible locations are along a straight line, and the distances of these locations from the start of QVH are, in miles and in increasing order, m_1, m_2,..., m_n. The constraints are as follows:

1. At each location, Yuckdonald's may open at most one restaurant, and the expected profit from opening a restaurant at location i is p_i.
2. Any two restaurants should be at least k miles apart, where k is a positive integer.

We use a dynamic programming formulation to solve this problem. We define P[i] as follows.

Definition 1. P[i] is the maximum expected profit attainable from the first i locations.

Based on the constraints, we come up with the following recursive definition of P[i]:

    P[i] = max{ p_i, max_{j<i} { P[j] + α(m_i, m_j) p_i } }

The function α is defined as

    α(m_i, m_j) = 0 if m_i − m_j < k
                  1 if m_i − m_j ≥ k

The maximum expected profit at location i comes from the maximum over the expected profits of the locations j < i, together with whether we can open a restaurant at location i itself; the profit from opening a restaurant at location i is p_i. Note that P[j] is the maximum expected profit over the first j locations; there may or may not be a restaurant at location j itself, and the cases in which there is none are covered by indices j' < j. It is also possible that p_i alone is larger than P[j] + α(m_i, m_j) p_i for every j < i, so we compare against p_i as well.
Based on the recursion, the algorithm writes itself.

procedure expected-profit(N, P)
Input: N locations; P[1,...,N], where P[i] denotes the profit at location i
Output: Maximum expected profit P_max
Declare an array Profit[1..N]: Profit[i] denotes the maximum expected profit over the first i locations
for i = 1 to N:
    Profit[i] = P[i]
for i = 2 to N:
    for j = 1 to i−1:
        temp = Profit[j] + α(m_i, m_j) P[i]
        if temp > Profit[i]:
            Profit[i] = temp
return Profit[N]

Complexity analysis: The two nested for loops give this algorithm O(n^2) complexity.
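A compact Python version of the recurrence above, run on a hypothetical instance (the mile markers, profits, and spacing k below are invented for illustration):

```python
def max_expected_profit(m, p, k):
    """DP for Problem 6.3 (0-indexed): m[i] = mile markers in increasing
    order, p[i] = expected profits, k = minimum spacing in miles."""
    n = len(m)
    profit = p[:]  # base case: open only the restaurant at location i
    for i in range(1, n):
        for j in range(i):
            # alpha = 1 when locations i and j are compatible (>= k apart)
            alpha = 1 if m[i] - m[j] >= k else 0
            profit[i] = max(profit[i], profit[j] + alpha * p[i])
    return max(profit)

# Hypothetical instance: miles 1, 4, 5, 9; profits 5, 6, 4, 3; k = 3.
# Best choice opens at miles 1, 4, 9 for profit 5 + 6 + 3 = 14.
print(max_expected_profit([1, 4, 5, 9], [5, 6, 4, 3], 3))
```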

4 Problem 6.7

A subsequence is palindromic if it is the same whether read left to right or right to left. We have to devise an algorithm that takes a sequence x[1,...,n] and returns the length of the longest palindromic subsequence. We give the following definition of a palindrome.

Definition 2. A sequence x is called a palindrome if the sequence x' obtained by reversing x satisfies x' = x.

This definition allows us to formulate an optimal substructure for the problem. If we look at the substring x[i,...,j] of the string x, then we can find a palindrome of length at least 2 if x[i] = x[j]. If they are not the same, then we seek the maximum-length palindrome in the subsequences x[i+1,...,j] and x[i,...,j−1]. Also, every character x[i] is by itself a palindrome of length 1, so the base cases are L(i,i) = 1. Let us define the maximum length of a palindromic subsequence of the substring x[i,...,j] as L(i,j):

    L(i,j) = L(i+1, j−1) + 2              if x[i] = x[j]
             max{ L(i+1,j), L(i,j−1) }    otherwise

    L(i,i) = 1 for all i ∈ (1,...,n)

Based on the recursive definition we present the following algorithm.

procedure palindromic-subsequence(x[1,...,n])
Input: Sequence x[1,...,n]
Output: Length of the longest palindromic subsequence
Declare a 2-dimensional array L: L[i,j] stores the length of the longest palindromic subsequence of x[i,...,j]
for i = 1 to n:
    L[i,i] = 1 (each character is a palindrome of length 1)
for s = 1 to n−1:
    for i = 1 to n−s:
        j = i + s
        L[i,j] = compute-cost(L, x, i, j)
return L[1,n]

The procedure compute-cost is an implementation of the recursive formulation shown above.
Note that compute-cost includes boundary details not spelled out in the recursive formulation.

procedure compute-cost(L, x, i, j)
Input: Array L from procedure palindromic-subsequence; sequence x[1,...,n]; indices i and j
Output: Value of L[i,j]
if i = j:
    return L[i,j]
else if x[i] = x[j]:
    if i+1 ≤ j−1:
        return L[i+1, j−1] + 2
    else:
        return 2
else:
    return max(L[i+1,j], L[i,j−1])

Complexity analysis: The initialization loop takes O(n) time, and the nested loops perform O(1) work for each of the O(n^2) pairs (i,j). Therefore the total running time of the algorithm is O(n^2).
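The same DP written out in Python (0-indexed, assuming a nonempty sequence; a sketch with the boundary checks folded into the loop rather than a separate compute-cost procedure):

```python
def longest_palindromic_subsequence(x):
    # L[i][j] = length of the longest palindromic subsequence of x[i..j]
    n = len(x)
    L = [[0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1  # every single character is a palindrome of length 1
    for s in range(1, n):        # s = j - i, the span of the substring
        for i in range(n - s):
            j = i + s
            if x[i] == x[j]:
                L[i][j] = L[i + 1][j - 1] + 2 if j - i >= 2 else 2
            else:
                L[i][j] = max(L[i + 1][j], L[i][j - 1])
    return L[0][n - 1]

print(longest_palindromic_subsequence("agbdba"))  # "abdba" has length 5
```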

5 Problem 6.12

We are given a convex polygon P on n vertices in the plane (specified by their x and y coordinates). A triangulation of P is a collection of n−3 diagonals of P such that no two diagonals intersect. The cost of a triangulation is the sum of the lengths of the diagonals in it. In this problem we derive an algorithm for finding a triangulation of minimum cost.

We label the vertices of P by 1,...,n, starting from an arbitrary vertex and walking clockwise. For 1 ≤ i < j ≤ n, we denote by A(i,j) the subproblem of computing the minimum-cost triangulation of the polygon spanned by vertices i, i+1,...,j. Formulating the subproblem this way, we enumerate all the possible diagonals in such a way that the diagonals do not cross each other: since we only walk in the clockwise direction, diagonals generated by different subproblems cannot cross. The recursive definition of A[i,j] is as follows:

    A[i,j] = min_{i<k<j} { A[i,k] + A[k,j] + d_{i,k} + d_{k,j} }    (5)

The base cases are given by A[i,i] = 0 and A[i,i+1] = 0, since no diagonal can be formed from a single vertex or a single side. The following definition of d takes care of the fact that a segment counts as a diagonal only if the indices of its endpoints differ by at least 2:

    d_{i,j} = sqrt((x_i − x_j)^2 + (y_i − y_j)^2)   if |j − i| ≥ 2
              0                                     otherwise

Based on the recursive definition and base cases, we present the algorithm as follows.

procedure min-cost-triangulation(p_1,...,p_n)
Input: n vertices p_1,...,p_n of a convex polygon P, ordered in the clockwise direction
Output: Minimum cost triangulation of P
Declare a 2-dimensional array A: A[i,j] stores the minimum-cost triangulation of the polygon spanned by vertices i, i+1,...,j
for i = 1 to n:
    A[i,i] = 0
    A[i,i+1] = 0 (for i < n)
for s = 2 to n−1:
    for i = 1 to n−s:
        j = i + s
        A[i,j] = min_{i<k<j} { A[i,k] + A[k,j] + d_{i,k} + d_{k,j} }
return A[1,n]

Complexity Analysis: The analysis is analogous to the chain matrix multiplication problem presented on page 171 of the textbook. Computing each A[i,j] takes O(j−i) time, which is O(n).
There are two outer loops, contributing O(n^2) subproblems, so the algorithm runs in O(n^3) time.

6 Problem 6.17

Given an unlimited supply of coins of denominations x_1, x_2,...,x_n, we wish to make change for a value v. We are seeking an O(nv) dynamic-programming algorithm for the following problem.

Input: x_1,...,x_n; v
Question: Is it possible to make change for v using coins of denominations x_1,...,x_n?

We define D(v) as a predicate that evaluates to true if it is possible to make change for v using the available denominations x_1, x_2,...,x_n. If it is possible to make change for v using the given denominations, then it is possible to make change for v − x_i using the same denominations, with one coin of denomination x_i being chosen. But since we do not know which i will finally give us a true value (indicating that change for v is possible), we take a logical OR over all i. The recursive definition of D(v), with base case D(0) = true, is written as follows:

    D(v) = ∨_{1≤i≤n} { D(v − x_i) if x_i ≤ v; false otherwise }    (6)

Based on Equation 6 the algorithm is straightforward.

procedure make-change(x_1,...,x_n, V)
Input: Denominations of available coins x_1,...,x_n; value V for which we are seeking change
Output: true if making change for V with the available denominations is feasible, false otherwise
Declare an array D of size V+1
D[0] = true
for i = 1 to V:
    D[i] = false
for v = 1 to V:
    for j = 1 to n:
        if x_j ≤ v:
            D[v] = D[v] ∨ D[v − x_j]
return D[V]

Complexity Analysis: There are two nested for loops, where the outer loop runs V times and the inner loop runs n times. Therefore the complexity of the algorithm is O(nV).
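The procedure above translates directly into Python; the denominations 4 and 7 below are a made-up instance for illustration:

```python
def can_make_change(denominations, value):
    # D[v] is true iff value v can be formed from an unlimited supply
    # of the given coin denominations; runs in O(n * value) time.
    D = [False] * (value + 1)
    D[0] = True  # zero is formed by taking no coins
    for v in range(1, value + 1):
        for x in denominations:
            if x <= v and D[v - x]:
                D[v] = True
                break
    return D[value]

# With coins of 4 and 7: 10 is not reachable, while 11 = 4 + 7 is.
print(can_make_change([4, 7], 10), can_make_change([4, 7], 11))
```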