Document prepared by L. Liberti, S. Bosio, S. Coniglio, and C. Iuliano. Translation to English by S. Coniglio.

3.1 Algorithm complexity

Two algorithms A and B are given. The former has complexity $O(n^2)$, the latter $O(2^n)$, where $n$ is the size of the instance. Let $n_0^A$ be the size of the largest instance that can be solved in 1 hour with algorithm A on a given computer, and $n_0^B$ be the corresponding size for algorithm B. Give the corresponding sizes $n^A$ and $n^B$ for a computer 100 times faster.

3.2 Size of the instances

Discuss the size of the instance for the minimum spanning tree problem.

3.3 NP-complete and NP-hard problems

Given a directed graph $G = (N, A)$ with rational costs on the arcs and a pair of nodes $s, t$, show that the problem of identifying a simple path (i.e., one in which each node occurs at most once) of minimum length from $s$ to $t$ is NP-hard. Show that the following recognition problem is NP-complete:

Max-SimplePath-r: Given a directed graph $G = (N, A)$ with rational costs on the arcs, a pair of nodes $s, t$, and an integer $K$, does $G$ contain a simple path from $s$ to $t$ of length at least $K$?

[Hint: propose a polynomial reduction of the problem Hamiltonian-Circuit-r (given a directed graph, does it contain a Hamiltonian circuit?) to Max-SimplePath-r.]

3.4 Integer Linear Programming is NP-hard

Show that Integer Linear Programming is NP-hard, by showing that the associated recognition problem ILP-r is NP-complete.

ILP: Given a matrix $A \in \mathbb{Z}^{m \times n}$ and two vectors $b \in \mathbb{Z}^m$ and $c \in \mathbb{Z}^n$, find a vector $x \in \{0,1\}^n$ satisfying $Ax \ge b$ which minimizes $c^T x$.

ILP-r: Given a matrix $A \in \mathbb{Z}^{m \times n}$ and a vector $b \in \mathbb{Z}^m$, is there a vector $x \in \{0,1\}^n$ such that $Ax \ge b$?

[Hint: propose a polynomial reduction of SAT (the satisfiability problem for boolean clauses) to ILP-r.]

3.5 Complexity and size of the formulation

Give an integer linear programming formulation for the problem of finding a spanning tree of minimum cost in a graph $G = (V, E)$. Is the number of constraints polynomial or exponential in $n = |V|$? Is there a relationship between the size of a formulation (number of constraints and variables) and the difficulty of the associated problem?

Solutions

3.1 Algorithm complexity

Let $n^A$ and $n^B$ be the sizes of the largest instances which can be solved in one hour with algorithms A and B on a computer 100 times faster than the original one. Since, by definition of $n_0^A$, $(n_0^A)^2$ elementary operations are performed when running algorithm A on the original computer, $100\,(n_0^A)^2$ operations can be executed on the faster machine. Therefore $(n^A)^2 = 100\,(n_0^A)^2$ and hence $n^A = 10\,n_0^A$. Similarly, for algorithm B, $2^{n_0^B}$ operations are performed on the original computer and $100 \cdot 2^{n_0^B}$ on the faster one, hence $n^B = n_0^B + \log_2 100 < n_0^B + 7$. To summarize, with algorithm A, which has quadratic complexity, we can solve instances 10 times larger, while with algorithm B, which has exponential complexity, we can only solve instances larger by a small additive constant (fewer than 7 units).

3.2 Size of the instances

An instance of the minimum spanning tree problem amounts to a graph $G = (V, E)$, with a weight $c_{ij}$ for each edge $\{i,j\} \in E$. Recall that, given an integer $i \in \mathbb{Z}$, about $\log_2 |i| + 1$ bits are needed to encode it in memory; the extra bit accounts for the sign, and for nonnegative integers $\log_2 i$ bits suffice. For each edge $\{i,j\} \in E$ we need to store the indices of its two endpoints, using $\log_2 n$ bits each (hence $2 \log_2 n$ per edge), and the weight $c_{ij}$, using $\log_2 c_{\max} + 1$ bits, where $c_{\max} := \max_{\{i,j\} \in E} c_{ij}$. In total we need $\log_2 n + \log_2 m + m(2 \log_2 n + \log_2 c_{\max} + 1)$ bits, i.e., $O(m(\log n + \log c_{\max}))$. Note that we may always assume $m \ge n - 1$, since otherwise the graph is not connected and admits no spanning tree.

Usually, the number of bits needed to encode a numerical value is taken as a constant. For instance, on a 64-bit machine we can assume that we will never deal with instances with more than $2^{64}$ arcs or nodes. Similarly, we can assume $c_{\max} \le 2^{64}$, so that $c_{\max}$ can be stored in a single memory word. Under this assumption, the size of an instance is, asymptotically, $O(m)$. Note that for dense graphs ($m \approx n^2$) the instance size is quadratic in $n$, i.e., $O(n^2)$, while for sparse graphs ($m \ll n^2$) it is less than quadratic yet at least linear in $n$, since $n - 1 \le m \ll n^2$.
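To make the bit count concrete, here is a minimal Python sketch (purely illustrative; the function name and the edge-list representation are assumptions made here, not part of the original exercise) that computes the encoding size under the scheme above.

    from math import ceil, log2

    def instance_size_bits(n, edges):
        """Bits needed to encode an MST instance with n nodes and a weighted
        edge list: the node and edge counts, then for every edge two node
        indices and one signed weight."""
        m = len(edges)
        c_max = max(abs(c) for _, _, c in edges)
        node_bits = ceil(log2(n))            # bits per node index
        weight_bits = ceil(log2(c_max)) + 1  # +1 for the sign bit
        return ceil(log2(n)) + ceil(log2(m)) + m * (2 * node_bits + weight_bits)

    # A triangle with weights 3, 5, 7: the size grows as O(m (log n + log c_max)).
    print(instance_size_bits(3, [(0, 1, 3), (1, 2, 5), (0, 2, 7)]))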

3.3 NP-complete and NP-hard problems

To prove that Max-SimplePath-r is NP-complete, we need to (i) show that the problem is in NP and (ii) show that every problem in NP can be reduced to it in polynomial time, which we do by reducing a problem already known to be NP-complete.

(i) Max-SimplePath-r is clearly in NP, since (a) it is a recognition problem and (b) given any candidate solution (a sequence of nodes), we can verify in polynomial time whether it is a path, whether it is simple, and whether its total cost is at least $K$.

(ii) We show that the problem Hamiltonian-Circuit-r, which is known to be NP-complete, can be reduced in polynomial time to Max-SimplePath-r.

Hamiltonian-Circuit-r: Given a directed graph $G = (N, A)$, does it contain a Hamiltonian circuit, i.e., a circuit in which each node of $G$ occurs exactly once?

We show that, given any instance of Hamiltonian-Circuit-r, we can create, in polynomial time, an instance of Max-SimplePath-r such that the answer to Hamiltonian-Circuit-r is yes if and only if the answer to Max-SimplePath-r is yes. Consider, as an instance of Hamiltonian-Circuit-r, the directed graph $G = (N, A)$, and assign unit costs to the arcs. Let $s = t$ be any node of $N$. Then $G$ contains a Hamiltonian circuit if and only if it contains a simple path from $s$ to $t$ of length at least $|N|$: indeed, any Hamiltonian circuit is a simple path from a node back to itself of cost $|N|$. For this pair of problems the reduction is trivial, as any instance of Hamiltonian-Circuit-r can be used directly as an instance of Max-SimplePath-r.

Since Max-SimplePath-r is NP-complete, the optimization problem Max-SimplePath, i.e., the problem of finding, given a graph and a pair of nodes $s, t$, a simple path of maximum length between $s$ and $t$, is NP-hard: it is at least as hard as Max-SimplePath-r (the former can be used to solve the latter) and it is not in NP (it is not a recognition problem).

As a consequence, the problem Min-SimplePath, i.e., the problem of finding, given a directed graph with rational arc weights (unrestricted in sign) and two nodes $s, t$, a simple path between $s$ and $t$ of minimum length, is also NP-hard. Indeed, we can reduce Max-SimplePath-r in polynomial time to Min-SimplePath-r, the recognition version of Min-SimplePath.

Min-SimplePath-r: Given a directed graph $G = (N, A)$ with rational arc weights, a pair of nodes $s, t$, and an integer $k$, does $G$ contain a simple $s$-$t$ path of total length at most $k$?

It suffices, given an instance of Max-SimplePath-r composed of $G = (N, A)$, weights $c_{ij}$, nodes $s, t$ and threshold $K$, to build the instance with the same graph and nodes, weights $c'_{ij} = -c_{ij}$ and threshold $k = -K$, and to solve Min-SimplePath-r on it.
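The last reduction is a pure instance transformation: negate every weight and the threshold. A minimal Python sketch follows (the function name and the dictionary representation of the arc costs are assumptions made here for illustration).

    def max_to_min_simplepath_instance(arcs, s, t, K):
        """Turn an instance of Max-SimplePath-r (arc costs as {(i, j): c_ij},
        nodes s, t, threshold K) into an equivalent Min-SimplePath-r instance:
        a simple s-t path of length >= K w.r.t. c exists iff a simple s-t path
        of length <= -K w.r.t. -c exists."""
        neg_arcs = {(i, j): -c for (i, j), c in arcs.items()}
        return neg_arcs, s, t, -K

    # Tiny hypothetical instance: two arcs of cost 2 and 5, threshold 7.
    print(max_to_min_simplepath_instance({(1, 2): 2, (2, 3): 5}, 1, 3, 7))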

3.4 ILP is NP-hard

Consider the recognition problem ILP-r.

ILP-r: Given a matrix $A \in \mathbb{Z}^{m \times n}$ and a vector $b \in \mathbb{Z}^m$, is there a vector $x \in \{0,1\}^n$ such that $Ax \ge b$?

Note that the case in which we are also given an objective function with cost vector $c$ and an integer $k$, and are asked whether there is a vector $x$ with $c^T x \le k$, can be taken into account by adding the constraint $c^T x \le k$ (i.e., $-c^T x \ge -k$) to the system $Ax \ge b$.

To show that ILP-r is NP-complete, we need to (i) verify that ILP-r is in NP, and (ii) show that an NP-complete problem can be reduced to ILP-r in polynomial time. We will use the SAT problem (boolean satisfiability).

SAT: Given $m$ boolean clauses $C_1, \dots, C_m$ (disjunctions of the $2n$ literals $y_1, \dots, y_n, \bar{y}_1, \dots, \bar{y}_n$), is there a truth assignment which satisfies all the clauses?

(i) ILP-r is in NP since (a) it is a recognition problem and (b) given a vector $x \in \{0,1\}^n$, we can verify in polynomial time whether it satisfies the system $Ax \ge b$.

(ii) As to the reduction, it suffices, given any SAT instance with variables $y_1, \dots, y_n$ and clauses $C_1, \dots, C_m$, to construct an instance of ILP-r with binary variables $x_1, \dots, x_n$ and $m$ linear inequality constraints, such that SAT has answer yes if and only if ILP-r has answer yes. We proceed as follows, introducing one linear constraint per clause. If the $k$-th literal of clause $C_j$, $1 \le j \le m$, is $y_i$, the $k$-th term of the $j$-th constraint is $x_i$; if the $k$-th literal is $\bar{y}_i$, the $k$-th term is $(1 - x_i)$. The logical disjunction operator is replaced by addition, and the resulting sum is required to be at least 1. For example, given the SAT instance

$C_1 = (y_1 \vee y_2 \vee y_3)$, $C_2 = (\bar{y}_1 \vee \bar{y}_2)$, $C_3 = (y_2 \vee \bar{y}_3)$,

we construct the ILP-r instance

$x_1 + x_2 + x_3 \ge 1$
$(1 - x_1) + (1 - x_2) \ge 1$
$x_2 + (1 - x_3) \ge 1$
$x_1, x_2, x_3 \in \{0,1\}$.

Given a solution to ILP-r, we construct the corresponding solution to SAT by setting to true all the variables whose value is 1 and to false all those whose value is 0.
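The clause-to-constraint translation can be written as a short routine. The following Python sketch is purely illustrative (the signed-integer encoding of literals and the function name are assumptions made here): each clause becomes a constraint given by variable coefficients and a right-hand side, with every term $(1 - x_i)$ contributing $-x_i$ on the left and its constant 1 moved to the right.

    def sat_to_ilp(clauses):
        """Translate SAT clauses into ILP-r constraints sum(coeff_i * x_i) >= rhs.
        A clause is a list of literals: +i stands for y_i, -i for its negation."""
        constraints = []
        for clause in clauses:
            coeffs, rhs = {}, 1
            for lit in clause:
                i = abs(lit)
                if lit > 0:
                    coeffs[i] = coeffs.get(i, 0) + 1   # term x_i
                else:
                    coeffs[i] = coeffs.get(i, 0) - 1   # term (1 - x_i) ...
                    rhs -= 1                           # ... with the +1 moved right
            constraints.append((coeffs, rhs))
        return constraints

    # The example above: (y1 v y2 v y3), (~y1 v ~y2), (y2 v ~y3).
    print(sat_to_ilp([[1, 2, 3], [-1, -2], [2, -3]]))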

3.5 Complexity and size of the formulation

Consider the graph $G = (V, E)$, with costs $c_{ij}$ on the edges $\{i,j\} \in E$. Associate with each edge $\{i,j\}$ a variable $x_{ij}$ such that $x_{ij} = 1$ if the edge is in the tree and $x_{ij} = 0$ otherwise. The model is:

Variables: $x_{ij} \in \{0,1\}$ for all $\{i,j\} \in E$.

Formulation:

$\min \sum_{\{i,j\} \in E} c_{ij} x_{ij}$

s.t. $\sum_{\{i,j\} \in E} x_{ij} = |V| - 1$   (cardinality)

$\sum_{\{i,j\} \in E:\, i,j \in S} x_{ij} \le |S| - 1$ for all $S \subset V$ with $2 \le |S| < |V|$   (subtour elimination)

We have one subtour elimination constraint for each proper subset $S$ of $V$ of cardinality at least 2. Indeed, for $S = V$ the corresponding constraint is implied by the cardinality constraint. As an exercise, discuss why the constraints need not be imposed for $S = \{i\}$, $i \in V$.

Observe that an instance of the problem, stated as an ILP, has exponentially many constraints with respect to $n = |V|$, since there are exactly $2^n - n - 2$ subtour elimination constraints. Any polynomial algorithm for solving ILP problems (supposing one exists!) would therefore not run in time polynomial in the size of the graph, as it would be polynomial in the size of the formulation, which is exponential in that of the instance.
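To see how quickly the formulation grows, the following minimal Python sketch (purely illustrative; the function names are assumptions made here) counts and enumerates the subsets $S$ that generate subtour elimination constraints.

    from itertools import combinations

    def subtour_constraint_count(n):
        """Number of subtour elimination constraints for |V| = n:
        all subsets S with 2 <= |S| < n, i.e. 2**n - n - 2."""
        return 2 ** n - n - 2

    def subtour_subsets(nodes):
        """Enumerate the subsets S appearing in the subtour elimination
        constraints; their number grows exponentially with |V|."""
        n = len(nodes)
        for size in range(2, n):
            yield from combinations(nodes, size)

    # For |V| = 10 there are already 2^10 - 10 - 2 = 1012 such constraints.
    print(subtour_constraint_count(10), sum(1 for _ in subtour_subsets(range(10))))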