
SOLVING A TRUCK DISPATCHING SCHEDULING PROBLEM USING BRANCH-AND-CUT

ROBERT E. BIXBY, Rice University, Houston, Texas
EVA K. LEE, Georgia Institute of Technology, Atlanta, Georgia

(Received September 1994; revisions received September 1995, April 1996; accepted April 1996)

A branch-and-cut IP solver is developed for a class of structured 0/1 integer programs arising from a truck dispatching scheduling problem. This problem involves a special class of knapsack equality constraints. Families of facets for the polytopes associated with individual knapsack constraints are identified. In addition, a notion of conflict graph is utilized to obtain an approximating node-packing polytope for the convex hull of all 0/1 solutions. The branch-and-cut solver generates cuts based on both the knapsack equality constraints and the approximating node-packing polytope, and incorporates these cuts into a tree-search algorithm that uses problem reformulation and linear programming-based heuristics at each node in the search tree to assist in the solution process. Numerical experiments are performed on large-scale real instances supplied by Texaco Trading & Transportation, Inc. The optimal schedules correspond to cost savings for the company and greater job satisfaction for drivers due to more balanced work schedules and income distribution.

In this paper we present a branch-and-cut algorithm for a structured 0/1 integer programming problem referred to as the truck dispatching scheduling (TDS) problem. The TDS problem is a vehicle routing and scheduling problem that arose from an oil company's desire to better utilize regional truck fleets and drivers. Many studies on different variations of vehicle routing and scheduling have appeared in the literature (Bodin 1990, Golden and Assad 1988). In most of these, the emphasis has been on developing fast heuristic algorithms to find good feasible solutions (Bodin 1990, p. 575).
In contrast, the branch-and-cut algorithm developed herein finds optimal solutions. A classical approach for solving integer programming problems is branch-and-bound. However, for large-scale problems, such an approach often fails due to excessive memory usage. In 1983 Crowder et al. incorporated preprocessing and cut generation prior to initiating the branch-and-bound procedure, resulting in remarkable solution times for some difficult instances (some of which were once thought to be intractable). By the end of the 1980s, the term branch-and-cut was being used to describe methods that utilize cutting-plane techniques not only at the root node, but at nodes within the search tree itself. In these methods, cuts generated at a given node are valid globally across the entire search tree.

The branch-and-cut approach has been responsible for solving important hard combinatorial optimization problems. In particular, large-scale instances of the traveling salesman problem and the airline crew scheduling problem have been solved to proven optimality in reasonably short amounts of time (Applegate et al. 1994; Hoffman and Padberg 1991, 1993; Padberg and Rinaldi 1989, 1991). Certainly, an important aspect of a successful application of branch-and-cut is the ability to generate deep cuts in a computationally efficient way. This ability is often enhanced by knowledge of the facet structure of the convex hull of 0/1 solutions, or of certain approximating polytopes to this convex hull. Thus, an important component of this research involved a study of the polyhedral structure of the TDS problem.

Polyhedral results pertaining to the TDS problem are summarized in Section 2. First, we consider approximating knapsack equality polytopes associated with individual constraints of the TDS model. These knapsack equality polytopes are of course intimately related to their inequality counterparts.
We consider various families of facet defining inequalities for the associated knapsack inequality polytope and state conditions under which these inequalities also define facets of the equality polytope. We also consider an approximating polytope derived from the entire system of constraints. In particular, by allowing a relaxation of the equality constraints and defining the notion of a conflict graph, a node packing polytope is obtained, from which clique and odd cycle inequalities are generated.

The branch-and-cut solver is described in Section 3. The individual components include a problem preprocessor, a heuristic, a cut generator, and an LP solver. These components have become the standard components for successful branch-and-cut solvers in recent years (e.g., see Hoffman and Padberg 1991, 1993; Padberg and Rinaldi 1989). However, the specific features one should include in each component and the best way of interfacing the components with one another are still largely experimental. With this in mind we designed the code so that individual features/components can be switched on and off, allowing great flexibility in testing the effectiveness of different solution strategies. Also note that most branch-and-cut solvers are problem specific and exploit the underlying problem structure whenever possible. In this same spirit our solver is specially tailored to solve the TDS problem, though of course, many of the techniques and much of the code are applicable to general 0/1 problems.

Results of numerical experiments on fourteen large-scale real instances supplied by Texaco Trading & Transportation are summarized in Section 4. The results indicate that in combination, the system's components yield an effective IP solver for this class of problems, and that cut generation plays a major role in reducing the time to establish optimality on some of the more difficult problem instances. We now turn to Section 1, where we describe the TDS problem and its 0/1 integer programming formulation.

Subject classifications: Programming, integer, algorithms, cutting plane/facet generation: branch-and-cut algorithm for a structured 0/1 problem. Transportation, vehicle routing: dispatching of trucks for an oil company.
Area of review: OPTIMIZATION.
Operations Research 0030-364X/98/4603-0355 $05.00, Vol. 46, No. 3, May-June 1998, (c) 1998 INFORMS

1. PROBLEM DESCRIPTION AND FORMULATION

An oil company has dispatchers in each of 17 regions in the United States. Each dispatcher is responsible for assigning itineraries to drivers each day. The set of itineraries selected must enable the pickup of crude products at designated locations for delivery to pipeline entry points. Each truck carries a tank of the same volume. When picking up product, the entire tank is filled, and this is called a load. Products are picked up and delivered as loads. Thus, each itinerary specifies an alternating sequence of pickup/delivery actions. When there is more than one load of crude product at a given pickup point, multiple pickups at that point result.
Thus, associated with each itinerary are loads, pickup points, delivery points, deadhead miles (i.e., miles driven with an empty truck), and total hours. Itineraries are generated for each driver or group of drivers (called a driver set) by an automatic process prior to the 0/1 integer programming problem formulation. Drivers are grouped in the same driver set provided they have identical driver parameters, including home base location, hours available, starting time, breathing equipment certification, etc. The automatic process will generate a large number of possible itineraries for each driver set according to the parameters of the driver set, as well as the parameters associated with pickup points, delivery points, and Department of Transportation (DOT) regulations (e.g., open/close times for pickup and delivery points, and the maximum on-duty time allowed by DOT). The initial generation process may generate two or more itineraries that haul the same loads, but in a different order. In this case, preliminary elimination is done to select only the itinerary with the smallest cost. (The determination of the cost of an itinerary is discussed below.)

The primary objectives of the TDS problem are to minimize deadhead miles and equalize the loaded drive-time of the drivers. By minimizing deadhead miles, it is expected that trucks and drivers will be on the road fewer hours each day, thereby lowering fuel and maintenance costs, and improving safety for the drivers. Also, since driver payroll is based on the amount of time a driver is hauling product, by equalizing loaded drive-time, overall driver satisfaction should be increased. To reflect both objectives, each generated itinerary is assigned a weight that incorporates the associated number of deadhead miles and a penalty for loaded drive-time being above or below a prespecified target level.
The company anticipates that even a five-percent reduction in deadhead miles could result in significant savings. In addition, obtaining optimal dispatches via solving the integer programs allows for more consistency in dispatch creation across different locations, lowers the training curve for dispatchers, and enables easier substitution and relocation of dispatchers.

There are two types of constraints in the 0/1 integer programming formulation of the TDS problem. One type arises from the number of times each pickup point must be visited, and the second type arises from the number of drivers in each driver set. The parameters associated with the formulation are as follows:

  n       number of itineraries
  c_j     weight associated with the jth itinerary
  p       number of pickup points
  b_i     number of times pickup point i must be visited
  a_{ij}  number of times pickup point i is visited according to itinerary j
  q       number of driver sets
  g_k     number of drivers in the kth driver set
  n_k     number of itineraries generated for the kth driver set

In matrix form, the problem can be stated as

  minimize    c^T x                               (1)
  subject to  Ax = b   (pickup point constraints),
              Hx = g   (driver set constraints),
              x \in B^n,

where c = (c_1, ..., c_n)^T \in R^n, c >= 0; A = [a_{ij}] is a p x n nonnegative integral matrix; b = (b_1, ..., b_p)^T \in Z^p, b > 0; g = (g_1, ..., g_q)^T \in Z^q, g > 0; B = {0, 1}; and H is the q x n block-diagonal matrix

  H = diag(1_{n_1}^T, 1_{n_2}^T, ..., 1_{n_q}^T),

where 1_{n_k} is the vector in R^{n_k} in which every entry is 1, and n_1 + n_2 + ... + n_q = n. Furthermore, the ith pickup point constraint a_i^T x = b_i, 1 <= i <= p, is of the form

  \sum_{j \in R_{i1}} x_j + 2 \sum_{j \in R_{i2}} x_j + \cdots + b_i \sum_{j \in R_{i b_i}} x_j = b_i,   (2)

where each b_i is a small positive integer and the R_{im}, 1 <= m <= b_i, are disjoint subsets of {1, ..., n}.

Note that, by itself, a column of the matrix A does not capture all the information about the associated itinerary. Indeed, it only reflects how many times each pickup point is to be visited. A separate database specifies detailed information about the itinerary, and once an optimal solution to the IP is obtained, the solution is passed to a translation program to decipher. Once deciphered, estimated arrival and departure times for each location are compared across the dispatch to look for situations where too many drivers are scheduled to be at a given location at the same time. Adjustments are made as necessary and the final dispatch is generated. For the given test instances, we remark that no adjustments were actually needed; the optimal solution returned by the integer programming solver satisfied the time-space requirements.

2. POLYHEDRAL THEORY FOR THE TDS PROBLEM

2.1. Preliminaries

In this section we give an overview of the cutting aspect of a branch-and-cut approach for solving an integer programming problem. Let P_{IP} denote the convex hull of integer solutions of problem (1), let P_{LP} denote the polytope associated with the linear programming relaxation, and let x* be a solution of min{c^T x : x \in P_{LP}}. If x* is integral, then we are done. If not, one attempts to generate one or more valid inequalities \pi_i^T x <= \pi_{i0} for P_{IP} such that \pi_i^T x* > \pi_{i0}. One then adds these inequalities to the linear programming relaxation and obtains an updated solution x* of the linear program min{c^T x : x \in P_{LP}, \pi_i^T x <= \pi_{i0}}. Again, if x* is integral, we are done. If not, the process is repeated, saving only those valid inequalities generated in previous iterations that have been active (binding) at x* within the last few iterations.
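The loop just described can be sketched in a few lines. The sketch below is illustrative only (plain Python, not the authors' code); `solve_lp` and `separate` are hypothetical placeholders for an LP solver and a separation routine, and the toy stubs merely mimic one round of cutting with a clique inequality of the kind derived in Example 1.

```python
# Sketch of the cutting-plane loop from Section 2.1.  Illustrative
# skeleton only; `solve_lp` and `separate` are hypothetical callables
# supplied by the caller.

def cutting_plane_loop(solve_lp, separate, max_rounds=50):
    cuts = []                   # pool of valid inequalities (pi, pi0)
    x = solve_lp(cuts)          # solve the initial LP relaxation
    for _ in range(max_rounds):
        violated = separate(x)  # cuts with pi . x > pi0
        if not violated:        # x integral, or no cut found
            break
        cuts.extend(violated)   # cuts remain valid across the tree
        x = solve_lp(cuts)      # re-solve with the cuts appended
    return x, cuts

# Toy stubs: the "LP" returns a fractional point until a clique cut
# (x1 + x2 + x3 + x5 <= 1, as in Example 1) has been added.
def toy_lp(cuts):
    return [1, 0, 0, 0, 0] if cuts else [0.5, 0.5, 0, 0, 0.5]

def toy_separate(x):
    pi, pi0 = [1, 1, 1, 0, 1], 1
    return [(pi, pi0)] if sum(p * v for p, v in zip(pi, x)) > pi0 else []

x, cuts = cutting_plane_loop(toy_lp, toy_separate)
```

One round suffices here: the fractional point violates the clique cut, the cut is added, and the re-solved "LP" returns an integral point.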
An effective way of generating valid inequalities that cut off the current fractional solution is to generate facet defining inequalities for various approximating polytopes of P_{IP}. In this work we make use of approximating knapsack equality polytopes associated with individual constraints of the TDS problem, and an approximating node packing polytope associated with a graph, called the conflict graph, obtained by considering all the constraints simultaneously.

2.2. Facets of Approximating Polytopes

Let P_i denote the approximating polytope

  P_i = conv{x \in B^n : a_i^T x = b_i},

where a_i^T x = b_i is the ith pickup point constraint in the system Ax = b. Since a_i^T x = b_i is of the form (2), we need only consider the general knapsack equality polytope

  S_b := conv{x \in B^n : \sum_{j \in R_1} x_j + 2 \sum_{j \in R_2} x_j + \cdots + b \sum_{j \in R_b} x_j = b}.   (3)

Clearly, the inequalities x_j >= 0 and x_j <= 1 determine all of the facets of S_1. Hence, the polyhedral structure of S_b is only of interest when b >= 2. For simplicity of exposition, in what follows we assume that \bigcup_{i=1}^{b} R_i = N, where N := {1, ..., n}. It is straightforward to extend the results to the general case. We also assume |R_1| >= b + 1, which is realistic for the TDS model since every pickup point constraint of the given collection of problem instances satisfies this condition. Together, these two assumptions imply that S_b has dimension n - 1.

In Theorem 1 we record

  \sum_{k=1}^{b-1} \binom{|R_1|}{k} + \sum_{k=2}^{\lfloor b/2 \rfloor} |R_k| + 1

distinct facets of S_b, b >= 2. For each facet we list one representative inequality from the family of valid inequalities defining it. These representative inequalities are in fact facet defining for the inequality counterpart of S_b (that is, the full-dimensional polytope obtained by replacing the equality sign in (3) with less than or equal to), and could be obtained via lifting of certain minimal covers or (1, k)-configurations. Proofs that these inequalities are indeed facet defining for S_b are given in Lee 1993a, 1993b, and 1994.
(Also, see Weismantel 1994 for related results.) What is important for the application at hand is that each representative inequality has most (in some cases, all) of the coefficients associated with variables with indices in R_1 equal to zero. For the problem instances motivating this work, a typical constraint has |R_1| much greater than |R_i| for i = 2, ..., b; consequently, this representative inequality is the sparsest among all inequalities in the family. Since maintaining sparsity in the augmented constraint matrix is desirable when implementing a branch-and-cut algorithm, these representative inequalities are the ones actually used in the cut generation phase of the algorithm described in Section 3. In the statement of the theorem we adopt the convention that a summation or union is vacuous if the lower limit is greater than the upper limit.

Theorem 1. Let b >= 2, and let S_b be defined as in (3), where \bigcup_{i=1}^{b} R_i = N and |R_1| >= b + 1.

(a) Let k \in {1, 2, ..., b-1}, let T \subseteq R_1 with |T| = k, and assume \bigcup_{i=b-k+1}^{b} R_i \neq \emptyset. Then the inequality

  \sum_{j \in T} x_j + \sum_{i=b-k+1}^{b} (k - b + i) \sum_{j \in R_i} x_j <= k   (4)

defines a facet of S_b.

(b) Let i_0 \in {1, ..., \lfloor b/2 \rfloor}, and let j_0 \in R_{i_0}. Then the inequality

  x_{j_0} + \sum_{i=b-i_0+1}^{b} \sum_{j \in R_i} x_j <= 1   (5)

defines a facet of S_b.

(c) Assume \bigcup_{i=\lfloor b/2 \rfloor + 1}^{b-1} R_i \neq \emptyset, let i_0 = min{i \in {\lfloor b/2 \rfloor + 1, ..., b-1} : R_i \neq \emptyset}, and suppose \bigcup_{i=b-i_0+1}^{b} R_i \neq \emptyset. Then the inequality

  \sum_{i=i_0}^{b} \sum_{j \in R_i} x_j <= 1   (6)

defines a facet of S_b.

Note that if j_0 \in R_1, then the associated facet identified in part (b) is identical to the facet associated with the set T = {j_0} identified in part (a).

We next consider an approximating polytope derived by considering all the constraints simultaneously. For convenience of notation, let \bar{A} denote the matrix obtained by stacking A on top of H, and let \hat{b} = (b; g) denote the correspondingly stacked right-hand side, where A, H, b, and g are defined as in Section 1. Then the TDS polytope P_{IP} can be expressed as P_{IP} = conv{x \in B^n : \bar{A}x = \hat{b}}. Now consider the relaxation

  \bar{P} = conv{x \in B^n : \bar{A}x <= \hat{b}}

of P_{IP}. Although several classes of valid/facet defining inequalities for \bar{P} are known (Nemhauser and Wolsey 1988, p. 237; Ferreira et al. 1993), the polyhedral structure is less well understood than that of a single knapsack inequality polytope (Nemhauser and Wolsey 1988, p. 290). One important class of valid inequalities for \bar{P} is the minimal cover inequalities:

  \sum_{i \in C} x_i <= |C| - 1,

where C \subseteq N is a minimal set satisfying \sum_{i \in C} \bar{a}_i \nleq \hat{b}, where \bar{a}_i denotes the ith column of \bar{A}. Since for the given TDS problem instances no pickup point requires more than three visits per day, the components of the right-hand-side vector b are either 1, 2, or 3. Consequently, one can readily find many minimal covers C of cardinality 2, and moreover, minimal covers of cardinality 3 or greater occur infrequently. With this in mind we now define the conflict graph, which allows us to approximate \bar{P} by a node packing polytope. This idea extends that suggested by Johnson and Nemhauser (1991), who proposed obtaining clique inequalities based on a single knapsack constraint. It can also be viewed as a natural extension of the intersection graph associated with a pure 0/1 integer matrix.

Definition 1.
The conflict graph associated with the system \bar{A}x <= \hat{b}, x \in B^n, is the graph G_C = (N, E), where N = {1, 2, ..., n} and E = {jk : j, k \in N, j \neq k, and \bar{a}_j + \bar{a}_k \nleq \hat{b}}, where \bar{a}_j and \bar{a}_k denote distinct columns of \bar{A}.

Clearly, if jk \in E, then x_j + x_k <= 1 is a valid inequality for P_{IP}. Hence, if A_C denotes the edge-node incidence matrix of G_C, then the node packing polytope

  P_C = conv{x \in B^n : A_C x <= 1}

satisfies \bar{P} \subseteq P_C. Consequently, P_{IP} \subseteq P_C, and one can generate valid inequalities for P_{IP} by applying results from the theory of node packing polytopes (Chvátal 1975, Padberg 1973, Trotter 1975, Wolsey 1976b). The following example illustrates that cuts generated via the conflict graph are, in general, tighter than cuts obtained using only one constraint at a time.

Example 1. Let P_{IP} = conv{x \in B^5 : Ax = b}, where

  A = [ 1 1 1 0 0 ]        [ 1 ]
      [ 0 1 1 0 1 ]  , b = [ 1 ] .
      [ 1 1 0 2 2 ]        [ 2 ]

The conflict graph yields the clique inequalities x_1 + x_2 + x_3 + x_5 <= 1 and x_1 + x_2 + x_4 + x_5 <= 1, which are violated by the fractional solution x = [1/2, 1/2, 0, 0, 1/2]^T. However, no inequality generated from a single row of the system Ax = b is violated by this solution.

3. THE BRANCH-AND-CUT IP SOLVER

3.1. Overview

Recent implementations of branch-and-cut algorithms (e.g., see Hoffman and Padberg 1991, 1993; Padberg and Rinaldi 1991) have indicated that in addition to cut generation and clever branching strategies, other computational devices, such as preprocessing and heuristics, also play an important role in the solution process. However, the details of a good implementation, including exactly which techniques to incorporate, how to implement these techniques, and how the components of a branch-and-cut solver should interface with one another, are still largely experimental. Moreover, depending on the application in mind, certain techniques in a given implementation may be tailored for the specific structure of the problem, while other techniques are of a general nature and could be effective on a wide variety of problems.
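Example 1 can be checked mechanically. The short Python sketch below (illustrative only, not part of the paper) builds the conflict graph of Definition 1 for the matrix of Example 1, confirms that {1, 2, 3, 5} and {1, 2, 4, 5} are cliques, and evaluates the fractional point against one clique inequality.

```python
# Verifying Example 1: build the conflict graph for A, b and check the
# clique inequalities.  Plain-Python sketch, no solver required.
from itertools import combinations

A = [[1, 1, 1, 0, 0],
     [0, 1, 1, 0, 1],
     [1, 1, 0, 2, 2]]
b = [1, 1, 2]
n = 5

def column(j):
    return [row[j] for row in A]

# Edge jk exists iff x_j and x_k cannot both be 1: a_j + a_k </= b.
edges = set()
for j, k in combinations(range(n), 2):
    if any(cj + ck > bi for cj, ck, bi in zip(column(j), column(k), b)):
        edges.add((j, k))

def is_clique(nodes):
    return all(tuple(sorted(p)) in edges for p in combinations(nodes, 2))

x = [0.5, 0.5, 0, 0, 0.5]   # fractional LP solution from Example 1
clique = [0, 1, 2, 4]       # 0-based indices of {1, 2, 3, 5}
print(is_clique(clique), sum(x[j] for j in clique))  # True 1.5
```

The only missing edge is the pair {3, 4}, which is exactly why neither clique contains both x_3 and x_4; the clique sum 1.5 exceeds 1, so the inequality cuts off the fractional point.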
The Branch-and-Cut IP Solver described herein is designed to solve to optimality the real instances of the TDS problem, and consequently, it exploits the TDS problem structure when it is advantageous to do so. On the other hand, many of the techniques are applicable more generally; some are extensions of techniques described in Hoffman and Padberg (1993), and some are based on the polyhedral results in Section 2.

The first task of the system is to preprocess the user-supplied linear programming formulation. Preprocessing reformulates and tightens the constraint matrix by removing redundant rows and columns, and fixing variables where possible. The preprocessing techniques are used throughout the entire branch-and-cut tree to reduce the size of subsequent linear programs and eliminate redundancy among rows and columns.

After initial preprocessing, we solve the linear programming relaxation and obtain a lower bound for the original problem. Next, the heuristic is called. If the heuristic succeeds, we then perform reduced-cost fixing using the reduced costs from the linear programming relaxation, the lower bound, and the upper bound. If enough variables are fixed, we eliminate columns that are fixed to zero and one, and preprocess the problem again. If the heuristic fails, we

either branch or proceed to the constraint generator as described below.

Once the reduced-cost fixing is done, the linear program is solved again. If the solution is not integral, the constraint generator is called. The particular polyhedral cuts that we generate are knapsack cuts (as described in Section 2.2), clique inequalities, and odd cycle inequalities. The latter two types of cuts are generated over a partial conflict graph. They are then tightened by lifting to include other nodes in the graph. The cuts obtained, which are valid across the entire tree, are then appended to the constraint matrix, and the linear program is solved again. We iterate through this process until one of the following occurs: (i) an integral solution is obtained, (ii) the linear program is infeasible, (iii) no more cuts are generated, (iv) the number of cuts generated exceeds a prespecified value, or (v) the objective value of the linear programs has not improved much.

If (i) occurs, the node is fathomed. The next node is chosen based on the best-estimate strategy; i.e., we select the node with the smallest LP value among all active nodes. (Note that the current LP value of a node serves only as an estimate, since the true value may be different due to the addition of cuts.) However, we move directly to this next node only if the integral objective value is greater than that of the current integral upper bound. Otherwise, we update the integral upper bound, return to the root node, resolve the linear program with the current cuts appended, and perform reduced-cost fixing. Only after all this is completed do we move to the node in the tree with the smallest LP value, at which point we reconstruct this node by resolving the associated linear program, again with the current cuts appended and the current variables fixed.

If (ii) occurs, the node is fathomed, and we continue our search with the remaining active nodes. If (iii), (iv), or (v) occurs, we perform the heuristic again.
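The reduced-cost fixing step used above rests on a standard bound argument: for a minimization LP with value z_LP and incumbent upper bound z_UB, a nonbasic variable whose reduced cost would push the objective past z_UB can be fixed permanently at its current bound. The sketch below is a minimal illustration with hypothetical helper names, not the paper's code.

```python
# Reduced-cost fixing sketch.  zlp: LP lower bound at the node;
# zub: incumbent integer upper bound; reduced_costs[j]: reduced cost
# of nonbasic variable j; at_lower[j]: True if x_j is nonbasic at 0.
# Raising a variable from 0 would cost at least its reduced cost, so
# zlp + d_j > zub forces x_j = 0; symmetrically for variables at 1.
def reduced_cost_fix(zlp, zub, reduced_costs, at_lower, eps=1e-9):
    fix_to_zero, fix_to_one = [], []
    for j, d in enumerate(reduced_costs):
        if at_lower[j] and zlp + d > zub + eps:
            fix_to_zero.append(j)          # x_j = 1 cannot beat incumbent
        elif not at_lower[j] and zlp - d > zub + eps:
            fix_to_one.append(j)           # x_j = 0 cannot beat incumbent
    return fix_to_zero, fix_to_one

z0, z1 = reduced_cost_fix(10.0, 12.0, [3.0, 1.0, -4.0],
                          [True, True, False])
```

In this toy call, the first variable is fixed to 0 (10 + 3 > 12) and the third to 1 (10 - (-4) > 12), while the second remains free.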
If the heuristic succeeds and produces a better upper bound, reduced-cost fixing is performed at the root. After this, or if the heuristic does not produce a better upper bound, we select a variable to branch on and create two new nodes. We experimented with branching on the smallest-indexed fractional variable and branching on the most infeasible fractional variable. The former selection worked better in the overall performance on the real instances, although in a few cases using the most infeasible fractional variable was superior (see Section 4).

Condition (iv) is used in our system as one of the criteria to move on, since the gap between the linear programming relaxation and the optimal integral solution in several of the test problems is very small (see Table I). As a result, it is not always easy to detect the tailing off of the procedure as in (v). We remark that a small gap does not imply that the integer programs are easy to solve. For example, problem p6000 in MIPLIB (Bixby et al. 1993) has a relative gap of only 0.007%, and yet it is very difficult to solve, as evidenced in Hoffman and Padberg 1991. A flow chart of the entire system is given in Figure 1.

3.2. Preprocessing and Reformulation

Preprocessing techniques fix variables permanently, remove redundant rows, and check for inconsistencies among the constraints (Crowder et al. 1983; Hoffman and Padberg 1991, 1993; Savelsbergh 1994; Suhl and Szymanski 1994). Some of the features in our preprocessor are frequently mentioned in the branch-and-cut literature, in particular the removal of duplicate, multiple, and redundant rows. In addition, we extend the row inclusion and clique fixing techniques for set partitioning problems (Hoffman and Padberg 1993) to our integral matrix, and incorporate conflict fixing based on the idea from Section 2. A brief description of these techniques is given below.

Row Inclusion/Multiple-Inclusion Fixing.
This procedure searches for rows that are properly contained in a multiple of another row, i.e., an appropriate multiple of one row is an inclusion of the other. To be precise, let J and K be the sets of column indices for the nonzero entries of two distinct rows, say rows 1 and 2, respectively, and suppose J \subseteq K. Thus, we have the constraints

  \sum_{j \in J} a_{1j} x_j = b_1,
  \sum_{j \in J} a_{2j} x_j + \sum_{j \in K \setminus J} a_{2j} x_j = b_2.

Furthermore, suppose a_{ij} >= 0 for all i and j, and suppose a_{1j} <= (b_1/b_2) a_{2j} for all j \in J. Then, subtracting row 1 from (b_1/b_2) times row 2 leaves a nonnegative combination of the variables equal to zero, so x_k = 0 for all k \in (K \setminus J) \cup {j \in J : a_{1j} < (b_1/b_2) a_{2j}}. In this case, row 2 can be removed and row 1 can be updated.

In the implementation, we first distinguish between constraints for pickup points and driver sets. Corresponding to each pickup point constraint, we record the number of distinct driver sets appearing in the row, and the total number of itineraries for each driver set. For those rows with itineraries from exactly one driver set, we perform the inclusion test for this row and the corresponding driver set constraint. Next we do a pairwise comparison of pickup rows. Assume the q driver sets are labeled 1, ..., q. If the labels appearing in one row of a pair also appear in the other, we check that the number of itineraries in each represented driver set of the former is less than or equal to that of the latter. If this is the case, we perform the inclusion test, checking along the smaller of the associated index sets. For example, assume only driver sets 2 and 3 are represented in row 1, and only driver sets 1, 2, 3, and 4 are represented in row 2. Let m_{ij} denote the number of nonzero coefficients associated with driver set j in row i. If m_{12} <= m_{22} and m_{13} <= m_{23}, then the inclusion test would be performed, with the candidate set J consisting of the m_{12} + m_{13} indices in row 1.

Creating a Clique Matrix. Let G_C be the conflict graph associated with the integral matrix \bar{A} (A stacked over H) and the vector \hat{b} = (b; g).
Choose a row i with right-hand side 1, and let M_i = { j ∈ N : a_ij = 1 }. If there exists a complete subgraph K of G_C such that M_i ⊊ K, then since Σ_{j∈K} x_j ≤ 1 and Σ_{j∈M_i} x_j = 1, it follows that x_j = 0 for all j ∈ K \ M_i. To implement this, we select a node v ∈ adj(M_i), the set of nodes in N \ M_i that are adjacent to at least one node in M_i. If degree(v) < |M_i|, discard v. Otherwise, scan through the adjacency list of v. If it does not contain all the nodes in M_i, then discard v. Otherwise, v is part of a complete subgraph containing M_i, so we set x_v = 0. Repeat the above process for each distinct v ∈ adj(M_i).

Subsequent Conflict-Fixing of Variables.

If some variable x_k has been fixed to one, then conflict-fixing will in turn fix variable x_j to zero if a_k + a_j exceeds b̂ in some component.

Figure 1. Branch-and-cut flow chart.

3.3. The Heuristic

The heuristic provides a means of obtaining integer feasible solutions quickly and is used repeatedly within the branch-and-cut search tree. A good heuristic, that is, one that produces good integer feasible solutions, is a crucial component of the branch-and-cut algorithm, since it provides an upper bound for reduced-cost fixing at the root and thus allows a reduction in the size of the linear program that must be solved. This in turn may reduce the time required to solve subsequent linear programs at nodes within the search tree. In addition, a good upper bound enables us to fathom more active nodes, which is extremely important in solving large-scale integer programs, as they tend to create many active nodes, leading to memory explosion. Our heuristic works by sequentially fixing variables and solving the corresponding linear programs. It incorporates logical and conflict fixing based on the TDS problem structure. Fine tuning and selection of the various parameters within the heuristic were based on extensive numerical experiments using the given TDS problem instances. The
heuristic algorithm terminates when either an integer feasible solution is obtained or the linear program is infeasible. If it returns an integer feasible solution with a lower objective value than the upper bound of the integer program, the upper bound is updated and we move once again to the reduced-cost fixing routine. Otherwise, we turn to cut generation.

At a node of the branch-and-bound tree, the heuristic first solves the LP relaxation associated with that node. If a fractional solution is obtained and an upper bound for the integer program exists, the heuristic then performs reduced-cost fixing to fix nonbasic variables at their corresponding lower and upper bounds. In the latter case, for the rows covered by these variables, implied fixing is performed and the appropriate rows are removed. (A row is said to be covered by the variables set equal to one if the sum of the associated coefficients equals the right-hand side.) For other rows containing these variables but not covered by them, we perform conflict fixing of the variables. After removing all variables fixed to zero, the linear program is solved again. If there are still fractional values in the optimal solution, variables (both basic and nonbasic) at value one are set to one. Then, for specified parameters α, β, and tol, the heuristic computes x_max = max{ x_j : tol ≤ x_j ≤ 1 − tol } and x_min = min{ x_j : tol ≤ x_j ≤ 1 − tol }, and sets x_j = 1 for x_j ≥ x_max − α·tol, and x_j = 0 for x_j ≤ x_min + β·tol. After extensive numerical tests, we set tol = 0.001, α = 20, and β = 15. Special care is taken to ensure that no variable is set to one and then subsequently set to zero when x_max − α·tol ≤ x_j ≤ x_min + β·tol. In addition, when all the fractional variables fall outside the range tol ≤ x_j ≤ 1 − tol, the code automatically resets x_max to 1 − tol and x_min to tol. After these additional variables are fixed, the LP solver is called again, and the process continues.
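One pass of the thresholding step can be sketched as follows. This is a minimal sketch, not the authors' code: the names ALPHA and BETA stand in for the two scale parameters set to 20 and 15 above, and the dict-based interface is our own.

```python
TOL, ALPHA, BETA = 0.001, 20, 15  # parameter values chosen in the text

def round_and_fix(x):
    """One fixing pass of the LP-rounding heuristic (sketch).

    x : dict mapping variable index -> LP value.
    Returns a dict of newly fixed variables {index: 0 or 1}.
    """
    # variables at value one (within tolerance) are set to one
    fixed = {j: 1 for j, v in x.items() if v >= 1 - TOL}
    frac = [v for v in x.values() if TOL <= v <= 1 - TOL]
    if frac:
        x_max, x_min = max(frac), min(frac)
    else:
        # no fractional values in range: fall back to default thresholds
        x_max, x_min = 1 - TOL, TOL
    hi = x_max - ALPHA * TOL  # fix to one at or above this value
    lo = x_min + BETA * TOL   # fix to zero at or below this value
    for j, v in x.items():
        if j in fixed:
            continue
        if v >= hi:
            fixed[j] = 1      # round-up test first, so a variable in the
        elif v <= lo:         # overlap region is never set to 1 then to 0
            fixed[j] = 0
    return fixed
```

After the variables returned here are fixed, the LP would be re-solved and the pass repeated, as described above.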
We experimented with the frequency of heuristic calls on our test problems and found that applying the heuristic at every node of the search tree is ineffective in improving the solution time. Indeed, on average, 20 linear programs are solved per heuristic call, and the solution time for each of these linear programs ranges from 0.2 to 1.9 seconds. As a result, applying the heuristic at each node tends to be computationally expensive. Based on our experiments, the present code is set to call the heuristic as follows: at the root node, the heuristic is called before the first cut generation and also before branching begins; and within the branch-and-bound tree, it is invoked whenever the node is at a depth that is a multiple of 8 from the root.

3.4. Continuous Reduced Cost Implications

Reduced-cost fixing is a well-known and important idea in the integer programming literature. In a pure zero-one linear programming minimization problem, given the continuous optimal objective value z_lp, the continuous optimal reduced costs d_j, and a true upper bound z on the zero-one optimal objective function value, the following hold for the nonbasic variables x_j in the continuous optimal solution:

(a) If x_j = 0 in the continuous solution and z_lp + d_j ≥ z, then x_j = 0 in every zero-one solution with objective value less than z.

(b) If x_j = 1 in the continuous solution and z_lp − d_j ≥ z, then x_j = 1 in every zero-one solution with objective value less than z.

In our branch-and-cut implementation, before branching, global reduced-cost fixing (that is, permanently fixing variables at their respective upper and lower bounds) is performed at the root node whenever an improved integral upper bound is obtained, or whenever the objective value of the linear programming relaxation at the root node improves (due to the addition of cuts). Local fixing is performed at each node of the search tree using the local LP value and the current best upper bound.
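In code, the two tests amount to a pair of comparisons per nonbasic variable. The following is a sketch under our own data layout (d_j is the signed reduced cost, nonpositive for a variable nonbasic at its upper bound):

```python
def reduced_cost_fix(z_lp, z_ub, x_lp, d, nonbasic):
    """Reduced-cost fixing for a 0/1 minimization problem (sketch).

    z_lp     : optimal value of the LP relaxation
    z_ub     : a true upper bound on the 0/1 optimal value
    x_lp     : LP values of the variables
    d        : reduced costs (d[j] <= 0 if x_j is nonbasic at 1)
    nonbasic : indices of nonbasic variables
    Returns {j: value} for every variable that can be fixed.
    """
    fixed = {}
    for j in nonbasic:
        if x_lp[j] == 0.0 and z_lp + d[j] >= z_ub:
            fixed[j] = 0   # raising x_j to 1 reaches z_ub or worse
        elif x_lp[j] == 1.0 and z_lp - d[j] >= z_ub:
            fixed[j] = 1   # lowering x_j to 0 reaches z_ub or worse
    return fixed
```

At the root, fixings found this way are permanent; at a node, they are valid only for the subtree, matching the global/local distinction above.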
On those nodes where the heuristic is called, local fixing is performed both before and after the heuristic call.

3.5. Constraint Generation

Let x be the fractional solution obtained at a given node in the tree, and let L = { j ∈ N : x_j = 0 }, F = { j ∈ N : 0 < x_j < 1 }, and U = { j ∈ N : x_j = 1 }. Since we would like to identify a valid inequality π^T x ≤ π_0 that is violated by x, we can ignore the set L in the identification phase. However, unlike the set packing problem, due to the non-0/1 entries in the constraint matrix and right-hand side of the TDS problem, it is advantageous not to ignore the variables indexed by U when generating valid inequalities based on the conflict graph. More specifically, for the set packing problem, since every neighbor k of a node in U in the intersection graph necessarily satisfies x_k = 0, one cannot obtain violated valid inequalities by focusing on graph structures (e.g., cliques and odd cycles) involving nodes in U (Hoffman and Padberg 1993, p. 675). The same is not necessarily true when considering the conflict graph associated with a problem with non-0/1 data. For example, consider again Example 1 from Section 2.2. The conflict graph yields the valid inequality x_1 + x_2 + x_4 + x_5 ≤ 1, which is violated by the fractional solution x = [0, 1, 0, 1/2, 0]^T.

Let us now focus on generating valid inequalities for the approximating polytope P_C defined in Section 2.2. Let G_{F∪U} = (F ∪ U, E_{F∪U}) denote the subgraph of G_C induced by the node set F ∪ U. Since the graph G_{F∪U} can be quite large, rather than computing the entire graph, in our implementation only subgraphs of G_{F∪U} are generated. The size of the generated subgraph is controlled by adjusting the tolerance used to check for nonzero values of the solution vector x. Our current tolerance is set to 0.001. Since one can think of the set F as being controlled by adjusting the tolerance, we will continue to use the notation G_{F∪U} to denote these subgraphs.

Clique Generation.
We generate violated clique inequalities of the subgraph G_{F∪U} using the following two procedures. Note that we preserve the graph G_{F∪U} and operate only on a copy when implementing each procedure.

(a) Select a vertex s with maximum degree and use it to initialize a clique set K. Throw away nodes not adjacent to s and update the adjacency lists of the remaining nodes. Continue to add nodes to K as follows. If v is the last vertex added to K, find a vertex u in the adjacency list of v that has maximum degree and is not in K, add it to K, and throw away all nodes that are not in the adjacency list of u, again updating the adjacency lists of the remaining nodes. If no such u exists, stop; a clique has been found, in which case we check whether the associated clique inequality is violated by the current fractional solution. This procedure is repeated on the subgraph of G_{F∪U} obtained by deleting the starting node s and its associated clique. We continue to repeat, each time deleting all previous starting nodes and associated clique nodes, until either 10 percent of the nodes in F ∪ U have been selected as starting nodes, or until the deletion process eliminates all of F ∪ U.

(b) First, for each row i, define the index set(s)

M_i = { j ∈ F ∪ U : a_ij = 1 }, if the right-hand side of row i equals 1;
M_ij0 = { j ∈ F ∪ U : a_ij = b } ∪ { j_0 }, if the right-hand side of row i equals b, where b ≥ 2, and j_0 ∈ F ∪ U with a_ij0 ∈ {1, 2, ..., b − 1}.

Clearly, M_i and M_ij0 are the node sets of complete subgraphs of G_{F∪U}. Enlarge such a subgraph by finding the set K ⊆ (F ∪ U) \ M_i (resp. K ⊆ (F ∪ U) \ M_ij0) such that each node in K is adjacent to every node in M_i (M_ij0). Then apply procedure (a) to identify one or several cliques in the subgraph induced by K. In the implementation, for each i, if the number of sets M_ij0 is less than or equal to 32 we perform this procedure for each such set. Otherwise, we perform it for either 20% of them or 32 of them (whichever is greater), where j_0 is chosen in decreasing order of the values of x_j0. (We remark that in the numerical experiments on the given problem instances, with our chosen tolerance, we never generated a partial conflict graph with more than 200 vertices. In other applications, it may be necessary to place an upper bound on the number of sets M_ij0 processed.) For each clique that is identified, we check whether the associated inequality is violated. If no violated inequality is found, we form a new set M_i (M_ij0) consisting of half the indices in M_i (M_ij0), selected in decreasing order of x_j. We then repeat the above procedure, still requiring K ⊆ (F ∪ U) \ M_i (K ⊆ (F ∪ U) \ M_ij0).

Chordless Odd Cycles.

It is well known that if K is an odd cycle without chords, then the odd cycle inequality Σ_{u∈K} x_u ≤ (|K| − 1)/2 is valid for P_C and defines a facet of the node packing polytope associated with the restriction of P_C to the node set K. An effective heuristic for obtaining an odd cycle K involves performing breadth-first search from a chosen node s. An odd cycle is obtained whenever a cross edge is encountered between two nodes, say u and v, on the same level of the breadth-first tree (Cormen et al. 1990). To recover the cycle, trace back along the paths to these two nodes until arriving at a common ancestor, say w (which could be different from s). Let P_u and P_v denote the node sets along the paths from w to u and from w to v, respectively, excluding w in both cases. Clearly P_u ∪ P_v ∪ {w} forms the node set of an odd cycle. To determine whether the odd cycle is chordless, we first consider successive nodes in P_u \ {u}. If a node is found whose adjacency list contains a node in P_v, stop; the cycle has a chord. If, on the other hand, the adjacency list of every node in P_u \ {u} is disjoint from P_v, we then determine whether node u is adjacent to a node in P_v \ {v}. If so, we again conclude that the cycle has a chord; if not, the cycle is chordless. (Note that the breadth-first search strategy precludes the existence of a chord involving the common ancestor w.) Once a chordless odd cycle is found, the associated inequality is checked to see if it is violated by the current fractional solution.
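The cycle-recovery step can be sketched as follows. This is a minimal sketch of the breadth-first search described above; the chordality test and the violation check are omitted, and the adjacency-set interface is our own.

```python
from collections import deque

def find_odd_cycle(adj, s):
    """BFS from s; return the node set of an odd cycle recovered from the
    first cross edge found within a level, or None if none is found.

    adj : dict mapping node -> set of neighbours (no self-loops).
    """
    parent, level = {s: None}, {s: 0}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in level:
                parent[w], level[w] = v, level[v] + 1
                queue.append(w)
            elif level[w] == level[v]:
                # cross edge within a level: trace both paths back in
                # lockstep until they meet at a common ancestor
                pu, pv = v, w
                path_u, path_v = [v], [w]
                while pu != pv:
                    pu, pv = parent[pu], parent[pv]
                    path_u.append(pu)
                    path_v.append(pv)
                # pu == pv is the common ancestor w of the text; the
                # union has odd cardinality 2k + 1
                return set(path_u) | set(path_v)
    return None
```

On a bipartite graph no same-level cross edge exists, so the function returns None, consistent with the fact that bipartite graphs have no odd cycles.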
If no violated odd cycle inequality is found, we perform lifting of exactly 10 nodes (chosen in decreasing order of the associated variable values) from the remaining nodes and check whether the final inequality is violated. This entire procedure is repeated using different starting root nodes: first selecting 10 percent of the nodes in F ∪ U in decreasing order of the associated variable values, and then selecting 10 percent from the remaining nodes using a pseudo-random selection process.

Facets for Individual Knapsack Equalities.

This procedure identifies facets for individual knapsack polytopes of the form S_b using the facet-defining inequalities obtained in Section 2.2. In the current implementation, this procedure is specifically tailored to the given TDS problem instances. Since in these instances the largest coefficient in any knapsack equality constraint is 3, the current system is set to search only for facet-defining inequalities associated with S_2 and S_3, namely

x_j0 + Σ_{j∈R_2} x_j ≤ 1,  j_0 ∈ R_1,   (7)
x_j0 + Σ_{j∈R_3} x_j ≤ 1,  j_0 ∈ R_1,   (8)
Σ_{j∈T} x_j + Σ_{j∈R_2} x_j + 2 Σ_{j∈R_3} x_j ≤ 2,  T ⊆ R_1, |T| = 2,   (9)
Σ_{j∈R_3} x_j ≤ 1,   (10)

where (7) is associated with S_2 and (8)–(10) are associated with S_3. Whether or not the inequalities generated by these methods yield deep cuts and/or high-dimensional faces of P_IP depends in large part on how closely the polytopes P_i and P_C approximate P_IP. Nevertheless, one can tighten the generated inequalities by lifting.

The Lifting Procedure.

Let π^T x ≤ π_0 be a valid inequality for P_IP, and let J = { j ∈ N : π_j ≠ 0 }. If y ∈ R^n, let y_J denote the vector in R^{|J|} with the components y_i, i ∈ N \ J, removed. Similarly, let Â_J denote the (p + q) × |J| matrix obtained by removing the columns indexed by i ∈ N \ J

Table I. Problem Characteristics

Problem     Cols.    Rows  Nonzeros   Z_LP      Z_IP      Gap
zb1all      27,768   47    114,138    2,319.80  2,319.80   0.00
zb1all.1    24,771   47    103,020    2,012.28  2,012.28   0.00
zb2all      24,172   49     98,607    2,630.37  2,639.40   9.03
zb2all.1    21,801   51     89,670    2,259.50  2,268.00   8.50
zb3all       5,712   37     22,411    1,456.00  1,456.00   0.00
zm1all       5,301   26     22,236    1,444.60  1,449.00   4.40
zm2all      11,706   28     54,216    1,284.95  1,285.29   0.34
zm2new       3,636   29     18,190    1,419.69  1,426.00   6.31
zm3all      10,849   30     44,916    1,305.24  1,319.26  14.02
zs1new       2,089   28      8,403    1,527.73  1,539.60  11.87
zs1short   115,544   26    666,323      889.00    889.00   0.00
zs2short    26,306   27    138,322      904.71    919.21  14.50
zs3short    11,273   25     59,509    1,048.70  1,071.25  22.55
zs3area      4,030   34     17,154    1,918.59  1,924.00   5.41

from the constraint matrix. For an index k ∈ N \ J, we determine a lifting coefficient π̂_k for x_k by first solving the lifting problem

z_k = max { π_J^T x_J : Â_J x_J ≤ b̂ − a_k, x_j ∈ {0, 1}, j ∈ J },

where a_k denotes the kth column of Â. Then π̂_k = π_0 − z_k. If it is positive, we update π_k ← π̂_k and J ← J ∪ {k}, and repeat the process. Note that the order in which the variables are lifted is important, in the sense that different lifting orders can potentially produce different valid inequalities (Padberg 1973). We estimate the lifting coefficients by solving the linear programming relaxation of the lifting problem. Another approach, which avoids solving a linear program, extends the idea described in Hoffman and Padberg (1993) for the set packing problem. The method is based on the approximation

max { π_J^T x_J : Â_J x_J ≤ b̂ − a_k, x_j ∈ {0, 1} for all j ∈ J } ≤ min { u^T (b̂ − a_k) : u^T Â_J ≥ π_J^T, u ≥ 0, u integral }.

In our implementation, we restrict the components of u to be binary and estimate its optimal objective value by a greedy algorithm. In general, the more variables that are lifted, the tighter the final cut. However, lifting can be computationally expensive, since the lifting problem itself is a 0/1 integer program.
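A greedy estimate of the dual value might be organized as follows. This is a sketch under our own assumptions: the text does not specify the selection rule, so we use a standard cost-effectiveness greedy over binary u, and the list-based data layout is ours.

```python
def greedy_lift_bound(A, pi, cost):
    """Greedy upper bound on the lifting subproblem value z_k via the
    dual min{ u^T (b_hat - a_k) : u^T A_J >= pi_J, u binary } (sketch).

    A    : rows of A_J restricted to the lifted index set J
    pi   : current inequality coefficients pi_j, j in J
    cost : row costs, i.e., the components of b_hat - a_k
    """
    m, n = len(A), len(pi)
    need = list(pi)                   # residual requirement per column
    used = [False] * m
    total = 0.0
    while any(v > 1e-9 for v in need):
        best, best_ratio = None, 0.0
        for i in range(m):
            if used[i]:
                continue
            # requirement covered by row i, per unit of cost
            gain = sum(min(A[i][j], need[j]) for j in range(n))
            ratio = gain / (cost[i] + 1e-9)
            if gain > 1e-9 and ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:              # no binary u can meet the requirement
            return float("inf")
        used[best] = True
        total += cost[best]
        for j in range(n):
            need[j] = max(0.0, need[j] - A[best][j])
    return total
```

Since any feasible binary u gives a value no smaller than the true maximum, the greedy total overestimates z_k, so the resulting coefficient π_0 − z_k is an underestimate: the lifted inequality remains valid, only possibly weaker.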
Hence, some sort of compromise must be made regarding the number of variables to be lifted. Our current procedure performs lifting until a maximum of 100 positive lifting coefficients has been obtained.

4. NUMERICAL TESTS AND RESULTS

The preprocessor, the reduced-cost fixing routine, the heuristic, and the constraint generator are integrated into a branch-and-cut IP solver using CPLEX 2.1 as the underlying LP solver. The code is written in C and, not counting the LP solver and comment statements, has about 16,000 lines. Numerical tests on 14 real instances supplied by Texaco Trading & Transportation, Inc. were run on a SUN Super SPARCstation 10 model 41 with super cache. Table I provides a description of the problem instances. Cols., Rows, and Nonzeros denote, respectively, the number of columns, the number of rows, and the number of nonzeros in the constraint matrix; Z_LP and Z_IP denote the optimal objective values for the initial linear programming relaxation and the integer program; and Gap denotes the difference between Z_LP and Z_IP. A value of zero for the gap does not imply that the LP solver returned an integral feasible solution from the linear programming relaxation. We remark that the company was able to solve to optimality some smaller instances with 2,000 or fewer variables in an acceptable time, but was not able to do so on larger instances.

Table II summarizes the results using our plain branch-and-bound (i.e., with the preprocessor, reduced-cost routine, heuristic, and constraint generator switched off) and using CPLEXMIP 2.1. BB Nodes and BB Time denote the number of nodes in the terminal search tree and the time required to solve to optimality using our plain branch-and-bound, and CPLEX Nodes and CPLEX Time denote the corresponding information when CPLEXMIP is used. Our branch-and-bound code solves all the instances and outperforms CPLEXMIP on six of them. CPLEXMIP solves 12 of the 14 problem instances and has better running times on eight of them.
Our code uses a best-node strategy for node selection and is set to branch on the fractional binary variable with the smallest index. For CPLEXMIP we used the default parameters: best-node search and automatic variable selection (using either pseudo reduced costs or maximum infeasibility). We note that for the zs* problems, solution times are much better with our code (by as much as ten-fold compared to what is reported here) if we instead set it to branch on the most infeasible fractional binary variable. However, this strategy increases the running times for the five zb* problems. For uniform reporting of the computational results, the tables report only numerical results for our chosen default settings, under which all problems can be solved in reasonable time. Table III provides a comprehensive picture of the branch-and-cut system and the effort expended by each of its major components. Prepr. Calls, R-cost Calls, and Heur. Calls denote the number of times the preprocessor, the

Table II. Numerical Results Using Plain Branch-and-bound and CPLEXMIP

Problem     BB Nodes   BB Time     CPLEX Nodes   CPLEX Time
zb1all          65        172.5         17            92.0
zb1all.1        35         88.7         46           260.9
zb2all      62,644    340,690.0    205,700 (a)  426,600.1 (a)
zb2all.1    55,460    333,047.0    149,500 (b)  459,775.8 (b)
zb3all           7          2.7         13             9.3
zm1all         214         76.7        155            49.6
zm2all          36         16.9         28            45.6
zm2new         992        135.5        847           209.2
zm3all      43,990    156,158.0     37,505        25,317.9
zs1new      14,446      2,842.7      1,682           243.9
zs1short        12        171.8          3            97.8
zs2short    16,298    168,292.0     10,594        25,288.6
zs3short     8,578     26,874.8      2,805         2,138.0
zs3area      4,632      1,357.2      1,979           562.3

(a) Not proven optimal (out of memory). Best upper bound 2,639.4; best lower bound 2,631.5863.
(b) Not proven optimal (out of memory). Best upper bound 2,268; best lower bound 2,260.6420.

reduced-cost routine, and the heuristic were called; and Prepr., R-cost, and Heur. denote their respective times. Heur. Success indicates the number of times the heuristic returned an integral solution, and Cuts indicates the number of cuts generated. The time spent managing cut generation is listed under Cuts Time and reflects the cumulative CPU time of all calls to the cut generator, including the time required to create the partial conflict graphs, the time to generate and add cuts to the database of the LP solver, and the time to delete cuts that have been inactive for four consecutive LP solves. Finally, BC Nodes and Total CPU secs denote the number of nodes in the branch-and-cut tree and the total time of the entire branch-and-cut process.

Table IV compares the results in Tables II and III, measured in terms of the ratios of the number of nodes and the total CPU time for the branch-and-cut algorithm relative to those for plain branch-and-bound and CPLEXMIP. Observe that branch-and-cut outperformed branch-and-bound on all but one test problem (zb1all.1).
Moreover, for those problems for which branch-and-bound required more than 10,000 seconds (i.e., zb2all, zb2all.1, zm3all, zs2short, zs3short), branch-and-cut substantially reduced the CPU time required to solve them to optimality. In all these cases, the time is reduced by more than 88 percent. The overall average time reduction for branch-and-cut over branch-and-bound for the 14 instances is 77.6 percent. Compared to CPLEXMIP, we again see that branch-and-cut yielded better times in all but one instance (zs3short). For those problems for which CPLEXMIP required more than 10,000 seconds (i.e., zb2all, zb2all.1, zm3all, zs2short), solution times are reduced by more than 77 percent. The overall average time reduction for branch-and-cut over CPLEXMIP for all 14 instances is 74.3 percent.

Table III. Numerical Results for Branch-and-cut

Problem    Prepr. Calls  Prepr. Time  R-cost Calls  R-cost Time  Heur. Calls  Heur. Success  Heur. Time  Cuts    Cuts Time  BC Nodes  Total CPU secs
zb1all          1           0.2            0           0             1             1             9.9        0 (a)     -          0          11.0
zb1all.1        2           0.1            0           0             1             1           120.2        0 (a)     -          0         120.3
zb2all          2           0.1           12           2.1           1             1            26.0      456     2,017.3        0       3,702.5
zb2all.1        3           0.6            8           0.9           1             1            29.9      405       732.5        0       1,190.4
zb3all          2           0.1            0           0             1             1             1.7        0 (a)     -          0           2.1
zm1all          3           0.1           19           3.5           3             2             1.7      442         3.7       66          14.9
zm2all          3           0.1            1           0.1           1             1             0.2        5         0.0 (b)    0           0.4
zm2new          3           0.2           16           1.9           3             3             7.2      821        22.3       26          45.4
zm3all          1           0.1           11           2.5           5             2             7.1    5,413     3,529.1      198       5,624.3
zs1new          3           0.1            1           0.1           1             1             6.9       59        12.3        0          19.6
zs1short        1           0.0            0           0             1             1            24.2        0 (a)     -          0          24.2
zs2short        3           0.6           27           6.5           3             3             9.7    3,091     2,764.1      149       4,882.7
zs3short        3           0.5           63          20.5           3             3             4.8    4,902     1,503.9      156       3,026.8
zs3area         3           0.1            6           0.8           2             2            10.9      453        37.2       36          54.6

(a) Gap is zero and optimal IP solution obtained after first call to heuristic.
(b) Time is less than 0.05 seconds.

Table V records the performance of the heuristic routine when it is used as a standalone procedure. Z_HEUR denotes the first heuristic value obtained, and Heur. denotes the time to compute this value. Note that the first

Table IV. Branch-and-bound and CPLEXMIP vs. Branch-and-cut

Problem    BC Nodes/BB Nodes  BC Time/BB Time  BC Nodes/CPLEX Nodes  BC Time/CPLEX Time
zb1all          0                 0.064             0                     0.120
zb1all.1        0                 1.356             0                     0.461
zb2all          0                 0.011             0                     0.009
zb2all.1        0                 0.004             0                     0.003
zb3all          0                 0.778             0                     0.226
zm1all          0.308             0.194             0.426                 0.300
zm2all          0                 0.024             0                     0.009
zm2new          0.026             0.335             0.031                 0.217
zm3all          0.005             0.036             0.005                 0.222
zs1new          0                 0.007             0                     0.080
zs1short        0                 0.141             0                     0.247
zs2short        0.009             0.029             0.014                 0.193
zs3short        0.018             0.113             0.056                 1.416
zs3area         0.008             0.040             0.018                 0.097

call to the heuristic returned an optimal value for seven of the fourteen problem instances (four of which had a gap of zero). Moreover, for two hard problem instances (zb2all and zb2all.1), the heuristic returned an optimal solution within 30 seconds. Thus, for these two cases, the bulk of the total solution time recorded in Table III (3,702.5 and 1,190.4 seconds, respectively) was spent closing the gap between the lower and upper bounds in order to establish optimality. To illustrate the importance of cut generation in our branch-and-cut solver, Table VI reports the results when only the preprocessing, reduced-cost, and heuristic routines are switched on, and compares these with the results from Table III. For those problems with total CPU time over 10,000 seconds when running with cuts disabled (i.e., zb2all, zb2all.1, zm3all, and zs2short), the time reduction achieved by switching on the cut generator is more than 72 percent. For the 10 problems for which cuts were generated, the average reduction in running time is 68.4 percent. To further emphasize the key role that cut generation plays, consider zb2all and zb2all.1. In each case, the first call to the heuristic returned an optimal solution in less than 30 seconds. Nevertheless, without cut generation, it took 13,501.7 and 14,231.9 seconds, respectively, to solve them to optimality.
By invoking the cut generator, the times were reduced to 3,702.5 and 1,190.4 seconds (reductions of 72.6 percent and 91.6 percent, respectively). We remark that although CPLEXMIP also found the optimal solutions for these two instances relatively quickly (after solving 14 and 22 branch-and-bound nodes, respectively), it failed to solve them to proven optimality, and strenuous effort (205,700 and 149,500 nodes in the respective search trees) was expended to close the gaps by only 1.2163 and 1.1420, respectively.

We experimented with ways to handle nonactive cuts efficiently. When a cut pool was employed, less than 3 percent of the cuts were reused. We estimated the CPU time for cut generation with and without a cut pool, and the former required over 20 percent more time than the latter. Hence, our branch-and-cut solver defaults to cut generation without a cut pool, and cuts are removed after being inactive for four consecutive LP solves. We also measured the relative frequencies of the three classes of generated cuts on each of the fourteen instances. Cuts derived via the knapsack equalities accounted for 10.0 to 15.3 percent of the overall number of cuts; clique inequalities accounted for 59.5 to 81.2 percent; and odd cycle inequalities accounted for 5.2 to 23.7 percent. Disabling the knapsack cuts increased the computational time by 17.2 percent on average over all instances, and by 45.2 percent on the four most difficult instances (zb2all, zb2all.1, zm3all, and zs2short). The corresponding increases from disabling the clique and odd cycle cut generation routines were 54.3 percent and 20.1 percent, respectively, over all instances. Also, we remark that the conflict graph proved to be crucial in generating clique and odd cycle inequalities: over 99 percent of the violated cliques and odd cycles generated by the code could not have been constructed using only a single constraint at a time.
As with any branch-and-cut system, the performance of the LP solver, which is repeatedly called to solve

Table V. Objective Values from First Call to Heuristic

Problem     Z_LP      Z_IP      Z_HEUR    Z_HEUR - Z_IP   Heur. Time
zb1all      2,319.80  2,319.80  2,319.80       0              9.9
zb1all.1    2,012.28  2,012.28  2,012.28       0            120.2
zb2all      2,630.37  2,639.40  2,639.40       0             26.0
zb2all.1    2,259.50  2,268.00  2,268.00       0             29.9
zb3all      1,456.00  1,456.00  1,456.00       0              1.7
zm1all      1,444.60  1,449.00  1,452.00       3.00           0.7
zm2all      1,284.95  1,285.29  1,285.29       0              0.2
zm2new      1,419.69  1,426.00  1,428.70       2.70           4.6
zm3all      1,305.24  1,319.26  fail          n/a             1.2
zs1new      1,527.73  1,539.60  1,543.20       3.60           6.9
zs1short      889.00    889.00    889.00       0             24.2
zs2short      904.71    919.21    922.21       3.00           3.2
zs3short    1,048.70  1,071.25  1,084.33      13.08           1.0
zs3area     1,918.59  1,924.00  1,976.50      52.50           4.9

Table VI. Numerical Results for Branch-and-cut with Cut Generator Switched Off

Problem    Prepr. Calls  Prepr. Time  R-cost Calls  R-cost Time  Heur. Calls  Heur. Success  Heur. Time  Total Nodes  Total CPU secs  BC Nodes/Nodes  BC Time/Time
zb1all          1           0.2            0            0             1             1             9.9          0            11.0          n/a             1.000
zb1all.1        2           0.1            0            0             1             1           120.2          0           120.3          n/a             1.000
zb2all          3           0.2          112           42.1         141            87         2,009.2     30,001        13,501.7          0.000           0.274
zb2all.1        4           0.7           81           30.1          99            76         1,601.2     27,029        14,231.9          0.000           0.084
zb3all          2           0.1            0            0             1             1             1.7          0             2.1          n/a             1.000
zm1all          4           0.2           20            3.7           4             3             2.7        162            58.9          0.407           0.253
zm2all          3           0.2            1            0.1           1             1             0.2          2             0.5          0.000           0.800
zm2new          3           0.2           18            2.1           5             4             6.3         96            47.3          0.271           0.960
zm3all          3           0.2          242           71.2          59            21           137.1     27,001        78,201.3          0.007           0.072
zs1new          4           0.3          132           41.2           7             3            23.1      8,921         1,309.1          0.000           0.015
zs1short        1           0.0            0            0             1             1            24.2          0            24.2          n/a             1.000
zs2short        3           0.6          187           71.3          91            62           192.2     15,902        49,821.3          0.009           0.098
zs3short        3           0.5           92           29.1          89            70           190.8      7,083         6,012.7          0.022           0.503
zs3area         4           0.2           59           19.2          13            10             5.3      3,012           529.8          0.012           0.103

sequences of linear programs, is crucial to the overall performance of the branch-and-cut system. Therefore, we experimented with various features of CPLEX to determine which could be exploited to speed up the overall process. The system is now set to use primal reduced-cost pricing to solve the initial linear programming relaxation, and steepest-edge primal pricing for successive resolves of the linear programs within the heuristic and after reduced-cost fixing in the branch-and-cut tree. Finally, steepest-edge dual pricing is used for solving the linear programs during the resolve phase in the cutting-plane routine and within the branching phase.

In summary, the numerical tests indicate that the branch-and-cut system significantly reduces the total CPU time needed to solve to optimality some hard instances of the TDS problem. Table III clearly demonstrates the effectiveness of the combination of the system's components.
Moreover, using the reported times in Tables II and VI, one can calculate that the combination of the preprocessor, the reduced-cost routine, and the heuristic alone yields an average time reduction of 61.1 percent over plain branch-and-bound. However, Table VI and the performance of CPLEXMIP on the zb2* instances give strong evidence that cut generation plays the leading role in improving the solution time on the most difficult instances.

ACKNOWLEDGMENT

We would like to thank Donna McDougald Peacock of Texaco, Inc. for taking the time to explain the truck dispatching system and for obtaining test problems for us, and Texaco Trading & Transportation, Inc. for allowing the test problems to be released. Also, we would like to thank Karla Hoffman for sharing her experience in implementing branch-and-cut systems. Finally, we would like to thank two anonymous referees for their helpful comments on an earlier version of this paper. This work was completed while the second author was at Rice University. We acknowledge support through AFOSR Grant F49620-92-J-0053.

REFERENCES

APPLEGATE, D., R. BIXBY, V. CHVÁTAL, AND W. COOK. 1994. Solving Traveling Salesman Problems. Preprint.

BALAS, E. AND E. ZEMEL. 1984. Lifting and Complementing Yields All the Facets of Positive Zero-one Programming Polytopes. In Mathematical Programming, Proceedings of the International Conference on Mathematical Programming, R. W. Cottle et al. (eds.), 13–24.

BIXBY, R. E., E. A. BOYD, S. S. DADMEHR, AND R. R. INDOVINA. 1993. The MIPLIB Mixed Integer Programming Library. COAL Bulletin, 22.

BODIN, L. D. 1990. Twenty Years of Routing and Scheduling. Opns. Res. 38, 571–579.

CHVÁTAL, V. 1975. On Certain Polytopes Associated with Graphs. J. Combinatorial Theory, B13, 138–154.

CORMEN, T. H., C. E. LEISERSON, AND R. L. RIVEST. 1990. Introduction to Algorithms. MIT Press, Cambridge, MA.

CPLEX MANUAL. 1993. Using the CPLEX Callable Library and the CPLEX Mixed Integer Library.
CPLEX Optimization, Inc., Incline Village, Nevada.

CROWDER, H., E. L. JOHNSON, AND M. PADBERG. 1983. Solving Large-scale Zero-one Linear Programming Problems. Opns. Res. 31, 803–834.

FERREIRA, C. E., A. MARTIN, AND R. WEISMANTEL. 1993. Facets for the Multiple Knapsack Problem. Preprint. Konrad-Zuse-Zentrum für Informationstechnik Berlin.

GOLDEN, B. L. AND A. ASSAD, EDITORS. 1988. Vehicle Routing: Methods and Studies (Studies in Management Science and Systems, Volume 16). North Holland, Amsterdam.

GOMORY, R. E. 1969. Some Polyhedra Related to Combinatorial Problems. Linear Algebra and Its Applications, 2, 451–558.

GRÖTSCHEL, M. AND O. HOLLAND. 1991. Solution of Large-scale Symmetric Travelling Salesman Problems. Math. Programming, 51, 141–202.

HAMMER, P. L. AND U. N. PELED. 1982. Computing Low-capacity 0-1 Knapsack Polytopes. Zeitschrift für Operations Research, 26, 243–249.

HOFFMAN, K. L. AND M. PADBERG. 1991. Improving LP-representations of Zero-one Linear Programs for Branch-and-cut. ORSA J. Computing, 3, 121–134.

HOFFMAN, K. L. AND M. PADBERG. 1993. Solving Airline Crew-Scheduling Problems by Branch-and-cut. Mgmt. Sci. 39, 657–682.

JOHNSON, E. L., M. M. KOSTREVA, AND U. H. SUHL. 1985. Solving 0-1 Integer Programming Problems Arising from Large-scale Planning Models. Opns. Res. 33, 803–819.

JOHNSON, E. L. AND G. L. NEMHAUSER. 1991. Recent Developments and Future Directions in Mathematical Programming. Research Report. Computational Optimization Center, Georgia Institute of Technology, Atlanta, Georgia.

LEE, E. K. 1993a. Solving Structured 0/1 Integer Programming Problems Arising from Truck Dispatching Scheduling Problems. Ph.D. Thesis. Rice University, Houston, Texas.

LEE, E. K. 1993b. Facets of Special Knapsack Equality Polytopes. Technical Report TR93-38. Rice University, Houston, Texas.

LEE, E. K. 1994. On Facets of Knapsack Equality Polytopes. Journal of Optimization Theory and Applications, 94(1), 223–239.

NEMHAUSER, G. L. AND L. A. WOLSEY. 1988. Integer and Combinatorial Optimization. Wiley, New York.

PADBERG, M. 1973. On the Facial Structure of Set Packing Polyhedra. Math. Programming, 5, 199–215.

PADBERG, M. AND G. RINALDI. 1989. A Branch-and-cut Approach to a Traveling Salesman Problem with Side Constraints. Mgmt. Sci. 35, 1393–1412.

PADBERG, M. AND G. RINALDI. 1991. A Branch-and-cut Algorithm for the Resolution of Large-scale Symmetric Traveling Salesman Problems. SIAM Review, 33, 60–100.

SAVELSBERGH, M. W. P. 1994. Preprocessing and Probing for Mixed Integer Programming Problems. ORSA J. Computing, 6, 445–454.

SUHL, U. H. AND R. SZYMANSKI. 1994. Supernode Processing of Mixed Integer Models. Computational Optimization and Applications, 3, 317–331.

TROTTER, L. E. 1975. A Class of Facet-producing Graphs for Vertex Packing Polyhedra. Discrete Math. 12, 373–388.

WEISMANTEL, R. 1994. On the 0/1 Knapsack Polytope. Research Report SC 94-1. Konrad-Zuse-Zentrum für Informationstechnik Berlin.

WOLSEY, L. A. 1976a.
Facets and Strong Valid Inequalities for Integer Programs. Opns. Res. 24, 367–372.

WOLSEY, L. A. 1976b. Further Facet Generating Procedures for Vertex Packing Polytopes. Math. Programming, 11, 158–163.

ZEMEL, E. 1978. Lifting the Facets of Zero-one Polytopes. Math. Programming, 15, 268–277.