An Implementation of a Constraint Branching Algorithm for Optimally Solving Airline Crew Pairing Problems


MASTER'S THESIS

An Implementation of a Constraint Branching Algorithm for Optimally Solving Airline Crew Pairing Problems

Douglas Potter

Department of Mathematical Sciences
CHALMERS UNIVERSITY OF TECHNOLOGY
GOTHENBURG UNIVERSITY
Göteborg, Sweden 2008


Thesis for the Degree of Master of Science

An Implementation of a Constraint Branching Algorithm for Optimally Solving Airline Crew Pairing Problems

Douglas Potter

Department of Mathematical Sciences
Chalmers University of Technology and Gothenburg University
SE Göteborg, Sweden

Göteborg, November 2008

Department of Mathematical Sciences
Göteborg 2008

Abstract

Competition in the airline industry depends greatly on how efficiently crews are scheduled. Scheduling problems can be modeled as integer programs which can be solved exactly using branch-and-price methods. However, in practice, in order to find a good schedule expediently, the branch-and-price tree is often only partially explored. A constraint branching heuristic called connection fixing often selects branches containing optimal or near-optimal solutions. This thesis investigates utilizing connection fixing in a branch-and-price algorithm to exactly solve airline crew scheduling problems. We present a mathematical model for optimizing airline crew scheduling that is suitable for the branch-and-price algorithm. Then we present the branch-and-price method for solving integer programs, the connection fixing heuristic, and how this can be integrated into a branch-and-price method. Finally, we evaluate these ideas by implementing a branch-and-price system using connection fixing and use this system to solve exactly several small and medium-sized crew scheduling problems. The numerical results suggest that the branch-and-price method with connection fixing is a promising method for exactly solving large-scale crew scheduling problems.

Acknowledgments

I would like to thank my supervisor and examiner at Chalmers University of Technology, Ann-Brith Strömberg, for her support during the writing of this thesis. I would like to express my gratitude to many of the Jeppesen AB employees, especially my supervisor at Jeppesen AB, Lennart Bengtsson, for their guidance and insights into branch and price. I am also particularly indebted to Henrik Delin at Jeppesen AB, who was very generous with his time, providing much programming advice. Finally, I would like to thank my parents for their support during these years in Sweden.

Contents

1 Introduction
2 Optimization of Airline Crew Scheduling
3 Optimization Formulation of the Crew Pairing Problem
  3.1 The Pairing Problem
  3.2 Set Partitioning Formulation
  3.3 Set Covering Formulation
  3.4 The Augmented Set Covering Formulation
4 Solving Integer Programs with Branch and Price
  4.1 Branch and Bound
  4.2 Column Generation
  4.3 Branch and Price
5 Branch and Price Utilizing Connection Branching
  5.1 Connection Branching: the Branching Rule
  5.2 Connection Branching: Selection and Ordering
  5.3 Connection Fixing Heuristic
  5.4 Implementation of the Branch and Price System Using Connection Branching at Jeppesen AB
6 Results
7 Conclusion

1 Introduction

This thesis is a proof-of-concept project implemented at Jeppesen AB that aims to evaluate the feasibility of utilizing a connection-branching branch-and-price (B&P) system within the crew pairing component of crew scheduling. Crew pairing is the creation of shifts or work periods such that the flights are staffed. This differs from crew rostering, which is the assignment of specific individuals to these shifts [3] [7]. An example of this distinction is given in Chapter 2.

Crew costs are the second largest cost in the airline industry after fuel costs [3] [7]. Consequently, optimal or near-optimal crew scheduling is crucial for airline competitiveness. Determining an optimal crew schedule is extremely complex due to the large amount of human resources being managed, the varying and overlapping competences of the crew, the number of ways the crew can be matched to different tasks at different times and locations, governmental regulations, labor rules, worker preferences, and seniority rules [3]. Billions of scheduling combinations can arise from a few thousand flights [5]. Accordingly, for a specific scheduling problem, there may exist millions of solutions of varying cost for which all tasks will be completed while fulfilling all the labor rules and other constraints.

In order to manage the enormous complexity of crew scheduling and other optimization problems common to the transportation sector, many companies employ optimization products developed by Jeppesen AB. The process of solving large-scale scheduling problems at Jeppesen utilizes many optimization technologies. These technologies include algorithms based on the theoretical and applied mathematics of optimization. The connection fixing heuristic, which will be explained in Section 5.3, is one algorithm used in generating near-optimal solutions very quickly. Connection fixing is based roughly on the idea of searching for schedules that do or do not include a connection (for the crew) between two flights. Currently, connection fixing is a part of the system Jeppesen uses to determine high quality crew pairings. Some of these crew pairing problems are intractable when approached with other methods, such as commercial integer programming solvers, due to their very large size. That such huge problems can be tackled at all is due in part to the efficiency of connection fixing.

While every feasible solution (a schedule that does not violate any of the constraints) cannot be considered by the current process, it may be possible to utilize connection fixing ideas to exhaustively search small and medium sized pairing problems. Performing an exhaustive search is theoretically possible with traditional B&P approaches. However, the hope is that a B&P system branching on connections, i.e. connection branching, will

lead to a time-efficient exhaustive search. Time-efficient methods for finding the best schedule (i.e. a global optimum) are of great importance since, in practice, the time between when schedule planning begins and when the schedule commences is limited.

In order to evaluate using connection branching as a basis for a B&P system, Jeppesen AB has provided the software and hardware requisite for developing the B&P system. We develop the connection branching B&P system mostly from freely available open source software. The primary software development tools are the GNU C and C++ compilers [4], the Mercurial software code management system [23], and the Eclipse C/C++ Development Tools Project [3]. The connection branching B&P system relies on two commercial products: the primal heuristic PAQS by Jeppesen AB, which is based on work by Wedelin [30], and the linear programming solver and the integer programming solver in XPRESS by Fair Isaac Co. [32]. Additionally, we write Python [25] scripts to analyze and visualize the data. These Python scripts use the following libraries or software packages: igraph [6], matplotlib [22], and Graphviz [5].

Chapter 2 describes the airline crew pairing problem in detail and situates it within airline industry scheduling. Furthermore, Chapter 2 discusses the role of optimization techniques and their applicability to airline industry scheduling problems. Suitable mathematical models for the airline crew pairing problem are developed in Chapter 3. Chapter 4 explains the B&P algorithm, which is suitable for solving the models developed in Chapter 3. Chapter 5 explains how B&P is tailored in our implementation for solving the airline crew pairing problem. The results of using this implementation to solve several small and medium test problems are displayed in Chapter 6. We conclude with Chapter 7, summarizing our results and discussing directions for further research and development.

2 Optimization of Airline Crew Scheduling

This chapter aims to situate the crew pairing problem within the wider context of the airline industry as well as expand upon the description of the crew pairing problem presented in the introduction. We begin by describing the usage of optimization or operations research in the airline industry and then focus on the crew pairing problem.

The airline industry is faced with many complex business decisions. As with all industries, making such decisions is a matter of minimizing costs and maximizing profits. These business decisions can be modeled mathematically and expressed as mathematical optimization problems. Then the techniques and methods of mathematical optimization can be applied, aiding the decision making process. The utility of employing optimization methods depends both on how well the mathematical model represents the actual business decisions and their related processes as well as on how well the model can be solved using optimization methods (the tractability of the model). Many factors affect the modeling of business decisions. For example, the complexity of the decisions, the quality of the input data, the interdependencies, and the number of uncertainties all affect the modeling. The extensive use of optimization methods within the airline industry attests to the fact that the mathematical models adequately represent the business decisions in the airline industry [3] [7]. The difficulty is with the tractability of these models. Mathematical properties of these models such as discreteness, nonlinearity, and a huge number of variables characterize optimization problems that are difficult to solve [7]. This means that determining an optimal or near-optimal decision in terms of the model can be extremely taxing in terms of the computational resources, and can potentially exceed the available computing resources.

The crucial decisions in the airline industry are related to scheduling. Generally, scheduling is deciding when and where certain people (e.g. pilots) and machines (e.g. aircraft) should carry out certain activities (e.g. piloting an aircraft, performing maintenance on an aircraft). Following the survey by Barnhart et al. [7], scheduling in the airline industry can be decomposed into the sequence of four scheduling problems described in Table 1. The four scheduling problems are not really separate problems, since the input data for each scheduling problem is determined by the solution to the preceding one. For example, schedule design determines which flights will be flown and thus which flights crew scheduling needs to staff.

Table 1: Airline Scheduling Components

Scheduling Problem           | Objective
Schedule Design              | Scheduling flights to satisfy passenger demand for specific routes
Fleet Assignment             | Scheduling the appropriate aircraft types to these flights
Aircraft Maintenance Routing | Scheduling aircraft so they are at maintenance sites with sufficient time and frequency
Crew Scheduling              | Scheduling crew members to ensure that the flights have sufficient staff

Accordingly, the problems should properly be treated as one integrated problem in order to truly schedule optimally. Otherwise, the scheduling can go awry due to the fact that each scheduling problem constrains the following scheduling problem. In an extreme case, aircraft flights could be scheduled in a way that satisfies much of the passenger demand, but forces many crew members to fly many costly deadheads (traveling to another airport as a passenger, instead of being on active duty). A fully integrated model, integrating the four scheduling problems into a single model, would take account of such interdependencies. Unfortunately, integrated problems are extremely complex and can result in billions of decision variables (the variables in a mathematical optimization model that are to be optimally determined and correspond to real-world decisions; these variables differ from help variables that are used primarily for modeling purposes). Thus a common framework for airline scheduling is to work separately on the four scheduling problems while allowing for some feedback between them [3]. Feedback between the scheduling problems means that the impact of the schedules on each other is communicated between the departments that plan each of the schedules. However, this feedback is not such that it fully integrates the scheduling problems. This thesis focuses on the separate, or sequential, framework.

In this sequential optimization process, crew scheduling comes last and thus crew scheduling is influenced by, but does not impact, schedule design, fleet assignment, and aircraft maintenance routing. Consequently, crew scheduling can focus purely on its own objectives: staffing cabin crews (e.g. flight attendants) and cockpit crews (e.g. pilots) such that all flights are properly staffed and all labor rules are satisfied at the lowest possible cost. An additional important factor not examined in this thesis is determining schedules that are robust. A robust schedule is a schedule that is easier and less expensive to adjust after a perturbation or a change, such as delays at an airport. Consequently, increasing the robustness of a schedule decreases the actual

cost of a schedule. Since optimization removes slack and redundancies in the schedule, optimized schedules which do not consider robustness can be more sensitive to minor perturbations [7].

Finally, the crew scheduling problem is further divided into two subproblems: crew pairing and crew rostering. As mentioned in the introduction, crew pairing creates work periods that staff flights (pairing work periods with flights) and crew rostering assigns real individuals to the work periods (manning the crews or assigning people to the roster). For example, a crew pairing could dictate that an anonymous pilot A departs Minneapolis at 9:00 for New York City and departs New York City at 4:00 for Minneapolis, and that an anonymous pilot B departs New York City at 8:00 for Minneapolis, departs Minneapolis at 1:30 for Los Angeles, and finally departs Los Angeles for New York City at 6:30. Crew rostering could then determine that the anonymous pilot A will be Jane Doe and that the anonymous pilot B will be Bob Johnson. This subdivision means that the chosen crew pairing conditions the rostering possibilities.

Two reasons for breaking crew scheduling into these two subproblems are complexity reduction and delaying rostering. Considering the subproblems sequentially, solving crew pairing and then crew rostering, reduces the mathematical complexity and thus reduces the computational burden. However, as with dividing the airline scheduling problem into four parts, dividing crew scheduling into pairing and rostering means that not all feasible crew schedules will be considered. Since the integrated crew scheduling problem is more complex and computationally intensive than the rostering problem, splitting the crew scheduling makes it computationally easier. Concretely, this means that solving the crew rostering problem requires less time and planning resources than the integrated crew scheduling problem.

Due to the complex pay structures of crews, determining a good crew pairing is not just a matter of scheduling such that all the flights are flown. Each task performed or flight piloted does not have a fixed cost; instead the cost depends on a number of factors such as the time worked during a duty period (a work day), the rest time allotted, the time away from the home base for the crew, and the duty performed. Furthermore, the salary is often calculated according to several different formulas, of which the maximum is used to calculate the actual compensation for a crew member. This entails that the costs per crew per task can be a complicated nonlinear function [6] [7]. Given the complexity of large crew pairings, finding a good pairing manually (that is, without using optimization software) is nearly impossible. Consequently, techniques of mathematical optimization are employed. In order to use such techniques, the pairing problem needs to be formulated in terms of the mathematics of optimization.

3 Optimization Formulation of the Crew Pairing Problem

Crew pairing is often formulated as a set partitioning problem or a set covering problem [3] [5] [6] [27]. In turn, the set partitioning problem and the set covering problem can be formulated as integer linear programs (ILP) which can be solved with optimization techniques such as B&B (branch and bound) [3] or B&P (branch and price) [5]. Section 3.1 precisely formulates the pairing problem so that it can be modeled mathematically in the subsequent sections. In Section 3.2, the set partitioning problem is formulated and the corresponding ILP formulation is explained. Furthermore, we elucidate how the crew pairing problem fits the set partitioning model. In Section 3.3, the set covering formulation will be explained along with some reasons as to why it is often the preferred formulation in practice. Finally, in Section 3.4, a few slight amendments will be made to the set covering model so that it better models the pairing problem.

3.1 The Pairing Problem

In this section, the ground for mathematically modeling the crew pairing problem is cleared by precisely defining the crew pairing problem. This section concludes with Example 3.1, which describes a small pairing instance. First, there are a number of flights that must be staffed. Each flight starts at a specific time at a specific airport and ends at a specific time at a specific airport. A pairing is a schedule for a crew (or crew member). A pairing consists of a sequence of flights which satisfies the labor rules, other regulations, and airline specific requirements. Furthermore, the sequence of flights in a pairing must start and end at the same airport, referred to as the home base. This means that the crew is returned to its home base at the end of the pairing. Every pairing $i$ has a cost $c_i$. Determining the cost $c_i$ for a particular pairing can be complicated and is outside of the scope of this thesis; some of the relevant factors were discussed at the end of Chapter 2.
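The structural requirements on a pairing described above can be stated very compactly in code. The following is a minimal sketch (not part of the thesis implementation) that only checks that the flights form a chain and that the pairing starts and ends at the home base; departure and arrival times, labor rules, and other regulations are deliberately ignored.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flight:
    number: int
    origin: str       # departure airport code, e.g. "LHR"
    destination: str  # arrival airport code

def is_pairing(flights: list[Flight], home_base: str) -> bool:
    """Check only the structural pairing rules of Section 3.1: the flights form a
    chain, and the sequence starts and ends at the home base."""
    if not flights:
        return False
    if flights[0].origin != home_base or flights[-1].destination != home_base:
        return False
    # Each flight must depart from the airport where the previous flight arrived.
    return all(a.destination == b.origin for a, b in zip(flights, flights[1:]))

# Flights 1, 7, and 5 from Example 3.1 below: LHR -> GOT -> CPH -> LHR.
pairing_17 = [Flight(1, "LHR", "GOT"), Flight(7, "GOT", "CPH"), Flight(5, "CPH", "LHR")]
print(is_pairing(pairing_17, home_base="LHR"))  # True
```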

Example 3.1: A Pairing Problem

This example shows the input data for a crew pairing problem. The flying times, airports, and the flight network have been chosen for notational simplicity rather than attempting to model an actual crew pairing problem. Table 2 lists the flights and Figure 1 depicts the same flights. The pairings listed in Tables 3 and 4 are all the feasible pairings that can be formed with the flights. In order to understand the meaning of the pairings, consider pairing 17 listed in Table 4: pairing 17 represents a crew (or a crew member) flying from London (LHR) via flight 1 to Gothenburg (GOT), then from Gothenburg (GOT) via flight 7 to Copenhagen (CPH), and finally from Copenhagen (CPH) via flight 5 back to London (LHR). Note also that it may be difficult to gauge the cost of an individual pairing. For example, pairing 17 may seem at first overly expensive, but after considering that the working days for pairings 4, 7, and 8 are just as long as that of pairing 17, along with some other unknown cost factor, the cost of pairing 17 no longer seems unreasonable.

Table 2: Flights

Flight | From | To
1      | LHR  | GOT
2      | GOT  | LHR
3      | GOT  | CPH
4      | CPH  | GOT
5      | CPH  | LHR
6      | LHR  | CPH
7      | GOT  | CPH
8      | CPH  | GOT

Figure 1: Flight Network (a diagram of flights 1-8 between London (LHR), Gothenburg (GOT), and Copenhagen (CPH))

Table 3: Pairings 1-8 (flight order and cost of each pairing)

Table 4: Pairings 9-17 (flight order and cost of each pairing)

3.2 Set Partitioning Formulation

A set partitioning problem has two components: the ground set $\tilde{A}$ consisting of elements $\tilde{e}_j$, and a family $F$ of subsets $\tilde{a}_i \subseteq \tilde{A}$. Each $\tilde{a}_i$ is associated with a cost $c_i \in \mathbb{R}_+$. The objective of a set partitioning problem is to select a set $B$ of these subsets $\tilde{a}_i \in F$ such that $B$ defines a partition of $\tilde{A}$ and such that the sum of the costs $c_i$ of the chosen subsets in $B$ is minimized. The set $B$ of the chosen subsets is a partition of $\tilde{A}$ if and only if $\bigcup_{\tilde{a}_i \in B} \tilde{a}_i = \tilde{A}$ and $\tilde{a}_i \cap \tilde{a}_j = \emptyset$ for all $\tilde{a}_i, \tilde{a}_j \in B$ such that $\tilde{a}_i \neq \tilde{a}_j$. In general, there is no guarantee that a partition of $\tilde{A}$ exists; we will not consider this issue, since it cannot occur (in practice) with the final model used in this thesis.

We can formulate a finite set partitioning problem compactly as an ILP in the following way: let the ground set $\tilde{A}$ consist of $m$ elements $\tilde{e}_j$ and let the family $F$ consist of $n$ subsets; index the elements $\tilde{e}_j$ of the ground set $\tilde{A}$ with $j$ such that $j = 1, \ldots, m$; index the subsets $\tilde{a}_i \in F$ with $i$ for $i = 1, \ldots, n$; and let $c_i$, where $i = 1, \ldots, n$, represent the costs corresponding to each subset $\tilde{a}_i$. The decision vector $x$ is defined to consist of $x_i \in \{0, 1\}$ for $i = 1, \ldots, n$ and indicates which subsets are chosen. If $x_i = 1$ then the subset $i$ is selected, i.e. $\tilde{a}_i \in B$, and if $x_i = 0$ then the subset $i$ is not selected, i.e. $\tilde{a}_i \notin B$. The vector $c \in \mathbb{R}^n_+$ is defined to consist of the given costs $c_i \in \mathbb{R}_+$ for $i = 1, \ldots, n$. We define an $m \times n$ matrix $A$ that corresponds to the ground set $\tilde{A}$ and the partitioning subsets by

$$A_{ji} = \begin{cases} 1, & \text{if } \tilde{e}_j \in \tilde{a}_i, \\ 0, & \text{otherwise,} \end{cases} \qquad j = 1, \ldots, m, \; i = 1, \ldots, n. \qquad (1)$$

This gives the integer linear program

$$\min_{x \in \{0,1\}^n} \; c^T x, \quad \text{s.t. } Ax = \mathbf{1}_m. \qquad (2)$$

Note that s.t. means subject to. Defining $a_i \in \mathbb{R}^m$ for $i = 1, \ldots, n$ to be the column vectors of the matrix $A$, we can reformulate (2) in terms of sums,

$$\min_{x \in \{0,1\}^n} \; \sum_{i=1}^n c_i x_i, \quad \text{s.t. } \sum_{i=1}^n a_i x_i = \mathbf{1}_m. \qquad (3)$$

It should be noted that although, taken together, the integrality constraints $x \in \{0,1\}^n$ and the partitioning constraints $\sum_{i=1}^n a_i x_i = \mathbf{1}_m$ of the problem (3) can be considered difficult, the objective function $f(x) = \sum_{i=1}^n c_i x_i$ is a linear function. Optimization models with linear objective functions are generally considered to be easier to handle than ones with nonlinear objective functions [4] [8].

The pairing problem as specified in Section 3.1 can be characterized as a set partitioning problem and thus modeled by the ILP (3). We consider all the flights that need to be staffed as the ground set in a set partitioning problem and note then that the pairings correspond to the subsets $\tilde{a}_i \in F$ and the columns $a_i$ of the matrix $A$. In terms of the ILP (2), we obtain that the $m \times n$ matrix $A$, where $m$ is the number of flights and $n$ is the number of pairings, indicates the flights that each pairing contains. The $n$-dimensional vector $x$ consists of the decision variables, indicating which pairings are selected. This gives us the following definitions of $x$ and $A$:

$$x_i = \begin{cases} 1, & \text{if pairing } i \text{ is used in the schedule,} \\ 0, & \text{otherwise,} \end{cases} \qquad (4)$$

$$A_{ji} = \begin{cases} 1, & \text{if flight } j \text{ is in pairing } i, \\ 0, & \text{otherwise.} \end{cases} \qquad (5)$$

Minimizing the ILP (3) with $x$ and $A$ defined as in (4) and (5) will then give the minimum cost schedule. This pairing formulation is often used since it avoids bringing the nonlinear crew costs into the objective function [3] [5] [27]. If the decision variables were, for example, the specific assignments of crews to tasks instead of choices of pairings, the objective function would be complicated, nonlinear, and difficult to optimize; see the discussion at the end of Chapter 2. It should be noted that not all the information about the pairings is preserved in the matrix $A$. For instance, the temporal order of the flights is not contained in $A$. The process of modeling a crew pairing problem as an ILP is shown in Example 3.2. This example will be further developed in the following chapters and sections in order to clarify various modeling decisions and algorithms.
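To make the definitions (4) and (5) concrete, the following sketch builds the set partitioning ILP (3) for a small, made-up pairing instance and solves it with the open-source PuLP modeling library. PuLP and the toy data are illustrative assumptions only; the thesis implementation uses XPRESS, not PuLP.

```python
import pulp

# Hypothetical input: pairing i covers the listed flights at the given cost c_i.
flights = [1, 2, 3, 4]
pairings = {              # pairing id -> (covered flights, cost)
    1: ([1, 2], 5.0),
    2: ([3, 4], 6.0),
    3: ([1, 3], 4.0),
    4: ([2, 4], 4.5),
}

model = pulp.LpProblem("set_partitioning", pulp.LpMinimize)
x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in pairings}   # definition (4)

# Objective of (3): minimize the total cost of the chosen pairings.
model += pulp.lpSum(cost * x[i] for i, (_, cost) in pairings.items())

# Partitioning constraints: every flight is covered by exactly one chosen pairing.
for j in flights:
    model += pulp.lpSum(x[i] for i, (covered, _) in pairings.items() if j in covered) == 1

model.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i in pairings if x[i].value() == 1])  # [3, 4]: the cheapest partition
```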

Example 3.2: Set Partitioning Formulation

Using the definitions (4) and (5), the ILP for the crew pairing problem from Example 3.1 is constructed:

$$\min_{x \in \{0,1\}^{17}} \; c^T x, \quad \text{s.t. } Ax = \mathbf{1}_8, \qquad (6)$$

where the $8 \times 17$ matrix $A$ indicates which of the eight flights in Table 2 each of the 17 pairings in Tables 3 and 4 contains, and $c$ contains the corresponding pairing costs. An optimal solution to this pairing problem is choosing the pairings 3, 2, and 6, giving a cost of 13. If the LP relaxation of (6), replacing the integrality constraint $x \in \{0,1\}^{17}$ with $x \in [0,1]^{17}$, is considered instead, an optimal solution has a cost of 11.5 and corresponds to choosing half of each of four different pairings. While in reality a half pairing cannot be chosen, the LP relaxation solution gives a lower bound on the ILP and, as will be shown, plays an important role in solving the ILPs from crew pairing problems.

Although the set partitioning formulation and its ILP formulation (3) represent the crew pairing problem, these formulations have a computational deficiency. Often during the process of determining an optimal solution to an ILP, the integrality requirements on the decision variables are relaxed; this modification is known as LP relaxation and it transforms an ILP into an LP (linear program) known as the LP relaxation. In our case, this would mean that the vector $x$ is allowed to take values in $[0,1]^n$ instead of $\{0,1\}^n$,

$$\min_{x \in [0,1]^n} \; \sum_{i=1}^n c_i x_i, \quad \text{s.t. } \sum_{i=1}^n a_i x_i = \mathbf{1}_m. \qquad (7)$$

Problematically, the equality constraints make the problem (7) numerically unstable and difficult to solve, especially when compared to the set covering formulation [5], which is discussed in the next section and alleviates these difficulties without requiring extra variables.
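In a modeling library, the LP relaxation (7) amounts to replacing the binary variables with continuous variables bounded between 0 and 1. A minimal sketch, in the same hypothetical PuLP setting as the sketch in Section 3.2:

```python
import pulp

def relaxed_variables(pairings):
    """LP relaxation of (6)/(7): each binary x_i is replaced by 0 <= x_i <= 1."""
    return {i: pulp.LpVariable(f"x_{i}", lowBound=0, upBound=1, cat="Continuous")
            for i in pairings}
```

Building the model with these variables instead of binary ones gives the lower bound discussed above.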

3.3 Set Covering Formulation

The set covering formulation is almost identical to the set partitioning formulation. The difference is that the requirement that every element in the ground set be in one and only one of the selected subsets is replaced by the requirement that every element in the ground set is in at least one of these subsets. This difference is shown in Example 3.3, which formulates the crew pairing problem from Example 3.2 as a set covering problem.

We formulate the set covering problem in such a way that the equivalent IP formulation is clear. The finite set covering problem is: given a finite ground set $\bar{A}$ containing elements $\bar{e}_j$, where $j = 1, \ldots, m$, and a family of subsets $\bar{a}_i \in F$ with costs $c_i$, where $i = 1, \ldots, n$, such that $\bar{a}_i \subseteq \bar{A}$, find a minimum cost set $B$ of the subsets such that the union of the subsets contains every element in the ground set $\bar{A}$. As with the set partitioning problem, the decision vector $x$ is defined to consist of $x_i \in \{0,1\}$ for $i = 1, \ldots, n$. The vector $x$ indicates which subsets are selected. If $x_i = 1$ then the subset $i$ is selected and if $x_i = 0$ then the subset $i$ is not selected. Furthermore, we can define an $m \times n$ matrix $A$ to represent $\bar{A}$ by

$$A_{ji} = \begin{cases} 1, & \text{if } \bar{e}_j \in \bar{a}_i, \\ 0, & \text{otherwise,} \end{cases} \qquad j = 1, \ldots, m, \; i = 1, \ldots, n. \qquad (8)$$

Letting $a_i$ for $i = 1, \ldots, n$ be the column vectors of $A$, we can formulate the set covering problem as an ILP,

$$\min_{x \in \{0,1\}^n} \; \sum_{i=1}^n c_i x_i, \quad \text{s.t. } \sum_{i=1}^n a_i x_i \geq \mathbf{1}_m. \qquad (9)$$

Since this formulation has inequality constraints instead of equality constraints, it is more stable numerically [5]. It is always trivial to find a feasible solution to the problem (9) and the corresponding LP relaxed problem. A trivial solution can be constructed by selecting pairings that contain uncovered flights (flights which are not in any of the pairings chosen so far) until all flights are covered. A flight that is overcovered, that is, a flight that is in two or more of the selected pairings, indicates where a deadhead is in the schedule.
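In terms of the hypothetical PuLP sketch from Section 3.2, the only change needed to obtain the covering model (9) is turning the flight constraints from equalities into inequalities. The data and the library are again illustrative assumptions, not the thesis implementation.

```python
import pulp

def covering_model(flights, pairings):
    """Set covering ILP (9): same pairing data as the partitioning sketch,
    but each flight only needs to be covered at least once."""
    model = pulp.LpProblem("set_covering", pulp.LpMinimize)
    x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in pairings}
    model += pulp.lpSum(cost * x[i] for i, (_, cost) in pairings.items())
    for j in flights:
        # Overcovered flights correspond to crews flying as deadheads.
        model += pulp.lpSum(x[i] for i, (covered, _) in pairings.items() if j in covered) >= 1
    return model, x
```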

Note that a solution to the set partitioning problem is also a solution to the corresponding set covering problem, and that a solution to the set covering problem without overcovers is also a solution to the corresponding set partitioning problem. It may even be possible to transform a solution to the set covering problem with overcovers into a partitioning solution. If the requirement that a covering subset or pairing must constitute a sequence of flights (such that each flight transports pilots to the next flight, starting and ending at the home base) could be discarded, it would be trivial to remove overcovers to create a partitioning solution, but then we would lose the practical applicability to the crew pairing problem.

Example 3.3: Set Covering Formulation

The crew pairing problem described in Example 3.2 can also be modeled as a set covering problem, as this section explains. Stating the corresponding ILP in the same form as (9), with the same matrix $A$ and cost vector $c$ as in Example 3.2, yields

$$\min_{x \in \{0,1\}^{17}} \; c^T x, \quad \text{s.t. } Ax \geq \mathbf{1}_8. \qquad (10)$$

An optimal solution to this pairing problem is choosing the pairings 2, 4, and 6, giving a cost of 13. This solution has overcovers and would be infeasible in the model (6) from Example 3.2, since the left hand sides of the flight constraints 7 and 8 equal 2; that is, for this solution,

$$\left(\sum_{i=1}^{17} a_i x_i\right)_j = \begin{cases} 2, & j \in \{7, 8\}, \\ 1, & \text{otherwise.} \end{cases} \qquad (11)$$

Consequently, some crew will fly deadhead on flights 7 and 8. Note that although the partitioning solution found in Example 3.2 is also optimal in (10), the set covering formulation does not distinguish between overcovering solutions and partitioning solutions. In Section 3.4 we augment the set covering model so that overcovers can be given a cost. The LP relaxation of the problem (10) has an optimal solution with a cost of 11.5. This solution does not contain any overcovers and is the same solution as that found for the set partitioning formulation in Example 3.2. That these LP relaxed solutions are identical is not unexpected, since the possibility of dividing pairings in the LP relaxation allows (unrealistically flexible) combinations of pairings so that the constraints are satisfied with equality.

3.4 The Augmented Set Covering Formulation

The set covering formulation (9) represents the pairing problem well, and thus when minimized gives a good schedule, as long as two modeling assumptions are met: overcovers in the solution are actually feasible and not prohibitively expensive, and there are no important interdependencies between the pairings. The nature and source of these interdependencies are explained in this section.

Since overcovers result in costly deadheads (which occupy a passenger seat), we want to avoid them if possible. This is accomplished by penalizing overcovers with a non-negative penalty vector $f \in \mathbb{R}^m_+$, where $m$ is the number of flights. This gives the overcover penalized covering problem,

$$\min_{x \in \{0,1\}^n} \; \sum_{i=1}^n c_i x_i + f^T\left(\sum_{i=1}^n a_i x_i - \mathbf{1}_m\right), \quad \text{s.t. } \sum_{i=1}^n a_i x_i \geq \mathbf{1}_m. \qquad (12)$$

The elements of the vector $\sum_{i=1}^n a_i x_i - \mathbf{1}_m$ indicate how much the corresponding flights (or elements in the ground set) are overcovered. If every flight is covered once, i.e. the set covering is also a set partition, then $\sum_{i=1}^n a_i x_i - \mathbf{1}_m = \mathbf{0}_m$, and thus the overcover cost penalty is not incurred. Example 3.4.1 illustrates penalizing overcovers by augmenting Example 3.3 with overcover penalties.
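Continuing the hypothetical PuLP sketches from the previous sections, the overcover penalty of (12) can be added by pricing each flight's coverage above one into the objective. The penalty data and library are illustrative assumptions only.

```python
import pulp

def overcover_penalized_model(flights, pairings, overcover_penalty):
    """Overcover-penalized covering model (12); overcover_penalty maps flight j -> f_j."""
    model = pulp.LpProblem("penalized_covering", pulp.LpMinimize)
    x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in pairings}
    # cover[j] is the left-hand side of flight j's covering constraint.
    cover = {j: pulp.lpSum(x[i] for i, (covered, _) in pairings.items() if j in covered)
             for j in flights}
    pairing_cost = pulp.lpSum(cost * x[i] for i, (_, cost) in pairings.items())
    overcover_cost = pulp.lpSum(overcover_penalty[j] * (cover[j] - 1) for j in flights)
    model += pairing_cost + overcover_cost              # objective of (12)
    for j in flights:
        model += cover[j] >= 1                          # covering constraints
    return model, x
```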

Example 3.4.1: Penalizing Overcovers

The crew pairing problem described in Example 3.2 can be modeled in several ways. The model in Example 3.3 gives a solution with overcovers. If overcovers incur an additional cost or are undesirable, they can be penalized using an overcover penalized set covering model as shown in (12). In this example, every flight constraint is given an overcover penalty cost of 10. Thus, in terms of (12), $f^T = [10, 10, \ldots, 10] \in \mathbb{R}^8$. Recalling that $a_i$ is a column of the constraint matrix, the penalty term $f^T\left(\sum_{i=1}^{17} a_i x_i - \mathbf{1}_8\right) = f^T A x - 80$ is calculated. Adding this term to the objective function in Example 3.3, the ILP is stated in the same form as (12):

$$\min_{x \in \{0,1\}^{17}} \; c^T x + f^T A x - 80, \quad \text{s.t. } Ax \geq \mathbf{1}_8. \qquad (13)$$

An optimal solution to this pairing problem is choosing pairings 3, 2, and 6, which results in no overcovers and has a cost of 13. This is the same solution as that found for the set partitioning model in Example 3.2. The overcovering solution with pairings 2, 4, and 6 found in Example 3.3 would now cost 33, which explains why this solution, while still feasible, is not chosen. The LP relaxation of this problem (13) has an optimal solution with a cost of 11.5. This solution is identical to the solutions found for the LP relaxations in Examples 3.2 and 3.3.

Furthermore, there can exist interdependencies that further restrict the combinations of the pairings that may be chosen. Pairings are interdependent due to the resources that some pairings use and the nature of these resources. For example, there often exist minima and maxima on the number of pairings with certain home bases that can be selected, since the total staff is limited and often guaranteed a minimum amount of work at a given home base. The restrictions on how the pairings can and cannot be combined in a schedule are known as the GLCs (GLobal Constraints). Enforcing the GLCs is accomplished by adding soft constraints. A definition and discussion of soft constraints is found in [4, Section 1.8]. The following soft constraints are added: $Bx - y \leq d$, where $B$ is an integer $q \times n$ matrix, $q$ is the number of GLCs, and $n$ is the number of pairings in the original overcover-penalized covering problem; the matrix $B$ is defined as follows: $B_{ki}$ = quantity of resource $k$ used by pairing $i$; the vector $d \in \mathbb{R}^q_+$ defines the soft maximum usage of resource $k$ for $k = 1, \ldots, q$, that is, the

maximum usage allowed without incurring any additional cost; the variable vector $y \in \mathbb{R}^q_+$ brings the amount exceeding the maximum usage into the objective function. Accordingly, the penalty term $p^T y = \sum_{k=1}^q p_k y_k$, where $p_k > 0$ is the penalty cost per unit of additional usage of resource $k$, is added to the objective function. For notational simplicity, we let $b_i \in \mathbb{R}^q$ for $i = 1, \ldots, n$ be the column vectors of $B$. Combining GLCs with the overcover-penalized covering problem, we obtain the following MILP (mixed integer linear program),

$$\min_{x \in \{0,1\}^n, \; y \in \mathbb{R}^q_+} \; \sum_{i=1}^n c_i x_i + f^T\left(\sum_{i=1}^n a_i x_i - \mathbf{1}_m\right) + \sum_{k=1}^q p_k y_k,$$
$$\text{s.t. } \; \sum_{i=1}^n a_i x_i \geq \mathbf{1}_m, \qquad \sum_{i=1}^n b_i x_i - y \leq d. \qquad (14)$$

Note: It may not always seem natural to model certain restrictions on the ways pairings can be combined with constraints of the form shown in (14) (e.g. a minimum number of pairings with a certain home base). However, a single GLC $k$ that would naturally be modeled by $\sum_{i=1}^n B_{ki} x_i + y_k \geq d_k$ (e.g. a minimum number of pairings with a certain home base) can be modeled as explained above by multiplying each constant in such a constraint by $-1$, giving $\sum_{i=1}^n (-B_{ki}) x_i - y_k \leq -d_k$. The role of GLCs is demonstrated in Example 3.4.2.
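In the hypothetical PuLP setting used earlier, one soft GLC of (14) can be sketched as follows: an extra continuous variable $y_k$ absorbs the usage above the soft maximum and is charged in the objective. The function name and data are assumptions for illustration only.

```python
import pulp

def add_soft_glc(model, x, usage, soft_max, penalty, name="glc"):
    """Add one GLC of (14) as a soft constraint: usage maps pairing id -> B_{k,i},
    soft_max is d_k, and penalty is p_k. Returns the excess variable y_k."""
    y = pulp.LpVariable(f"y_{name}", lowBound=0)
    model += pulp.lpSum(usage[i] * x[i] for i in usage) - y <= soft_max
    # Charge the usage that exceeds the soft maximum in the objective.
    model.setObjective(model.objective + penalty * y)
    return y
```

For instance, the crew-supply restriction of Example 3.4.2 below would correspond to a usage of 1 for every pairing, soft_max = 2, and penalty = 20.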

Example 3.4.2: Adding GLCs

The crew pairing problem described in Example 3.2 is modeled in this example as a penalized set covering problem (see Example 3.4.1) augmented with GLCs. The restriction that no more than two pairings can be chosen without incurring extra cost is imposed. This restriction is modeled by defining that every pairing uses 1 unit of crew and imposing a soft maximum crew supply of 2 and a penalty cost of 20 for every crew unit that exceeds the soft maximum. In order to adapt the model (13) from Example 3.4.1, we need to determine the penalty term $\sum_{k=1}^q p_k y_k$ and the soft constraint $\sum_{i=1}^n b_i x_i - y \leq d$ for the model (14), which are, respectively, $20y$ and $\mathbf{1}_{17}^T x - y \leq 2$. Adding these terms to the model from Example 3.4.1 yields

$$\min_{x \in \{0,1\}^{17}, \; y \geq 0} \; c^T x + f^T A x - 80 + 20y, \quad \text{s.t. } Ax \geq \mathbf{1}_8, \quad \mathbf{1}_{17}^T x - y \leq 2. \qquad (15)$$

In this model, the solution found in Examples 3.2 and 3.4.1 increases in cost from 13 to 33, since three pairings are chosen and the GLC penalty of 20 is therefore incurred. An optimal solution to the model (15) is choosing the pairings 2 and 7, which gives a cost of 14 with $y = 0$. The LP relaxation of this problem (15) gives a solution with a cost of 11.5 and $y = 0$.

Formulation (14) is a MILP since it contains variables both with and without the integrality requirements. In this case, $x$ has the integrality requirement and $y$ lacks the integrality requirement. Also note that the structure of the GLCs implies that the choice of $x$ completely determines $y$, but not vice versa. The vector $x$ contains the decision variables and the vector $y$ contains help variables (for a discussion of help variables, or non-decision variables, see [4, Remark 1.4]). The fact that all the decision variables are integral implies that we can treat the MILP (14) as an Integer Program (IP),

$$\min_{x \in \{0,1\}^n} \; g(x), \quad \text{s.t. } \sum_{i=1}^n a_i x_i \geq \mathbf{1}_m, \qquad (16)$$

where

$$g(x) := \min_{y \in \mathbb{R}^q_+} \; \sum_{i=1}^n c_i x_i + f^T\left(\sum_{i=1}^n a_i x_i - \mathbf{1}_m\right) + \sum_{k=1}^q p_k y_k, \quad \text{s.t. } \sum_{i=1}^n b_i x_i - y \leq d. \qquad (17)$$

This formulation, while not linear, is quite simple. Note also that the function $g(x)$ is a convex function. The choice of using the formulation (16)-(17) or the formulation (14) varies depending on the solution method to be applied. More importantly, the existence of these two formulations makes it logically consistent to discuss the crew pairing model with overcover penalties and GLCs as the MILP (14) or as the IP (16)-(17). This equivalence is important for Section 4.1, which explains the B&B algorithm, for the crew pairing problem, in terms of an IP.

The problem (14) captures the essential characteristics of the crew pairing problem; that is, it models the essential characteristics of the problem and, given accurate input data, should yield staffed aircraft at the lowest possible cost. However, two crucial issues remain: how can a large integer program be solved, and which pairings should be considered? This latter question is particularly important since handling all possible pairings explicitly is intractable for many real-world problem instances, and considering fewer but good pairings makes solving (14) easier. The next chapter explains the B&P algorithm that can resolve these issues.

4 Solving Integer Programs with Branch and Price

Finding optimal or near-optimal solutions to integer linear programs (ILP) has been the subject of much research, since many important decision problems from industry can be formulated as ILPs [29] [3]. While the ability to succinctly model many problems is valuable, it does not itself lead to high quality solutions. The computational resources required to solve one particular ILP depend greatly on the choice of the algorithm. Some algorithms, classified as approximation algorithms, run in polynomial time but can only guarantee near-optimal solutions; other algorithms, classified as exact algorithms, guarantee optimal solutions but often do not run in polynomial time [9]. Accordingly, the computational complexities of algorithms differ. While the theoretical complexity of algorithms is an important topic, it is beyond the scope of this thesis. The problem at hand is determining a practical algorithm for finding an optimal solution to the pairing problem as formulated in Section 3.4. This being said, it should be noted that the set covering problem is NP-complete, which should temper expectations of quick solution methods [9].

The exact algorithm for solving ILPs that we will focus on is the B&P algorithm. The appearance of this algorithm in the literature dates back to the 1980s, when it was first used to solve routing and scheduling problems [2] [3]. The column generation roots of B&P are found in the early work of Ford and Fulkerson on multi-commodity flows, see [2]. The B&P algorithm combines the ideas from B&B (branch and bound) and column generation. Combining these ideas is not straightforward, and a time-efficient implementation is problem specific. We follow a common tack of first explaining the B&B and column generation algorithms and then explaining B&P in terms of these two algorithms. The following exposition assumes a basic understanding of general optimization theory and linear programming. The requisite theory is covered in depth in [4], [10], [8] and [28].

4.1 Branch and Bound

Through algebraic manipulation (as shown in Section 3.4), the final set covering MILP (14) can be equivalently formulated as the following general IP with $n$ variables and $m$ constraints,

$$\min_{x \in \mathbb{Z}^n} \; g(x), \quad \text{s.t. } \sum_{i=1}^n a_i x_i \geq h, \qquad (18)$$

where $a_i, h \in \mathbb{R}^m$ are given. Note that the requirement on the variables, $x \in \{0,1\}^n$, present in (16), can be incorporated into the constraints $\sum_{i=1}^n a_i x_i \geq h$ and $x \in \mathbb{Z}^n$ found in (18).

Given that the IP (18) is bounded, we can theoretically determine the optimal solutions by enumerating and evaluating all its feasible solutions (all solution vectors that satisfy all the constraints of an optimization problem). For most problems, this simple method is clearly not tenable due to the size of the solution space (the set of feasible solutions). Even if the problem of enumerating the solution space is ignored, the remaining part of the algorithm, evaluating the feasible solutions, is not a polynomial-time algorithm, since the size of the solution space as a function of the number of decision variables, $n$, grows faster than any polynomial for large values of $n$ [9].

B&B is a method for dealing with the size of the solution space by systematically partitioning the feasible set (the set of feasible solutions, or the solution space) in such a way that portions can successively be discarded until an optimal solution is found. B&B begins by relaxing the integrality constraint on $x$ in the IP (18) and partitioning the solution space of the relaxed problem to create a number of subproblems. For each subproblem, the feasible set is a part of this partition, constructed by restricting the solution space in some way, meaning that the feasible set for each subproblem is a subset of the feasible set of the original relaxed problem. Before explaining further, we consider these subproblems: a subproblem can never attain a lower objective value (the value of the objective function for a given solution) than the original relaxed problem, since the feasible set of a subproblem is a subset of the feasible set of the original relaxed problem. Moreover, if the optimal solution to the original problem is also in the feasible set of a subproblem, then the optimal objective value of a subproblem cannot be higher than that of the original problem. We can discard, or prune, a subproblem if it is infeasible, if its optimal objective value is greater than or equal to that of a known integer solution to the original problem, if its optimal solution is integral, or if all of the subproblem's subproblems have been discarded; these pruning conditions are referred to, respectively, as pruned

by infeasibility, pruned by the bound, pruned by optimality, and pruned by its children [3]. The condition referred to as pruned by optimality implies that a subproblem can be discarded because, if a subproblem has an integer optimal solution, this solution is also a solution to the original IP problem [3]. The latter condition gives rise to the tree structure of a B&B search; this tree of subproblems is referred to as the search tree. By repeating this process of creating subproblems, eventually the optimal solution will be found and all subproblems will be pruned, proving the optimality of the best (lowest-cost) integral solution found. This method is called branch and bound since subproblems, known as branches, are created that are cut off by a bound, the best integral solution available [3] [24].

In this thesis, the terms branches, subproblems and nodes are used synonymously. A node that is branched upon to create new subproblems is called a parent. The subproblems created by branching on a parent are called children. The parent of a node, the parent of the parent of a node, etc. are considered the ancestors of a node; and the children of a node, the children of the children of a node, etc. are descendants of a node. The node that is the ancestor to all other nodes, i.e. the node whose subproblem is the LP relaxation of the original IP, is referred to as the root node. Example 4.1 demonstrates how the B&B algorithm can be used to solve the crew pairing model (6) presented in Example 3.2.
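The pruning logic above can be summarized in a short, schematic sketch. This is not the thesis implementation; the routines for solving a node's LP relaxation and for creating children (by variable or constraint branching) are assumed to be supplied by the caller.

```python
import math

def branch_and_bound(root, solve_relaxation, is_integral, branch):
    """Schematic depth-first B&B loop following Section 4.1.
    solve_relaxation(node) -> (objective value, solution), or None if infeasible;
    is_integral(solution) -> bool; branch(node, solution) -> list of child subproblems."""
    best_value, best_solution = math.inf, None
    open_nodes = [root]
    while open_nodes:
        node = open_nodes.pop()
        relaxed = solve_relaxation(node)
        if relaxed is None:
            continue                      # pruned by infeasibility
        value, solution = relaxed
        if value >= best_value:
            continue                      # pruned by the bound
        if is_integral(solution):
            best_value, best_solution = value, solution
            continue                      # pruned by optimality
        open_nodes.extend(branch(node, solution))   # create children
    return best_value, best_solution
```

A node whose children have all been processed simply disappears from the list of open nodes, which corresponds to it being pruned by its children.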

Example 4.1: B&B

This example demonstrates how B&B can be used to solve the ILP (6) from Example 3.2, restated here as

$$\min_{x \in \{0,1\}^{17}} \; c^T x, \quad \text{s.t. } Ax = \mathbf{1}_8. \qquad (19)$$

This ILP can be solved with B&B algorithms which branch either on the variables or on the constraints. Since this ILP has binary decision variables, branching on variables is simple and is accomplished by creating subproblems which successively fix variables to either 0 or 1. An example of this variable branching scheme is shown in Figure 2. The variables that are fixed in any subproblem can be seen in the figure by following the path up the tree from a subproblem back to the root node. Branching on the constraints is a bit more complicated. The subproblems created satisfy two constraints, i.e. constraint $g$ and constraint $h$, in two different ways: in one subproblem only pairings that contain both $g$ and $h$ or neither of them are considered, and in the other subproblem only pairings that contain at most one of $g$ and $h$ are considered. These branchings are denoted, respectively, by both($g$, $h$) and dif($g$, $h$). This branching scheme is demonstrated in Figure 3. For more details concerning this constraint branching scheme, see the description of Ryan-Foster branching in Section 4.3.

Figure 2: Variable Branching (the B&B search tree obtained when branching on the variables; the root node has a fractional LP solution and an integral optimal solution is found after a few nodes)

Figure 3: Constraint Branching (the B&B search tree obtained when branching on how pairs of constraints are satisfied, using the both/dif branches described above)

It is worthwhile to note that an optimal solution was found relatively early in both branching trees. The similarity in the number of nodes required and the geometry of the branching trees in the variable and constraint branching are specific to this instance and the branching choices, and not to the branching schemes in general.

At first glance, B&B seems simple, but in practice, determining the branches, that is, how the feasible set should be partitioned, is nontrivial. Choosing, for example, whether the B&B algorithm branches over the values the variables take (i.e. variable branching) or over how the constraints are satisfied (i.e. constraint branching; see restrictions on subsets in [5]) can affect both the difficulty of solving the subproblems and how well B&B can work within other frameworks. Example 4.1 demonstrates both of these branching alternatives. It is possible that in pathological cases a B&B algorithm will not be able to cut off any branches until all subproblems have been solved; this means that what we set out to avoid, evaluating every feasible solution, will have occurred [7]. Further, determining the order in which the branches are explored is critical, since good integral solutions may be discovered along the way [9] [3]. These good integral solutions, which give upper bounds on the optimal objective value, let us discard subproblems, thus avoiding unnecessary branching.

Once the above issues are resolved for a particular class of ILPs, B&B can efficiently find the optimal solutions. However, B&B does not suffice for the crew pairing problem. This is because there are usually far too many possible pairings, which would make the B&B subproblems very large, requiring an unacceptably long time to optimize. Thus, we must select or generate good subsets of the possible pairings in some way. Section 4.2 covers the column generation algorithm, which resolves the pairing generation issue.

4.2 Column Generation

This section explains column generation in the context of LP. It also builds on general LP theory, such as the theory of the simplex method. This general LP theory is covered in [4], [10], [8], and [28]. For more details concerning column generation, the classic book by Lasdon [20] is an excellent source. For a review of recent developments in column generation, see Lübbecke and Desrosiers [2]. Although the end goal is to use column generation for solving ILPs within a B&P framework, this section only covers column generation in terms of LP problems. This suffices since the goal is to explain the role of column generation within B&P algorithms, which use column generation to solve the LP relaxations of huge ILPs.

Consider the following LP relaxation of (18), which will be referred to as the master problem (MP):

$$z_{MP} = \min_{x \in \mathbb{R}^n} \; \sum_{i \in I} c_i x_i, \quad \text{s.t. } \sum_{i \in I} a_i x_i \geq h, \qquad (20)$$

where $a_i, h \in \mathbb{R}^m$. We assume that the feasible set defined by (20) is bounded. In the MP (20), the index set $I$ indexes the costs $c_i$, the variables $x_i$ in the objective function, and the associated constraint columns $a_i$. The MP (20) may be intractable due to the large number of variables indexed in $I$. This may happen, for instance, if the algorithm used to solve (20) requires more memory than is available. The way around this impasse is to only consider a subset of the variables and columns. Thus, instead of considering $I$, consider $I' \subseteq I$. This gives the restricted master problem (RMP) of the MP (20):

$$z_{RMP} = \min_{x} \; \sum_{i \in I'} c_i x_i, \quad \text{s.t. } \sum_{i \in I'} a_i x_i \geq h, \qquad (21)$$

where $a_i, h \in \mathbb{R}^m$. Given that $I'$ is a sufficiently small subset of $I$, the RMP is tractable. Of course, the optimal solution to the RMP (21) is not generally optimal in the MP (20). Since every feasible solution in the RMP is also feasible in the MP, it holds that $z_{RMP} \geq z_{MP}$. However, for large-scale problems, the optimal solution to the MP is sparse, or in other words, most of the optimal $x_i$ for $i \in I$ are equal to zero. This means that if we can correctly generate the relatively few columns that are needed in an optimal solution, the RMP can take the place of the full MP.
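The alternation between solving the RMP and generating new columns can be sketched as follows, again in the hypothetical PuLP setting of the earlier examples. The pricing routine, which would correspond to a pairing generator in the crew pairing context, is assumed to be supplied by the caller and is not part of the thesis implementation.

```python
import pulp

def column_generation(flights, initial_columns, price_out):
    """Schematic column generation loop for the LP relaxations (20)-(21).
    initial_columns: dict id -> (covered flights, cost) giving a feasible RMP.
    price_out(duals): hypothetical pricing routine returning a new column
    (covered flights, cost) with negative reduced cost, or None if none exists."""
    columns = dict(initial_columns)
    while True:
        rmp = pulp.LpProblem("RMP", pulp.LpMinimize)
        x = {i: pulp.LpVariable(f"x_{i}", lowBound=0) for i in columns}
        rmp += pulp.lpSum(cost * x[i] for i, (_, cost) in columns.items())
        for j in flights:
            rmp += (pulp.lpSum(x[i] for i, (covered, _) in columns.items() if j in covered) >= 1,
                    f"cover_{j}")
        rmp.solve(pulp.PULP_CBC_CMD(msg=False))
        duals = {j: rmp.constraints[f"cover_{j}"].pi for j in flights}  # dual prices
        new_column = price_out(duals)
        if new_column is None:              # no column with negative reduced cost remains
            return rmp, columns
        columns[max(columns) + 1] = new_column   # add the priced-out column and re-solve
```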


More information

Linear Programming. March 14, 2014

Linear Programming. March 14, 2014 Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

More information

Linear Programming Supplement E

Linear Programming Supplement E Linear Programming Supplement E Linear Programming Linear programming: A technique that is useful for allocating scarce resources among competing demands. Objective function: An expression in linear programming

More information

Scheduling Algorithm with Optimization of Employee Satisfaction

Scheduling Algorithm with Optimization of Employee Satisfaction Washington University in St. Louis Scheduling Algorithm with Optimization of Employee Satisfaction by Philip I. Thomas Senior Design Project http : //students.cec.wustl.edu/ pit1/ Advised By Associate

More information

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

More information

Linear Programming Notes VII Sensitivity Analysis

Linear Programming Notes VII Sensitivity Analysis Linear Programming Notes VII Sensitivity Analysis 1 Introduction When you use a mathematical model to describe reality you must make approximations. The world is more complicated than the kinds of optimization

More information

5.1 Bipartite Matching

5.1 Bipartite Matching CS787: Advanced Algorithms Lecture 5: Applications of Network Flow In the last lecture, we looked at the problem of finding the maximum flow in a graph, and how it can be efficiently solved using the Ford-Fulkerson

More information

The Graphical Method: An Example

The Graphical Method: An Example The Graphical Method: An Example Consider the following linear program: Maximize 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2 0, where, for ease of reference,

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm. Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of three

More information

2) Write in detail the issues in the design of code generator.

2) Write in detail the issues in the design of code generator. COMPUTER SCIENCE AND ENGINEERING VI SEM CSE Principles of Compiler Design Unit-IV Question and answers UNIT IV CODE GENERATION 9 Issues in the design of code generator The target machine Runtime Storage

More information

Linear Programming Notes V Problem Transformations

Linear Programming Notes V Problem Transformations Linear Programming Notes V Problem Transformations 1 Introduction Any linear programming problem can be rewritten in either of two standard forms. In the first form, the objective is to maximize, the material

More information

Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS

Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS Sensitivity Analysis 3 We have already been introduced to sensitivity analysis in Chapter via the geometry of a simple example. We saw that the values of the decision variables and those of the slack and

More information

A MODEL TO SOLVE EN ROUTE AIR TRAFFIC FLOW MANAGEMENT PROBLEM:

A MODEL TO SOLVE EN ROUTE AIR TRAFFIC FLOW MANAGEMENT PROBLEM: A MODEL TO SOLVE EN ROUTE AIR TRAFFIC FLOW MANAGEMENT PROBLEM: A TEMPORAL AND SPATIAL CASE V. Tosic, O. Babic, M. Cangalovic and Dj. Hohlacov Faculty of Transport and Traffic Engineering, University of

More information

Basic Components of an LP:

Basic Components of an LP: 1 Linear Programming Optimization is an important and fascinating area of management science and operations research. It helps to do less work, but gain more. Linear programming (LP) is a central topic

More information

Chapter 11 Monte Carlo Simulation

Chapter 11 Monte Carlo Simulation Chapter 11 Monte Carlo Simulation 11.1 Introduction The basic idea of simulation is to build an experimental device, or simulator, that will act like (simulate) the system of interest in certain important

More information

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 17 Shannon-Fano-Elias Coding and Introduction to Arithmetic Coding

More information

Notes on Factoring. MA 206 Kurt Bryan

Notes on Factoring. MA 206 Kurt Bryan The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor

More information

Scheduling Home Health Care with Separating Benders Cuts in Decision Diagrams

Scheduling Home Health Care with Separating Benders Cuts in Decision Diagrams Scheduling Home Health Care with Separating Benders Cuts in Decision Diagrams André Ciré University of Toronto John Hooker Carnegie Mellon University INFORMS 2014 Home Health Care Home health care delivery

More information

Compact Representations and Approximations for Compuation in Games

Compact Representations and Approximations for Compuation in Games Compact Representations and Approximations for Compuation in Games Kevin Swersky April 23, 2008 Abstract Compact representations have recently been developed as a way of both encoding the strategic interactions

More information

Least-Squares Intersection of Lines

Least-Squares Intersection of Lines Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a

More information

Solving Linear Programs

Solving Linear Programs Solving Linear Programs 2 In this chapter, we present a systematic procedure for solving linear programs. This procedure, called the simplex method, proceeds by moving from one feasible solution to another,

More information

Operation Count; Numerical Linear Algebra

Operation Count; Numerical Linear Algebra 10 Operation Count; Numerical Linear Algebra 10.1 Introduction Many computations are limited simply by the sheer number of required additions, multiplications, or function evaluations. If floating-point

More information

Fairness in Routing and Load Balancing

Fairness in Routing and Load Balancing Fairness in Routing and Load Balancing Jon Kleinberg Yuval Rabani Éva Tardos Abstract We consider the issue of network routing subject to explicit fairness conditions. The optimization of fairness criteria

More information

Lecture 3. Linear Programming. 3B1B Optimization Michaelmas 2015 A. Zisserman. Extreme solutions. Simplex method. Interior point method

Lecture 3. Linear Programming. 3B1B Optimization Michaelmas 2015 A. Zisserman. Extreme solutions. Simplex method. Interior point method Lecture 3 3B1B Optimization Michaelmas 2015 A. Zisserman Linear Programming Extreme solutions Simplex method Interior point method Integer programming and relaxation The Optimization Tree Linear Programming

More information

What mathematical optimization can, and cannot, do for biologists. Steven Kelk Department of Knowledge Engineering (DKE) Maastricht University, NL

What mathematical optimization can, and cannot, do for biologists. Steven Kelk Department of Knowledge Engineering (DKE) Maastricht University, NL What mathematical optimization can, and cannot, do for biologists Steven Kelk Department of Knowledge Engineering (DKE) Maastricht University, NL Introduction There is no shortage of literature about the

More information

JUST-IN-TIME SCHEDULING WITH PERIODIC TIME SLOTS. Received December May 12, 2003; revised February 5, 2004

JUST-IN-TIME SCHEDULING WITH PERIODIC TIME SLOTS. Received December May 12, 2003; revised February 5, 2004 Scientiae Mathematicae Japonicae Online, Vol. 10, (2004), 431 437 431 JUST-IN-TIME SCHEDULING WITH PERIODIC TIME SLOTS Ondřej Čepeka and Shao Chin Sung b Received December May 12, 2003; revised February

More information

Chapter 2 Solving Linear Programs

Chapter 2 Solving Linear Programs Chapter 2 Solving Linear Programs Companion slides of Applied Mathematical Programming by Bradley, Hax, and Magnanti (Addison-Wesley, 1977) prepared by José Fernando Oliveira Maria Antónia Carravilla A

More information

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization 2.1. Introduction Suppose that an economic relationship can be described by a real-valued

More information

Sudoku puzzles and how to solve them

Sudoku puzzles and how to solve them Sudoku puzzles and how to solve them Andries E. Brouwer 2006-05-31 1 Sudoku Figure 1: Two puzzles the second one is difficult A Sudoku puzzle (of classical type ) consists of a 9-by-9 matrix partitioned

More information

Robust Geometric Programming is co-np hard

Robust Geometric Programming is co-np hard Robust Geometric Programming is co-np hard André Chassein and Marc Goerigk Fachbereich Mathematik, Technische Universität Kaiserslautern, Germany Abstract Geometric Programming is a useful tool with a

More information

Simultaneous Column-and-Row Generation for Large-Scale Linear Programs with Column-Dependent-Rows

Simultaneous Column-and-Row Generation for Large-Scale Linear Programs with Column-Dependent-Rows SABANCI UNIVERSITY Orhanlı-Tuzla, 34956 Istanbul, Turkey Phone: +90 (216) 483-9500 Fax: +90 (216) 483-9550 http://www.sabanciuniv.edu http://algopt.sabanciuniv.edu/projects April 23, 2012 Simultaneous

More information

3. Mathematical Induction

3. Mathematical Induction 3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)

More information

Basics of Polynomial Theory

Basics of Polynomial Theory 3 Basics of Polynomial Theory 3.1 Polynomial Equations In geodesy and geoinformatics, most observations are related to unknowns parameters through equations of algebraic (polynomial) type. In cases where

More information

Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling

Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling Approximation Algorithms Chapter Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you're unlikely to find a poly-time algorithm. Must sacrifice one

More information

CHAPTER 9. Integer Programming

CHAPTER 9. Integer Programming CHAPTER 9 Integer Programming An integer linear program (ILP) is, by definition, a linear program with the additional constraint that all variables take integer values: (9.1) max c T x s t Ax b and x integral

More information

The Trip Scheduling Problem

The Trip Scheduling Problem The Trip Scheduling Problem Claudia Archetti Department of Quantitative Methods, University of Brescia Contrada Santa Chiara 50, 25122 Brescia, Italy Martin Savelsbergh School of Industrial and Systems

More information

High Performance Computing for Operation Research

High Performance Computing for Operation Research High Performance Computing for Operation Research IEF - Paris Sud University claude.tadonki@u-psud.fr INRIA-Alchemy seminar, Thursday March 17 Research topics Fundamental Aspects of Algorithms and Complexity

More information

Randomization Approaches for Network Revenue Management with Customer Choice Behavior

Randomization Approaches for Network Revenue Management with Customer Choice Behavior Randomization Approaches for Network Revenue Management with Customer Choice Behavior Sumit Kunnumkal Indian School of Business, Gachibowli, Hyderabad, 500032, India sumit kunnumkal@isb.edu March 9, 2011

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Dynamic TCP Acknowledgement: Penalizing Long Delays

Dynamic TCP Acknowledgement: Penalizing Long Delays Dynamic TCP Acknowledgement: Penalizing Long Delays Karousatou Christina Network Algorithms June 8, 2010 Karousatou Christina (Network Algorithms) Dynamic TCP Acknowledgement June 8, 2010 1 / 63 Layout

More information

Application. Outline. 3-1 Polynomial Functions 3-2 Finding Rational Zeros of. Polynomial. 3-3 Approximating Real Zeros of.

Application. Outline. 3-1 Polynomial Functions 3-2 Finding Rational Zeros of. Polynomial. 3-3 Approximating Real Zeros of. Polynomial and Rational Functions Outline 3-1 Polynomial Functions 3-2 Finding Rational Zeros of Polynomials 3-3 Approximating Real Zeros of Polynomials 3-4 Rational Functions Chapter 3 Group Activity:

More information

Formulation of simple workforce skill constraints in assembly line balancing models

Formulation of simple workforce skill constraints in assembly line balancing models Ŕ periodica polytechnica Social and Management Sciences 19/1 (2011) 43 50 doi: 10.3311/pp.so.2011-1.06 web: http:// www.pp.bme.hu/ so c Periodica Polytechnica 2011 Formulation of simple workforce skill

More information

Multiple Spanning Tree Protocol (MSTP), Multi Spreading And Network Optimization Model

Multiple Spanning Tree Protocol (MSTP), Multi Spreading And Network Optimization Model Load Balancing of Telecommunication Networks based on Multiple Spanning Trees Dorabella Santos Amaro de Sousa Filipe Alvelos Instituto de Telecomunicações 3810-193 Aveiro, Portugal dorabella@av.it.pt Instituto

More information

Final Report. to the. Center for Multimodal Solutions for Congestion Mitigation (CMS) CMS Project Number: 2010-018

Final Report. to the. Center for Multimodal Solutions for Congestion Mitigation (CMS) CMS Project Number: 2010-018 Final Report to the Center for Multimodal Solutions for Congestion Mitigation (CMS) CMS Project Number: 2010-018 CMS Project Title: Impacts of Efficient Transportation Capacity Utilization via Multi-Product

More information

Route optimization applied to school transports A method combining column generation with greedy heuristics

Route optimization applied to school transports A method combining column generation with greedy heuristics PREPRINT Route optimization applied to school transports A method combining column generation with greedy heuristics Mikael Andersson Peter Lindroth Department of Mathematics CHALMERS UNIVERSITY OF TECHNOLOGY

More information

How To Solve A Minimum Set Covering Problem (Mcp)

How To Solve A Minimum Set Covering Problem (Mcp) Measuring Rationality with the Minimum Cost of Revealed Preference Violations Mark Dean and Daniel Martin Online Appendices - Not for Publication 1 1 Algorithm for Solving the MASP In this online appendix

More information

Formal Languages and Automata Theory - Regular Expressions and Finite Automata -

Formal Languages and Automata Theory - Regular Expressions and Finite Automata - Formal Languages and Automata Theory - Regular Expressions and Finite Automata - Samarjit Chakraborty Computer Engineering and Networks Laboratory Swiss Federal Institute of Technology (ETH) Zürich March

More information

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1. MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

More information

Proximal mapping via network optimization

Proximal mapping via network optimization L. Vandenberghe EE236C (Spring 23-4) Proximal mapping via network optimization minimum cut and maximum flow problems parametric minimum cut problem application to proximal mapping Introduction this lecture:

More information

Support Vector Machine (SVM)

Support Vector Machine (SVM) Support Vector Machine (SVM) CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Margin concept Hard-Margin SVM Soft-Margin SVM Dual Problems of Hard-Margin

More information

Batch Production Scheduling in the Process Industries. By Prashanthi Ravi

Batch Production Scheduling in the Process Industries. By Prashanthi Ravi Batch Production Scheduling in the Process Industries By Prashanthi Ravi INTRODUCTION Batch production - where a batch means a task together with the quantity produced. The processing of a batch is called

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology Handout 6 18.433: Combinatorial Optimization February 20th, 2009 Michel X. Goemans 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the

More information

Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

More information

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions.

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions. Algebra I Overview View unit yearlong overview here Many of the concepts presented in Algebra I are progressions of concepts that were introduced in grades 6 through 8. The content presented in this course

More information

Optimal shift scheduling with a global service level constraint

Optimal shift scheduling with a global service level constraint Optimal shift scheduling with a global service level constraint Ger Koole & Erik van der Sluis Vrije Universiteit Division of Mathematics and Computer Science De Boelelaan 1081a, 1081 HV Amsterdam The

More information

An Introduction to Linear Programming

An Introduction to Linear Programming An Introduction to Linear Programming Steven J. Miller March 31, 2007 Mathematics Department Brown University 151 Thayer Street Providence, RI 02912 Abstract We describe Linear Programming, an important

More information

High-performance local search for planning maintenance of EDF nuclear park

High-performance local search for planning maintenance of EDF nuclear park High-performance local search for planning maintenance of EDF nuclear park Frédéric Gardi Karim Nouioua Bouygues e-lab, Paris fgardi@bouygues.com Laboratoire d'informatique Fondamentale - CNRS UMR 6166,

More information

4.6 Linear Programming duality

4.6 Linear Programming duality 4.6 Linear Programming duality To any minimization (maximization) LP we can associate a closely related maximization (minimization) LP. Different spaces and objective functions but in general same optimal

More information

4 UNIT FOUR: Transportation and Assignment problems

4 UNIT FOUR: Transportation and Assignment problems 4 UNIT FOUR: Transportation and Assignment problems 4.1 Objectives By the end of this unit you will be able to: formulate special linear programming problems using the transportation model. define a balanced

More information

Lecture 2: August 29. Linear Programming (part I)

Lecture 2: August 29. Linear Programming (part I) 10-725: Convex Optimization Fall 2013 Lecture 2: August 29 Lecturer: Barnabás Póczos Scribes: Samrachana Adhikari, Mattia Ciollaro, Fabrizio Lecci Note: LaTeX template courtesy of UC Berkeley EECS dept.

More information

Answer Key for California State Standards: Algebra I

Answer Key for California State Standards: Algebra I Algebra I: Symbolic reasoning and calculations with symbols are central in algebra. Through the study of algebra, a student develops an understanding of the symbolic language of mathematics and the sciences.

More information

SOLVING LINEAR SYSTEMS

SOLVING LINEAR SYSTEMS SOLVING LINEAR SYSTEMS Linear systems Ax = b occur widely in applied mathematics They occur as direct formulations of real world problems; but more often, they occur as a part of the numerical analysis

More information

Cost Models for Vehicle Routing Problems. 8850 Stanford Boulevard, Suite 260 R. H. Smith School of Business

Cost Models for Vehicle Routing Problems. 8850 Stanford Boulevard, Suite 260 R. H. Smith School of Business 0-7695-1435-9/02 $17.00 (c) 2002 IEEE 1 Cost Models for Vehicle Routing Problems John Sniezek Lawerence Bodin RouteSmart Technologies Decision and Information Technologies 8850 Stanford Boulevard, Suite

More information

THE SELECTION OF RETURNS FOR AUDIT BY THE IRS. John P. Hiniker, Internal Revenue Service

THE SELECTION OF RETURNS FOR AUDIT BY THE IRS. John P. Hiniker, Internal Revenue Service THE SELECTION OF RETURNS FOR AUDIT BY THE IRS John P. Hiniker, Internal Revenue Service BACKGROUND The Internal Revenue Service, hereafter referred to as the IRS, is responsible for administering the Internal

More information

An optimization model for aircraft maintenance scheduling and re-assignment

An optimization model for aircraft maintenance scheduling and re-assignment Transportation Research Part A 37 (2003) 29 48 www.elsevier.com/locate/tra An optimization model for aircraft maintenance scheduling and re-assignment Chellappan Sriram 1, Ali Haghani * Department of Civil

More information

Duality in Linear Programming

Duality in Linear Programming Duality in Linear Programming 4 In the preceding chapter on sensitivity analysis, we saw that the shadow-price interpretation of the optimal simplex multipliers is a very useful concept. First, these shadow

More information

1.7 Graphs of Functions

1.7 Graphs of Functions 64 Relations and Functions 1.7 Graphs of Functions In Section 1.4 we defined a function as a special type of relation; one in which each x-coordinate was matched with only one y-coordinate. We spent most

More information

The Taxman Game. Robert K. Moniot September 5, 2003

The Taxman Game. Robert K. Moniot September 5, 2003 The Taxman Game Robert K. Moniot September 5, 2003 1 Introduction Want to know how to beat the taxman? Legally, that is? Read on, and we will explore this cute little mathematical game. The taxman game

More information

OPRE 6201 : 2. Simplex Method

OPRE 6201 : 2. Simplex Method OPRE 6201 : 2. Simplex Method 1 The Graphical Method: An Example Consider the following linear program: Max 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2

More information