# List Scheduling in Order of α-points on a Single Machine


Martin Skutella
Fachbereich Mathematik, Universität Dortmund, D-44221 Dortmund, Germany

**Abstract.** Many approximation results for single machine scheduling problems rely on the conversion of preemptive schedules into (preemptive or non-preemptive) solutions. The initial preemptive schedule is usually an optimal solution to a combinatorial relaxation or a linear programming relaxation of the scheduling problem under consideration. It therefore provides a lower bound on the optimal objective function value. However, it also contains structural information which is useful for the construction of provably good feasible schedules. In this context, list scheduling in order of so-called α-points has evolved as an important and successful tool. We give a survey and a uniform presentation of several approximation results from the last years for single machine scheduling with the total weighted completion time objective which rely on the concept of α-points.

## 1 Introduction

We consider the following single machine scheduling problem. There is a set of n jobs J = {1, ..., n} that must be processed on a single machine. Each job j has a non-negative integral processing time p_j, that is, it must be processed for p_j time units. The machine can process at most one job at a time. Each job has a non-negative integral release date r_j before which it cannot be processed. In preemptive schedules, a job may repeatedly be interrupted and continued later. In non-preemptive schedules, a job must be processed in an uninterrupted fashion. There may be precedence constraints between jobs: if j → k for j, k ∈ J, it is required that j is completed before k can start. We denote the completion time of job j by C_j. A non-negative weight w_j is associated with each job, and the goal is to minimize the total weighted completion time Σ_{j∈J} w_j C_j.
In order to keep the presentation simple, we will almost always assume that the processing times p_j are positive integers and that w_j > 0 for all jobs j ∈ J. However, all results can be carried over to the case with zero processing times and weights. In scheduling, it is quite convenient to refer to the respective problems using the standard classification scheme of Graham, Lawler, Lenstra, and Rinnooy Kan [2]. The problems that we consider are special variants of the single machine scheduling problem 1 | r_j, prec (, pmtn) | Σ w_j C_j. The entry 1 in the first field indicates that the scheduling environment provides only one machine. The second field can be empty or contain some of the job characteristics r_j, prec, and pmtn, indicating whether there

E. Bampis et al. (Eds.): Approximation and Online Algorithms, LNCS 3484, 2006. © Springer-Verlag Berlin Heidelberg 2006

are nontrivial release dates or precedence constraints, and whether preemption is allowed. We put an item in brackets to indicate that we consider both variants of the problem, with and without the corresponding feature. The third field refers to the objective function. We are interested in minimizing the total weighted completion time Σ w_j C_j or, for the special case of unit weights, the total completion time Σ C_j.

Scheduling with the total weighted completion time objective has recently received a great deal of attention, partly because of its importance as a fundamental problem in scheduling, and also because of new applications, for instance, in compiler optimization [9] and in parallel computing [6]. In the last years, there has been significant progress in the design of approximation algorithms for various special cases of the general single machine problem 1 | r_j, prec (, pmtn) | Σ w_j C_j; see, e.g., [29, 2, 6, 35, 7, 8, 36, 3]. Recall that a ρ-approximation algorithm is a polynomial-time algorithm guaranteed to deliver a solution of cost at most ρ times the optimal value; the value ρ is called the performance guarantee or performance ratio of the algorithm. A randomized ρ-approximation algorithm is a polynomial-time algorithm that produces a feasible solution whose expected objective function value is within a factor of ρ of the optimal value.

For problems without precedence constraints, we also consider the corresponding on-line setting where jobs arrive over time and the number of jobs is unknown in advance. Each job j ∈ J becomes available at its release date, which is not known in advance; at time r_j, we learn both its processing time p_j and its weight w_j. Even in the on-line setting, the value of the computed schedule is compared to the optimal (off-line) schedule. The derived bounds are called competitive ratios.
While all randomized approximation algorithms discussed in this paper can be derandomized in the off-line setting without loss in the performance guarantee, there is a significant difference in the on-line setting. It is well known that randomized on-line algorithms often yield better competitive ratios than any deterministic on-line algorithm can achieve (see, e.g., the positive and negative results on the problem 1 | r_j | Σ C_j in Table 1).

The conversion of preemptive schedules into non-preemptive schedules in order to obtain approximation algorithms was introduced by Phillips, Stein, and Wein [29] and was subsequently also used in [7], among others. Phillips et al. [29] and Hall, Shmoys, and Wein [22] introduced the idea of list scheduling in order of α-points to convert preemptive schedules into non-preemptive ones. Even earlier, de Sousa [46] used this idea heuristically to turn preemptive solutions to a time-indexed linear programming relaxation into feasible non-preemptive schedules. For α ∈ (0, 1], the α-point of a job with respect to a preemptive schedule is the first point in time when an α-fraction of the job has been completed. Independently of each other, Goemans [6] and Chekuri, Motwani, Natarajan, and Stein [] took up this idea and showed that choosing α randomly leads to better results. Later, α-points with individual values of α for different jobs were used by Goemans, Queyranne, Schulz, Skutella, and Wang [7].

Table 1 summarizes approximation and on-line results based on the concept of α-points which were obtained for the single machine scheduling problems under consideration. These results are compared with the best known approximation and on-line results and also with corresponding hardness results. In the absence of precedence

Table 1. Summary of approximation results and non-approximability results. In all cases, ε > 0 can be chosen arbitrarily small. The bounds marked with one and two stars refer to randomized and deterministic on-line algorithms, respectively. The second column contains the results discussed in this paper, while the third column gives the best currently known results. The last column contains the best known lower bounds, that is, results on the hardness of approximation.

| problem | α-points | best known | lower bounds |
|---|---|---|---|
| 1 \| r_j \| Σ C_j | 1.582 [] | 1 + ε [] | 1.582* [48, 5], 2** [23] |
| 1 \| r_j \| Σ w_j C_j | 1.6853 [7] | 1 + ε [] | 1.582* [48, 5], 2** [23] |
| 1 \| r_j, pmtn \| Σ w_j C_j | 4/3 [36] | 1 + ε [] | 1.038* [4], 1.073** [4] |
| 1 \| prec \| Σ w_j C_j | 2 + ε [35] | 2 [2] | ? |
| 1 \| r_j, prec \| Σ w_j C_j | e + ε [35] | e + ε [35] | ? |
| 1 \| r_j, prec, pmtn \| Σ w_j C_j | 2 + ε [35] | 2 [2] | ? |

constraints, polynomial-time approximation schemes have recently been obtained by Afrati et al. []; however, list scheduling in order of α-points still leads to the best known randomized on-line algorithms for these problems. For problems with precedence constraints, the concept of α-points either yields the best known approximation result (for the problem 1 | r_j, prec | Σ w_j C_j) or only slightly worse results. It is one of the most interesting open problems in the area of machine scheduling to obtain non-approximability results for these problems [38, 5].

We mention further work in the context of α-points: Schulz and Skutella [37] apply the idea of list scheduling in order of α-points to scheduling problems on identical parallel machines, based on a single machine relaxation and random assignments of jobs to machines. Savelsbergh, Uma, and Wein [32] give an experimental study of approximation algorithms for the problem 1 | r_j | Σ w_j C_j. In particular, they analyze list scheduling in order of α-points for one single α and for individual values of α for different jobs.
They also test the approximation algorithms within a branch-and-bound scheme and apply it to real-world scheduling instances arising at BASF AG in Ludwigshafen. Uma and Wein [49] extend this evaluation and study the relationship between several linear programming based lower bounds and combinatorial lower bounds for the problem 1 | r_j | Σ w_j C_j. The heuristic use of α-points for resource-constrained project scheduling problems was also empirically analyzed by Cavalcante, de Souza, Savelsbergh, Wang, and Wolsey [5] and by Möhring, Schulz, Stork, and Uetz [26].

In this paper, we give a survey and a uniform presentation of the approximation results listed in the second column of Table 1. We start with a description and analysis of simple list scheduling heuristics for non-preemptive and preemptive single machine

scheduling in Section 2. The concept of α-points is introduced in Section 3. In Section 4 we review the result of Chekuri et al. for the problem 1 | r_j | Σ C_j. Time-indexed LP relaxations for general constrained single machine scheduling, together with results on their quality, are discussed in Section 5. In Section 6 we review related results of Schulz [34] and Hall, Schulz, Shmoys, and Wein [2] on list scheduling in order of LP completion times. The approximation results of Goemans et al. [7] and Schulz and Skutella [36] for single machine scheduling with release dates are discussed in Section 7. The results of Schulz and Skutella [35] for the problem with precedence constraints are presented in Section 8. Finally, the application of these results to the on-line setting is discussed in Section 9.

## 2 Non-preemptive and Preemptive List Scheduling

In this section we introduce and analyze non-preemptive and preemptive list scheduling on a single machine for a given set of jobs with release dates and precedence constraints. We consider both the non-preemptive and the preemptive variant of the problem and argue that the class of list schedules always contains an optimal schedule. Finally, we discuss simple approximations based on list scheduling. Many results presented in this section belong to the folklore in the field of single machine scheduling.

Consider a list representing a total order on the set of jobs which extends the partial order given by the precedence constraints. A straightforward way to construct a feasible non-preemptive schedule is to process the jobs in the given order, each as early as possible with respect to the release dates. This routine is called LIST SCHEDULING, and a schedule constructed in this way is a (non-preemptive) list schedule.
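As a minimal sketch of the LIST SCHEDULING routine just described (jobs given as hypothetical (release_date, processing_time) pairs; the list `order` is assumed to be a linear extension of the precedence constraints):

```python
# A minimal sketch of LIST SCHEDULING, assuming jobs are given as
# (release_date, processing_time) pairs and `order` is the job list.
def list_schedule(jobs, order):
    t = 0
    completion = [0] * len(jobs)
    for j in order:
        r, p = jobs[j]
        t = max(t, r) + p      # wait for the release date, then run job j
        completion[j] = t
    return completion
```

With weights w_j, the objective of the resulting schedule is then Σ_j w_j · completion[j].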
The first result for LIST SCHEDULING in this context was achieved by Smith [45] for the case without release dates or precedence constraints and is known as Smith's ratio rule:

**Theorem 1.** LIST SCHEDULING in order of non-decreasing ratios p_j/w_j gives an optimal solution for 1 | | Σ w_j C_j in O(n log n) time.

*Proof.* Since all release dates are zero, we can restrict to schedules without idle time. Consider a schedule S with two successive jobs j and k such that p_j/w_j > p_k/w_k. Exchanging j and k in S leads to a new schedule whose value differs from the value of S by w_j p_k − w_k p_j < 0. Thus, S is not optimal and the result follows. The running time of LIST SCHEDULING in order of non-decreasing ratios p_j/w_j is dominated by the time needed to sort the jobs and is therefore O(n log n).

For the special case of unit weights (w_j ≡ 1), Smith's ratio rule is sometimes also referred to as the shortest processing time rule (SPT-rule). We always assume that jobs are numbered such that p_1/w_1 ≤ ... ≤ p_n/w_n; moreover, whenever we talk about LIST SCHEDULING in order of non-decreasing ratios p_j/w_j, we refer to this sequence of jobs.

Depending on the given list and the release dates of jobs, one may have to introduce idle time when one job is completed but the next job in the list is not yet released. On the other hand, if preemptions are allowed, it does not make sense to leave the machine

idle while another job at a later position in the list is already available (released) and waiting. We would rather start this job and preempt it from the machine as soon as the next job in the list is released. In PREEMPTIVE LIST SCHEDULING we process at any point in time the first available job in the list. The resulting schedule is called a preemptive list schedule. Special applications of this routine have been considered, e.g., by Hall et al. [2] and by Goemans [5, 6]. An important property of preemptive list schedules is that whenever a job is preempted from the machine, it is only continued after all available jobs with higher priority are finished. Since we can assume without loss of generality that j → k implies r_j ≤ r_k, and since the given list is a linear extension of the precedence order, the preemptive list schedule respects precedence constraints and is therefore feasible.

The following result on the running time of PREEMPTIVE LIST SCHEDULING has been observed by Goemans [6].

**Lemma 1.** Given a list of n jobs, the corresponding list schedule can be created in linear time, while PREEMPTIVE LIST SCHEDULING can be implemented to run in O(n log n) time.

*Proof.* The first part of the lemma is clear. In order to construct a preemptive list schedule in O(n log n) time, we use a priority queue that always contains the currently available jobs which have not yet been finished; the key of a job is equal to its position in the given list. At each point in time we process the top element of the priority queue or leave the machine idle if the priority queue is empty. There are two types of events that cause an update of the priority queue: whenever a job is completed on the machine, we remove it from the priority queue; whenever a job is released, we add it to the priority queue. This results in a total of O(n) priority queue operations, each of which can be implemented to run in O(log n) time.
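The priority-queue implementation from the proof of Lemma 1 can be sketched as follows, assuming jobs are (release_date, processing_time) pairs and the list order is simply 0, 1, ..., n−1 (position in the list = priority); names are illustrative:

```python
import heapq

# A sketch of PREEMPTIVE LIST SCHEDULING: at any point in time, run the
# first available job in the list. Returns the completion times.
def preemptive_list_schedule(jobs):
    n = len(jobs)
    releases = sorted(range(n), key=lambda j: jobs[j][0])
    remaining = [p for _, p in jobs]
    completion = [0] * n
    available = []          # priority queue keyed by list position
    t, i = 0, 0             # current time, next release event
    while i < n or available:
        if not available:                       # machine idle until next release
            t = max(t, jobs[releases[i]][0])
        while i < n and jobs[releases[i]][0] <= t:
            heapq.heappush(available, releases[i]); i += 1
        j = available[0]
        # run job j until it finishes or until the next release event
        horizon = jobs[releases[i]][0] if i < n else float('inf')
        run = min(remaining[j], horizon - t)
        t += run; remaining[j] -= run
        if remaining[j] == 0:
            heapq.heappop(available)
            completion[j] = t
    return completion
```

For example, with jobs = [(2, 2), (0, 3)], job 1 is started at time 0, preempted at time 2 when the higher-priority job 0 is released, and resumed only after job 0 completes.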
As a consequence of the following lemma, one can restrict to (preemptive) list schedules.

**Lemma 2.** Given a feasible non-preemptive (preemptive) schedule, (PREEMPTIVE) LIST SCHEDULING in order of non-decreasing completion times does not increase the completion times of jobs.

*Proof.* The statement for non-preemptive schedules is easy to see: LIST SCHEDULING in order of non-decreasing completion times coincides with shifting the jobs in the given non-preemptive schedule one after another, in order of non-decreasing completion times, as far as possible to the left (backwards in time).

Let us turn to the preemptive case. We denote the completion time of a job j in the given schedule by C_j^P and in the preemptive list schedule by C_j. By construction, the new schedule is feasible since no job is processed before its release date. For a fixed job j, let t be the earliest point in time such that there is no idle time in the preemptive list schedule during (t, C_j] and only jobs k with C_k^P ≤ C_j^P are processed. We denote the set of these jobs by K. By the definition of t, we know that r_k ≥ t for all k ∈ K. Hence, C_j^P ≥ t + Σ_{k∈K} p_k. On the other hand, the definition of K implies C_j = t + Σ_{k∈K} p_k and therefore C_j ≤ C_j^P.

In PREEMPTIVE LIST SCHEDULING a job is only preempted when another job is released at that time. In particular, there are at most n − 1 preemptions in a preemptive list schedule. Moreover, since all release dates are integral, preemptions only occur at integral points in time. Therefore, we can restrict to schedules meeting this property. Nevertheless, we will sometimes consider schedules where preemptions can occur at arbitrary points in time, but we always keep in mind that those schedules can be converted by PREEMPTIVE LIST SCHEDULING without increasing their value.

### 2.1 Simple Bounds and Approximations

One of the most important techniques for approximating NP-hard machine scheduling problems with no preemptions allowed is the conversion of preemptive schedules into non-preemptive ones. The first result in this direction was given by Phillips et al. [29]. It is a 2-approximation algorithm for the problem 1 | r_j | Σ C_j.

**Lemma 3.** Consider LIST SCHEDULING according to a list 1, 2, ..., n. Then the completion time of job i, 1 ≤ i ≤ n, is bounded from above by

max_{k≤i} r_k + Σ_{k≤i} p_k.

*Proof.* For fixed i, modify the given instance by increasing the release date of each job j ≤ i to max_{k≤i} r_k. The completion time of i in the list schedule corresponding to this modified instance is equal to max_{k≤i} r_k + Σ_{k≤i} p_k.

It is well known that the preemptive problem 1 | r_j, pmtn | Σ C_j can be solved in polynomial time by the shortest remaining processing time rule (SRPT-rule) [4]: schedule at any point in time the job with the shortest remaining processing time. The value of this optimal preemptive schedule is a lower bound on the value of an optimal non-preemptive schedule. Together with the following conversion result, this yields the 2-approximation algorithm of Phillips et al. [29] for 1 | r_j | Σ C_j.

**Theorem 2.** Given an arbitrary feasible preemptive schedule P, LIST SCHEDULING in order of non-decreasing completion times yields a non-preemptive schedule where the completion time of each job is at most twice its completion time in P.

*Proof.* The proof follows directly from Lemma 3 since both max_{k≤i} r_k and Σ_{k≤i} p_k are lower bounds on C_i^P (we use the notation of Lemma 3).

An instance showing that the job-by-job bound given in Theorem 2 is tight can be found in [29].

### 2.2 A Generalization of Smith's Ratio Rule to 1 | r_j, pmtn | Σ w_j C_j

A natural generalization of Smith's ratio rule to 1 | r_j, pmtn | Σ w_j C_j is PREEMPTIVE LIST SCHEDULING in order of non-decreasing ratios p_j/w_j. Schulz and Skutella [36] give the following lower bound on the performance of this simple heuristic.

**Lemma 4.** The performance guarantee of PREEMPTIVE LIST SCHEDULING in order of non-decreasing ratios p_j/w_j is not better than 2, even if w_j = 1 for all j ∈ J.

*Proof.* For an arbitrary n ∈ N, consider an instance with n jobs: let w_j = 1, let the processing times p_1 < p_2 < ... < p_n be of order n², and choose strictly decreasing release dates r_1 > r_2 > ... > r_n such that PREEMPTIVE LIST SCHEDULING in order of non-decreasing ratios p_j/w_j preempts job j at time r_{j−1}, for j = 2, ..., n, and finishes it only after all jobs 1, ..., j−1 have been completed. The value of the resulting schedule is n⁴ − O(n³). The SRPT-rule, which solves instances of 1 | r_j, pmtn | Σ C_j optimally, sequences the jobs in order n, ..., 1; the resulting schedule has value n⁴/2 + O(n³). Consequently, the ratio of the objective function values of the two schedules tends to 2 as n goes to infinity.

On the other hand, one can give a matching upper bound of 2 on the performance of this algorithm. As pointed out in [36], the following observation is due to Goemans, Wein, and Williamson.

**Lemma 5.** PREEMPTIVE LIST SCHEDULING in order of non-decreasing ratios p_j/w_j is a 2-approximation for 1 | r_j, pmtn | Σ w_j C_j.

*Proof.* To prove the result we use two different lower bounds on the value Z* of an optimal solution. Since the completion time of a job is always at least as large as its release date, we get Z* ≥ Σ_j w_j r_j. The second lower bound is the value of an optimal solution to the relaxed problem where all release dates are zero; by Smith's ratio rule, this yields Z* ≥ Σ_j w_j (Σ_{k≤j} p_k). Let C_j denote the completion time of job j in the preemptive list schedule. By construction, C_j ≤ r_j + Σ_{k≤j} p_k and thus

Σ_j w_j C_j ≤ Σ_j w_j r_j + Σ_j w_j (Σ_{k≤j} p_k) ≤ 2 Z*.

This concludes the proof.

In spite of the negative result in Lemma 4, Schulz and Skutella [36] present an algorithm that converts the preemptive list schedule (in order of non-decreasing ratios p_j/w_j) into another preemptive schedule and achieves performance guarantee 4/3 for 1 | r_j, pmtn | Σ w_j C_j; see Section 7.
The analysis is based on the observation of Goemans [5] that the preemptive list schedule represents an optimal solution to an appropriate LP relaxation.

## 3 The Concept of α-Points

In this section we discuss a more sophisticated technique that converts feasible preemptive schedules into non-preemptive ones and generalizes the routine given in Theorem 2. The bounds discussed in this section lie at the heart of the approximation results presented below; almost all important structural insights are discussed here. We are given a single machine and a set of jobs with release dates and precedence constraints. Moreover, we consider a fixed feasible preemptive schedule P; the completion time of job j in this schedule is denoted by C_j^P.

For 0 < α ≤ 1, the α-point C_j^P(α) of job j with respect to P is the first point in time when an α-fraction of job j has been completed, i.e., when j has been processed on the machine for α·p_j time units. In particular, C_j^P(1) = C_j^P; for α = 0 we define C_j^P(0) to be the starting time of job j. The average over all points in time at which job j is being processed on the machine is called the mean busy time of j. In other words, the mean busy time is the average over all α-points of j, i.e., ∫₀¹ C_j^P(α) dα. This notion was introduced by Goemans [6]. If j is being continuously processed between C_j^P − p_j and C_j^P, its mean busy time is equal to C_j^P − p_j/2. Otherwise, it is bounded from above by C_j^P − p_j/2.

We will also use the following notation: for a fixed job j and 0 ≤ α ≤ 1, we denote by η_k(α) the fraction of job k that is completed by time C_j^P(α); in particular, η_j(α) = α. The amount of idle time that occurs before time C_j^P(α) on the machine is denoted by t_idle(α). The α-point of j can be expressed in terms of t_idle(α) and the η_k(α):

**Lemma 6.** For a fixed job j and 0 ≤ α ≤ 1, the α-point C_j^P(α) of job j can be written as

C_j^P(α) = t_idle(α) + Σ_{k∈J} η_k(α) p_k ≥ Σ_{k∈J} η_k(α) p_k,

where η_k(α) denotes the fraction of job k that is completed by time C_j^P(α).

### 3.1 List Scheduling in Order of α-Points

In Theorem 2 we have analyzed LIST SCHEDULING in order of non-decreasing completion times, i.e., in order of non-decreasing 1-points. Phillips et al. [29] and Hall et al. [22] introduced the idea of LIST SCHEDULING in order of non-decreasing α-points C_j^P(α) for some 0 < α ≤ 1. This is a more general way of capturing the structure of the given preemptive schedule; it can even be refined by choosing α randomly. We call the resulting schedule the α-schedule and denote the completion time of job j in this schedule by C_j^α. Goemans et al. [7] have further extended this idea to individual, i.e., job-dependent, values α_j and α-points C_j^P(α_j), for j ∈ J and 0 ≤ α_j ≤ 1.
We denote the vector consisting of all α_j by α = (α_1, ..., α_n). The α_j-point C_j^P(α_j) is then also called the α-point of j, and the α-schedule is constructed by LIST SCHEDULING in order of non-decreasing α-points; the completion time of job j in the α-schedule is denoted by C_j^α. For the feasible preemptive schedule P, the sequence of the jobs in order of non-decreasing α-points respects the precedence constraints since j → k implies

C_j^P(α_j) ≤ C_j^P = C_j^P(1) ≤ C_k^P(0) ≤ C_k^P(α_k).

Therefore the corresponding α-schedule is feasible. To analyze the completion times of jobs in an α-schedule, we also consider schedules that are constructed by a slightly different conversion routine which is called α-CONVERSION [7]. A similar procedure is implicitly contained in [, proof of Lemma 2.2].
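For illustration, α-points and mean busy times can be computed directly once each job's processing in P is represented as a list of disjoint intervals — a hypothetical representation chosen here for convenience, not one fixed by the text:

```python
# A sketch of computing alpha-points (and mean busy times), assuming each
# job's processing in the preemptive schedule P is given as a sorted list
# of disjoint intervals (s, e) whose total length is p_j.
def alpha_point(intervals, alpha):
    """First point in time when an alpha-fraction of the job is done."""
    p = sum(e - s for s, e in intervals)
    need = alpha * p        # processing time that must be completed
    done = 0.0
    for s, e in intervals:
        if done + (e - s) >= need:
            return s + (need - done)
        done += e - s

def mean_busy_time(intervals):
    """Average over all alpha-points = average time at which the job runs."""
    p = sum(e - s for s, e in intervals)
    return sum((e - s) * (s + e) / 2 for s, e in intervals) / p
```

For a job processed in (0, 1) and (2, 3), the 1/2-point is 1 and the mean busy time is 1.5, which is indeed below C_j^P − p_j/2 = 2 since the job is preempted.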

α-CONVERSION: Consider the jobs j ∈ J in order of non-increasing α-points C_j^P(α_j) and iteratively change the preemptive schedule P to a non-preemptive schedule by applying the following steps:

i) remove the α_j·p_j units of job j that are processed before C_j^P(α_j) and leave the machine idle during the corresponding time intervals; we say that this idle time is caused by job j;
ii) delay the whole processing that is done later than C_j^P(α_j) by p_j;
iii) remove the remaining (1 − α_j)-fraction of job j from the machine and shrink the corresponding time intervals; shrinking a time interval means to discard the interval and move earlier, by the corresponding amount, any processing that occurs later;
iv) process job j in the released time interval (C_j^P(α_j), C_j^P(α_j) + p_j].

Figure 1 contains an example illustrating the action of α-CONVERSION. Observe that in the resulting schedule jobs are processed in non-decreasing order of α-points and no job j is started before time C_j^P(α_j) ≥ r_j. The latter property will be useful in the analysis of on-line α-schedules in Section 9.

**Lemma 7.** The completion time of job j in the schedule computed by α-CONVERSION is equal to

C_j^P(α_j) + Σ_{k: η_k(α_j) ≥ α_k} (1 + α_k − η_k(α_j)) p_k.

*Proof.* Consider the schedule constructed by α-CONVERSION. The completion time of job j is equal to the idle time before its start plus the sum of processing times of jobs that start no later than j. Since the jobs are processed in non-decreasing order of their α-points, the amount of processing before the completion of job j is

Σ_{k: α_k ≤ η_k(α_j)} p_k.   (1)

The idle time before the start of job j can be written as the sum of the idle time t_idle(α_j) that already existed in the preemptive schedule P before C_j^P(α_j) plus the idle time before the start of job j that is caused in steps i) of α-CONVERSION; notice that steps iii) do not create any additional idle time since we shrink the affected time intervals. Each job k that is started no later than j, i.e., such that η_k(α_j) ≥ α_k, contributes α_k·p_k units of idle time; all other jobs k only contribute η_k(α_j)·p_k units of idle time. As a result, the total idle time before the start of job j can be written as

t_idle(α_j) + Σ_{k: α_k ≤ η_k(α_j)} α_k p_k + Σ_{k: α_k > η_k(α_j)} η_k(α_j) p_k.   (2)

The completion time of job j in the schedule constructed by α-CONVERSION is equal to the sum of the expressions in (1) and (2); the result then follows from Lemma 6.
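The α-schedule itself is obtained by plain LIST SCHEDULING in order of non-decreasing α-points. A minimal sketch, assuming the arrays release, proc, and alpha_points (with alpha_points[j] = C_j^P(α_j) precomputed from P) as hypothetical inputs:

```python
# A sketch of building the (non-preemptive) alpha-schedule: list the jobs
# in order of non-decreasing alpha-points, then schedule each as early as
# possible with respect to its release date.
def alpha_schedule(release, proc, alpha_points):
    order = sorted(range(len(proc)), key=lambda j: alpha_points[j])
    t, completion = 0, [0] * len(proc)
    for j in order:
        t = max(t, release[j]) + proc[j]   # wait for release, then run job j
        completion[j] = t
    return completion
```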

Fig. 1. The conversion of an arbitrary preemptive schedule P to a non-preemptive one by α-CONVERSION and by LIST SCHEDULING in order of non-decreasing α-points. (The figure shows an instance of four jobs with release dates r_j, processing times p_j, and values α_j; the preemptive schedule P with the α-points C_1(α_1), ..., C_4(α_4); the idle time caused by job 4; and the resulting α-schedule.)

It follows from Lemma 7 that the completion time C_j of each job j in the non-preemptive schedule constructed by α-CONVERSION satisfies C_j ≥ C_j^P(α_j) + p_j ≥ r_j + p_j; hence the schedule is feasible. We obtain the following corollary.

**Corollary 1.** The completion time of job j in an α-schedule can be bounded by

C_j^α ≤ C_j^P(α_j) + Σ_{k: η_k(α_j) ≥ α_k} (1 + α_k − η_k(α_j)) p_k.

*Proof.* Since the schedule constructed by α-CONVERSION processes the jobs in order of non-decreasing α-points, the result follows from Lemma 7 and Lemma 2.

### 3.2 Preemptive List Scheduling in Order of α-Points

Up to now we have considered non-preemptive LIST SCHEDULING in order of α-points. For a vector α, we call the schedule constructed by PREEMPTIVE LIST SCHEDULING in order of non-decreasing α-points the preemptive α-schedule. The completion time of job j in the preemptive α-schedule is denoted by C_j^{α,pmtn}.

The reader might wonder why we want to convert a preemptive schedule into another preemptive schedule. Later we will interpret solutions to time-indexed LP relaxations as preemptive schedules. However, the value of these schedules can be arbitrarily bad compared to their LP value. The results derived in this section will help to turn them into provably good preemptive schedules.

Again, to analyze preemptive α-schedules we consider an alternative conversion routine, which we call PREEMPTIVE α-CONVERSION and which is a modification of α-CONVERSION. The difference is that there is no need to cause idle time by removing the α_j-fraction of job j from the machine in step i); we rather process these α_j·p_j units of job j. Thus, we have to postpone the whole processing that is done later than C_j^P(α_j) only by (1 − α_j)·p_j, the remaining processing time of job j, which is then scheduled in (C_j^P(α_j), C_j^P(α_j) + (1 − α_j)·p_j]. This conversion technique was introduced by Goemans, Wein, and Williamson [8] for a single value of α. Figure 2 contains an example illustrating the action of PREEMPTIVE α-CONVERSION. Observe that, as in the non-preemptive case, the order of completion times in the resulting schedule coincides with the order of α-points in P.
Moreover, since the initial schedule P is feasible, the same holds for the resulting schedule by construction.

**Lemma 8.** The completion time of job j in the preemptive α-schedule can be bounded by

C_j^{α,pmtn} ≤ C_j^P(α_j) + Σ_{k: η_k(α_j) ≥ α_k} (1 − η_k(α_j)) p_k.   (3)

In the absence of nontrivial release dates, the same bound holds for the completion time C_j^α of job j in the non-preemptive α-schedule.

*Proof.* Using the same ideas as in the proof of Lemma 7, it can be seen that the right-hand side of (3) is equal to the completion time of job j in the schedule constructed by PREEMPTIVE α-CONVERSION. The only difference to the non-preemptive setting is that no additional idle time is caused by the jobs. Since the order of completion times in the schedule constructed by PREEMPTIVE α-CONVERSION coincides with the order of α-points, the bound in (3) follows from Lemma 2.

Fig. 2. The conversion of the preemptive schedule P by PREEMPTIVE α-CONVERSION and by PREEMPTIVE LIST SCHEDULING in order of non-decreasing α-points for the instance given in Figure 1.

In the absence of nontrivial release dates, PREEMPTIVE LIST SCHEDULING always constructs a non-preemptive schedule that coincides with the non-preemptive list schedule. In particular, C_j^α = C_j^{α,pmtn}, and the bound given in (3) holds for C_j^α too.

Lemma 6, Corollary 1, and Lemma 8 contain all structural insights into (preemptive) α-schedules that are needed to derive the approximation results presented below.

### 3.3 Scheduling in Order of α-Points for Only One α

Corollary 1 and Lemma 6 yield the following generalization of Theorem 2 that was first presented in []:

**Theorem 3.** For each fixed 0 < α ≤ 1, the completion time of job j in the α-schedule can be bounded by

C_j^α ≤ (1 + 1/α) C_j^P(α) ≤ (1 + 1/α) C_j^P.

*Proof.* Corollary 1 and Lemma 6 yield

C_j^α ≤ C_j^P(α) + Σ_{k: η_k(α) ≥ α} (1 + α − η_k(α)) p_k
     ≤ C_j^P(α) + Σ_{k: η_k(α) ≥ α} p_k
     ≤ C_j^P(α) + (1/α) Σ_{k: η_k(α) ≥ α} η_k(α) p_k
     ≤ (1 + 1/α) C_j^P(α).

The second bound in Theorem 3 follows from the definition of α-points. It is shown in [] that the bound given in Theorem 3 is tight.

The following lemma is an analogue of Theorem 3 for preemptive α-schedules.

**Lemma 9.** For each fixed 0 < α ≤ 1, the completion time of job j in the preemptive α-schedule can be bounded by

C_j^{α,pmtn} ≤ (1/α) C_j^P(α) ≤ (1/α) C_j^P.

In the absence of nontrivial release dates, the same bound holds for the completion time C_j^α of job j in the non-preemptive α-schedule.

*Proof.* Lemma 8 and Lemma 6 yield

C_j^{α,pmtn} ≤ C_j^P(α) + Σ_{k: η_k(α) ≥ α} (1 − η_k(α)) p_k
           ≤ C_j^P(α) + (1 − α) Σ_{k: η_k(α) ≥ α} p_k
           ≤ C_j^P(α) + ((1 − α)/α) Σ_{k: η_k(α) ≥ α} η_k(α) p_k
           ≤ (1/α) C_j^P(α).

In the absence of nontrivial release dates, the same bound holds for C_j^α since the result of Lemma 8 can be applied in this case too.

14 List Scheduling in Order of α-points on a Single Machine An e/(e )-Approximation Algorithm for r C In this section we present the result of Chekuri et al. [] on the conversion of preemptive to non-preemptive single machine schedules. Moreover, we discuss a class of instances found by Goemans et al. [7] which shows that this result is tight, that is, it yields a tight worst case bound on the value of an optimal non-preemptive schedule in terms of the optimal value with preemptions allowed. In Theorem 3 the best choice of α seems to be α = yielding the factor 2 bound that has already been determined in Theorem 2. The key insight of Chekuri et al. is that one can get a better (expected) bound by drawing α randomly from [, ] instead of using of a given ob in the α-schedule cannot attain the worst case bound ( ) + α C P (α) for all possible choices of α. Moreover, although the worst case bound is bad for small values of α, the overall α-schedule might be good since the total amount of idle time caused by obs in the first step of α-conversion is small and the schedule is therefore short. only one fixed value of α. The underlying intuition is that the completion time C α Theorem 4. For each ob, let α be chosen from a probability distribution over [, ] with density function f(x) = e x /(e ). Then, the expected completion time E[C α] of ob in the α-schedule can be bounded from above by e e CP, where CP denotes the completion time of ob in the given preemptive schedule P. Proof. To simplify notation we denote η k () by η k. For any fixed α, Corollary and Lemma 6 yield C α C P (α ) + k η k α k ( + αk η k (α ) ) p k = C P (α ) + k η k α k C P ( ηk η k (α ) ) p k + k η k α k ( + α k η k ) p k + k η k α k ( + α k η k ) p k. In order to compute a bound on the expected completion time of ob, we integrate the derived bound on C α over all possible choices of the random variables α k, k J, weighted by the given density function f. 
Since $\int_0^{\eta} f(x)\,(1+x-\eta)\,dx = \frac{\eta}{e-1}$ for each $\eta\in[0,1]$, the expected completion time of job $j$ can be bounded by

$$
E[C_j^{\alpha}] \;\le\; C_j^{P} \;+\; \sum_{k} p_k \int_0^{\eta_k} f(\alpha_k)\,\bigl(1+\alpha_k-\eta_k\bigr)\,d\alpha_k
\;=\; C_j^{P} \;+\; \frac{1}{e-1} \sum_{k} \eta_k\, p_k
\;\le\; \frac{e}{e-1}\, C_j^{P},
$$

where the last inequality uses $\sum_k \eta_k\, p_k \le C_j^{P}$. This concludes the proof. □
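Both ingredients of this proof are easy to reproduce numerically. The sketch below samples α with density $f(x)=e^x/(e-1)$ by inverse transform (the CDF is $F(x)=(e^x-1)/(e-1)$, so $F^{-1}(u)=\ln(1+(e-1)u)$) and confirms the integral identity with a simple midpoint rule:

```python
import math
import random

E = math.e

def sample_alpha(u=None):
    """Inverse-transform sample from the density f(x) = e^x/(e-1) on [0,1]:
    F(x) = (e^x - 1)/(e - 1), hence F^{-1}(u) = ln(1 + (e-1)u)."""
    if u is None:
        u = random.random()
    return math.log(1 + (E - 1) * u)

def lhs(eta, steps=200000):
    """Midpoint rule for the integral of f(x) * (1 + x - eta) over [0, eta]."""
    h = eta / steps
    return h * sum(math.exp((i + 0.5) * h) / (E - 1) * (1 + (i + 0.5) * h - eta)
                   for i in range(steps))

assert abs(sample_alpha(1.0) - 1.0) < 1e-12     # F^{-1}(1) = ln(e) = 1
assert abs(lhs(0.7) - 0.7 / (E - 1)) < 1e-8     # identity: equals eta/(e-1)
```

The density puts more mass near $\alpha=1$, which is exactly what compensates for the bad worst-case bound at small α.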

Theorem 4 is slightly more general than the original result in []. The only condition on the random choice of the vector α in Theorem 4 is that each of its entries $\alpha_j$ has to be drawn with density function $f$ from the interval $[0,1]$. Chekuri et al. only consider the case where $\alpha_j=\alpha$ for each $j\in J$ and α is drawn from $[0,1]$ with density function $f$. However, the analysis also works for arbitrary interdependencies between the different random variables $\alpha_j$, e.g., for jointly or pairwise independent $\alpha_j$. The advantage of using only one random variable α for all jobs is that the resulting randomized approximation algorithm can easily be derandomized. We discuss this issue in more detail below.

Corollary 2. Suppose that α is chosen as described in Theorem 4. Then, the expected value of the α-schedule is bounded from above by $\frac{e}{e-1}$ times the value of the preemptive schedule $P$.

Proof. Using Theorem 4 and linearity of expectations yields

$$E\Bigl[\,\sum_j w_j\, C_j^{\alpha}\Bigr] \;=\; \sum_j w_j\, E[C_j^{\alpha}] \;\le\; \frac{e}{e-1} \sum_j w_j\, C_j^{P}.$$

This concludes the proof. □

Since the preemptive problem $1|r_j,\mathrm{pmtn}|\sum C_j$ can be solved in polynomial time by the shortest remaining processing time (SRPT) rule, computing this optimal solution and converting it as described in Theorem 4 is a randomized approximation algorithm with expected performance guarantee $\frac{e}{e-1}$ for $1|r_j|\sum C_j$.

The variant of this randomized algorithm with only one random variable α for all jobs instead of individual $\alpha_j$'s is of special interest. In fact, starting from a preemptive list schedule $P$ (e.g., the one constructed by the SRPT rule), one can efficiently compute an α-schedule of least objective function value over all α between 0 and 1; we refer to this schedule as the best α-schedule. The following proposition, which can be found in [7], yields a deterministic $\frac{e}{e-1}$-approximation algorithm with running time $O(n^2)$ for the problem $1|r_j|\sum C_j$.

Proposition 1. For a fixed preemptive list schedule $P$, there are at most $n$ different (preemptive) α-schedules; they can be computed in $O(n^2)$ time.

Proof.
As α goes from to, the α-schedule changes only whenever an α-point, say for ob, reaches a time at which ob is preempted. Thus, the total number of changes in the (preemptive) α-schedule is bounded from above by the total number of preemptions. Since a preemption can occur in the preemptive list schedule P only whenever a ob is released, the total number of preemptions is at most n, and the number of (preemptive) α-schedules is at most n. Since each of these (preemptive) α-schedules can be computed in O(n) time, the result on the running time follows. For the weighted scheduling problem r w C, even the preemptive variant is strongly NP-hard, see [24]. However, given a ρ-approximation algorithm for one of the preemptive scheduling problems r, (prec, ) pmtn w C, the conversion technique of Chekuri et al. can be used to design an approximation with performance

ratio $\rho\,\frac{e}{e-1}$ for the corresponding non-preemptive problem. Unfortunately, this does not directly lead to improved approximation results for these problems. In Section 8, however, we present a more sophisticated combination of a 2-approximation algorithm for $1|r_j,\mathrm{prec},\mathrm{pmtn}|\sum w_j C_j$ with this conversion technique by Schulz and Skutella [35], which yields performance guarantee $e$ for the resulting algorithm.

Another consequence of Corollary 2 is the following result on the power of preemption, which can be found in [7].

Theorem 5. For single machine scheduling with release dates and precedence constraints so as to minimize the weighted sum of completion times, the value of an optimal non-preemptive schedule is at most $\frac{e}{e-1}$ times the value of an optimal preemptive schedule, and this bound is tight, even in the absence of precedence constraints.

Proof. Suppose that $P$ is an optimal preemptive schedule. Then, the expected value of an α-schedule, with α chosen as described in Theorem 4, is bounded by $\frac{e}{e-1}$ times the value of the optimal preemptive schedule $P$. In particular, there must exist at least one fixed choice of the random variable α such that the value of the corresponding non-preemptive α-schedule can be bounded from above by $\frac{e}{e-1}\sum_j w_j\, C_j^P$.

To prove the tightness of this bound we consider the following instance with $n\ge2$ jobs. There is one large job, denoted job $n$, and $n-1$ small jobs, denoted $j=1,\dots,n-1$. The large job has processing time $p_n=n$, weight $w_n=1/n$, and release date $r_n=0$. Each of the $n-1$ small jobs $j$ has zero processing time, weight $w_j=\frac{1}{n(n-1)}\bigl(1+\frac1n\bigr)^{n-j}$, and release date $r_j=j$. The optimal preemptive schedule has job $n$ start at time 0, preempted by each of the small jobs; hence its completion times are $C_j=r_j$ for $j=1,\dots,n-1$ and $C_n=n$. Its objective function value approaches $e-1$ for large $n$.
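This tightness instance can be explored numerically. The weight formula used below is a reconstruction and should be read as an assumption of this sketch; under it, the preemptive value tends to $e-1$, while the best schedule among the non-preemptive candidates $C^k$ (small jobs $1,\dots,k$ at their release dates, job $n$ in $[k,k+n]$, the rest at time $k+n$) tends to $e$:

```python
from math import e

def power_of_preemption(n):
    """Instance: job n with p = n, w = 1/n, r = 0; small jobs j = 1..n-1 with
    p = 0, r_j = j and (reconstructed, assumed) weights
    w_j = (1 + 1/n)^(n-j) / (n (n-1))."""
    w = {j: (1 + 1 / n) ** (n - j) / (n * (n - 1)) for j in range(1, n)}
    w[n] = 1 / n

    # preemptive: job n runs in [0, n], interrupted at time j by each small job j
    pre = sum(w[j] * j for j in range(1, n)) + w[n] * n

    def value(k):
        # schedule C^k: jobs 1..k finish at their release dates,
        # job n and jobs k+1..n-1 finish at time k + n
        return (sum(w[j] * j for j in range(1, k + 1))
                + (k + n) * (sum(w[j] for j in range(k + 1, n)) + w[n]))

    non = min(value(k) for k in range(n))
    return pre, non

pre, non = power_of_preemption(1000)
assert abs(pre - (e - 1)) < 0.01
assert abs(non - e) < 0.01
```

The ratio `non / pre` then approaches $e/(e-1)\approx1.582$, matching the bound of Theorem 5.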
Now consider an optimal non-preemptive schedule $C$ and let $k=C_n-n$, so $k$ is the number of small jobs that can be processed before job $n$. It is then optimal to process all these small jobs $1,\dots,k$ at their release dates and to start processing job $n$ at date $r_k=k$, just after job $k$. It is also optimal to process all remaining jobs $k+1,\dots,n-1$ at date $k+n$, just after job $n$. Let $C^k$ denote the resulting schedule, that is, $C_j^k=j$ for all $j\le k$, and $C_j^k=k+n$ otherwise. A short computation shows that the objective function value of $C^k$ is minimized for $k=n-1$ and that, as $n$ grows large, the optimal non-preemptive cost approaches $e$; since the preemptive value approaches $e-1$, the ratio approaches $e/(e-1)$. □

Unfortunately, the result of Theorem 5 cannot easily be extended to the case of unit weights. The instance discussed in the proof of the theorem relies on the large number of jobs with processing time zero whose weights are small compared to the weight of the large job.

5 Time-Indexed LP Relaxations

As already mentioned in the last section, even the preemptive variants of the scheduling problems $1|r_j|\sum w_j C_j$ and $1|\mathrm{prec}|\sum w_j C_j$ are NP-hard. In order to give approximation results for these problems, we consider a time-indexed LP relaxation of

$1|r_j,\mathrm{prec},\mathrm{pmtn}|\sum w_j C_j$ whose optimal value serves as a surrogate for the true optimum in our estimations. Moreover, we interpret a solution to this relaxation as a so-called fractional preemptive schedule, which can be converted into a non-preemptive one using the techniques discussed in the previous sections.

For technical reasons only, we assume in the following that all processing times of jobs are positive. This is only done to keep the presentation of the techniques and results as simple as possible. Using additional constraints in the LP relaxation, all results presented can be carried over to the case with zero processing times.

The following LP relaxation is an immediate extension of a time-indexed LP proposed by Dyer and Wolsey [3] for the problem without precedence constraints. Here, time is discretized into the periods $(t,t+1]$, $t=0,1,\dots,T$, where $T+1$ is the planning horizon, say $T+1:=\max_{j\in J} r_j+\sum_{j\in J} p_j$. We have a variable $y_{jt}$ for every job $j$ and every time period $(t,t+1]$ which is set to 1 if $j$ is being processed in that time period and to 0 otherwise. The LP relaxation (LP) is as follows:

$$\text{minimize}\quad \sum_{j\in J} w_j\, C_j^{LP}$$

subject to

$$\sum_{t=r_j}^{T} y_{jt} \;=\; p_j \quad\text{for all } j\in J, \qquad(4)$$

$$\sum_{j\in J} y_{jt} \;\le\; 1 \quad\text{for } t=0,\dots,T, \qquad(5)$$

$$\frac{1}{p_j}\sum_{l=r_j}^{t} y_{jl} \;\ge\; \frac{1}{p_k}\sum_{l=r_k}^{t} y_{kl} \quad\text{for all } j\prec k \text{ and } t=0,\dots,T, \qquad(6)$$

$$C_j^{LP} \;=\; \frac{p_j}{2} + \frac{1}{p_j}\sum_{t=r_j}^{T} y_{jt}\,\bigl(t+\tfrac12\bigr) \quad\text{for all } j\in J, \qquad(7)$$

$$y_{jt} = 0 \quad\text{for all } j\in J \text{ and } t=0,\dots,r_j-1,$$

$$y_{jt} \ge 0 \quad\text{for all } j\in J \text{ and } t=r_j,\dots,T.$$

Equations (4) say that all fractions of a job, which are processed in accordance with its release date, must sum up to the whole job. Since the machine can process only one job at a time, the machine capacity constraints (5) must be satisfied.
Constraints (6) say that at any point $t+1$ in time the completed fraction of job $k$ must not be larger than the completed fraction of its predecessor $j$. Consider an arbitrary feasible schedule $P$ and assign values to the LP variables $y_{jt}$ as defined above, that is, $y_{jt}=1$ if $j$ is being processed in the time interval $(t,t+1]$, and $y_{jt}=0$ otherwise. If job $j$ is being continuously processed between $C_j^P-p_j$ and $C_j^P$ on the machine, then the expression for $C_j^{LP}$ in (7) corresponds to the real completion time $C_j^P$. If $j$ is not being continuously processed but preempted once or several times, the expression for $C_j^{LP}$ is a lower bound on the real completion time. A more precise discussion of this matter is given in Lemma 10 a) below. Hence, (LP) is a relaxation of the scheduling problem under consideration.

A feasible solution $y$ to (LP) does in general not correspond to a feasible preemptive schedule. Consider the following example:

Example 1. Let $J=\{1,\dots,n\}$ with $p_j=1$ for all $j\in J$, $w_j=0$ for $j\ne n$, $w_n=1$, and $j\prec n$ for $j\ne n$. We get a feasible solution to (LP) if we set $y_{jt}=1/n$ for all $j\in J$ and $t=0,\dots,T=n-1$.

In the solution to (LP) given in Example 1, several jobs share the capacity of the single machine in each time interval. Moreover, the precedence relations between job $n$ and all other jobs are not obeyed, since fractions of job $n$ are processed on the machine before its predecessors are completed. However, the solution fulfills constraint (6), such that at any point in time the completed fraction of job $n$ is not larger than the completed fraction of each of its predecessors.

Phillips et al. [29] introduced the term fractional preemptive schedule for a schedule in which a job can share the capacity of one machine with several other jobs. We extend this notion to the case with precedence constraints and call a fractional preemptive schedule $P$ feasible if no job is processed before its release date and, at any point in time, the completed fraction of job $k$ is not larger than the completed fraction of each of its predecessors. Carrying over the definition of α-points to fractional preemptive schedules, the latter condition is equivalent to $C_j^P(\alpha)\le C_k^P(\alpha)$ for all $j\prec k$ and $0<\alpha\le1$.

Corollary 3. The results presented in Section 3, in particular Lemma 6, Corollary 1, and Lemma 9, still hold if $P$ is a fractional preemptive schedule.

Proof. The proofs of the above-mentioned results can directly be carried over to the more general setting of fractional preemptive schedules. □

We always identify a feasible solution to (LP) with the corresponding feasible fractional preemptive schedule and vice versa. We mutually use the interpretation that seems more suitable for our purposes. The expression for $C_j^{LP}$ in (7) is called the LP completion time of job $j$.
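Example 1 can be made concrete: the fractional solution $y_{jt}=1/n$ saturates the machine in every unit slot, yet the LP completion time of job $n$ evaluates to only $(n+1)/2$, roughly half the time $n$ at which job $n$ actually completes in the corresponding fractional preemptive schedule (an illustrative sketch):

```python
def lp_completion_time(p_j, y_row):
    """C_j^LP = p_j/2 + (1/p_j) * sum_t y_{jt} * (t + 1/2), cf. (7)."""
    return p_j / 2 + sum(y * (t + 0.5) for t, y in enumerate(y_row)) / p_j

n = 10
# Example 1: p_j = 1 for all jobs, y_{jt} = 1/n for t = 0..n-1
y = {j: [1 / n] * n for j in range(1, n + 1)}

# machine capacity (5): each unit slot carries total load n * (1/n) = 1
assert all(abs(sum(y[j][t] for j in y) - 1) < 1e-9 for t in range(n))

c_lp = lp_completion_time(1, y[n])
assert abs(c_lp - (n + 1) / 2) < 1e-9   # (n+1)/2, far below C_n = n
```

Every job gets the same LP completion time $(n+1)/2$ here, even though in any feasible schedule the successor job $n$ cannot finish before time $n$.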
The following lemma highlights the relation between the LP completion times and the α-points of jobs in the corresponding fractional preemptive schedule for a feasible solution to (LP). The observation in a) is due to Goemans [6]; it says that the LP completion time of job $j$ is the sum of half its processing time and its mean busy time. An analogous result to b) was given in [22, Lemma 2.1] for a somewhat different LP.

Lemma 10. Consider a feasible solution $y$ to (LP) and the corresponding feasible fractional preemptive schedule $P$. Then, for each job $j$:

a) $\displaystyle C_j^{LP} \;=\; \frac{p_j}{2}+\frac{1}{p_j}\sum_{t=r_j}^{T} y_{jt}\,\bigl(t+\tfrac12\bigr) \;=\; \frac{p_j}{2}+\int_0^1 C_j^P(\alpha)\,d\alpha \;\le\; C_j^P,$ and equality holds if and only if $C_j^P=C_j^P(0)+p_j$;

b) $\displaystyle C_j^P(\alpha)\;\le\;\frac{1}{1-\alpha}\,C_j^{LP}$ for any constant $\alpha\in[0,1)$, and this bound is tight.

Proof. For a job $j\in J$, we denote by $\zeta_{jt}$, $t=r_j,\dots,T+1$, the fraction of $j$ that is finished in the fractional preemptive schedule $P$ by time $t$. Since $0=\zeta_{j r_j}\le\zeta_{j,r_j+1}\le\dots\le\zeta_{j,T+1}=1$, we get

$$
\int_0^1 C_j^P(\alpha)\,d\alpha \;=\; \sum_{t=r_j}^{T}\int_{\zeta_{jt}}^{\zeta_{j,t+1}} C_j^P(\alpha)\,d\alpha
\;=\; \sum_{t=r_j}^{T}\bigl(\zeta_{j,t+1}-\zeta_{jt}\bigr)\bigl(t+\tfrac12\bigr)
\;=\; \frac{1}{p_j}\sum_{t=r_j}^{T} y_{jt}\,\bigl(t+\tfrac12\bigr),
$$

since $C_j^P(\alpha)=t+\frac{\alpha-\zeta_{jt}}{\zeta_{j,t+1}-\zeta_{jt}}$ for $\alpha\in(\zeta_{jt},\zeta_{j,t+1}]$. The machine capacity constraints (5) yield $C_j^P(\alpha)\le C_j^P-(1-\alpha)\,p_j$ for $\alpha\in[0,1]$; thus

$$\int_0^1 C_j^P(\alpha)\,d\alpha \;\le\; C_j^P - p_j\int_0^1(1-\alpha)\,d\alpha \;=\; C_j^P-\frac{p_j}{2}.$$

Equality holds if and only if $C_j^P(\alpha)=C_j^P-(1-\alpha)\,p_j$ for all α, which is equivalent to $C_j^P(0)=C_j^P-p_j$. This completes the proof of a).

As a consequence, we get for $\alpha\in[0,1)$

$$(1-\alpha)\,C_j^P(\alpha)\;\le\;\int_\alpha^1 C_j^P(x)\,dx\;\le\;\int_0^1 C_j^P(x)\,dx\;\le\;C_j^{LP}.$$

In order to prove the tightness of this bound, we consider a job $j$ with $p_j=1$, $r_j=0$, and an LP solution satisfying $y_{j0}=\alpha-\epsilon$ and $y_{jT}=1-\alpha+\epsilon$, where $\epsilon>0$ is small. This yields $C_j^{LP}=1+(1-\alpha+\epsilon)\,T$ and $C_j^P(\alpha)\ge T$. Thus, for ε arbitrarily small and $T$ arbitrarily large, the given bound gets tight. □

As a result of Lemma 10 b), the value of the fractional preemptive schedule given by a solution to (LP) can be arbitrarily bad compared to its LP value. Nevertheless, one can use the information that is contained in the structure of an LP solution in order to construct a feasible schedule whose value can be bounded in terms of the LP value, as will be shown in the following sections.

Notice that we cannot solve (LP) in polynomial time, but only in pseudo-polynomial time, since $T$ and therefore the number of variables $y_{jt}$ is only pseudo-polynomial in the input size of the problem. Schulz and Skutella [37] describe a closely related and only slightly worse LP relaxation of polynomial size in the general context of unrelated parallel machines. The idea is to change to new variables which are not associated with exponentially many time intervals of length 1, but rather with a polynomial number of intervals of geometrically increasing size. We omit the technical details in this survey.
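Lemma 10 a) can be checked numerically on a small preemptive schedule: computing $C_j^{LP}$ from the $y$-variables and computing $\frac{p_j}{2}+\int_0^1 C_j^P(\alpha)\,d\alpha$ piecewise give the same number, and that number coincides with $C_j^P$ exactly when the job is never preempted (an illustrative sketch):

```python
from fractions import Fraction as F

def clp_from_slots(p_j, slots):
    """C_j^LP via (7): job j occupies the unit slots (t, t+1] listed in `slots`."""
    return F(p_j, 2) + sum(F(t) + F(1, 2) for t in slots) / p_j

def clp_from_alpha_points(p_j, pieces):
    """p_j/2 + integral of C_j^P(alpha): on a processing piece [a, b] the
    alpha-point grows linearly, so the piece contributes
    ((b - a) / p_j) * (a + b) / 2 to the integral."""
    return F(p_j, 2) + sum(F(b - a) * F(a + b, 2) for a, b in pieces) / p_j

# a job with p_j = 3, processed in [0,1] and [2,4] (one preemption)
a1 = clp_from_slots(3, [0, 2, 3])
a2 = clp_from_alpha_points(3, [(0, 1), (2, 4)])
assert a1 == a2 == F(11, 3)                     # both 11/3, strictly below C_j^P = 4
# without preemption (processing in [1,4]) the lemma gives equality with C_j^P
assert clp_from_alpha_points(3, [(1, 4)]) == 4
```

Exact rationals again avoid any rounding issues in the comparison.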

In the absence of precedence constraints, it was indicated by Dyer and Wolsey [3] that (LP) can be solved in $O(n\log n)$ time. Goemans [5] developed the following result (see also [7]).

Theorem 6. For instances of the problems $1|r_j(,\mathrm{pmtn})|\sum w_j C_j$, the linear programming relaxation (LP) can be solved in $O(n\log n)$ time, and the preemptive list schedule in order of non-decreasing ratios $p_j/w_j$ corresponds to an optimal solution.

Proof. In the absence of precedence relations, constraints (6) are missing in (LP). Moreover, we can eliminate the variables $C_j^{LP}$ from the relaxation by plugging (7) into the objective function. What remains is a transportation problem and, as a result, $y$ can be assumed to be integral. The remaining part of the proof is based on an interchange argument: Consider any optimal 0/1-solution $y$ to (LP) and suppose that $j<k$ (such that $p_j/w_j\le p_k/w_k$), $r_j\le t<\tau$, and $y_{kt}=y_{j\tau}=1$. Then, replacing $y_{kt}=y_{j\tau}=1$ by 0 and $y_{k\tau}=y_{jt}=0$ by 1 gives another feasible solution to (LP) with an increase in the objective function of $(\tau-t)\bigl(\frac{w_k}{p_k}-\frac{w_j}{p_j}\bigr)\le0$. Thus, the new solution is also optimal, and through iterative application of this interchange argument we arrive at the solution that corresponds to the preemptive list schedule in order of non-decreasing ratios $p_j/w_j$. □

It follows from Theorem 6 that, in the absence of precedence constraints and release dates, an optimal single machine schedule corresponds to an optimal solution to (LP). Moreover, if we allow release dates and preemption, the optimal solution to (LP) described in Theorem 6 is a feasible schedule that minimizes the weighted sum of mean busy times. It is also shown in [7] that, in the absence of precedence constraints, the LP relaxation (LP) is equivalent to an LP in completion time variables which defines a supermodular polyhedron.
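The optimal LP solution of Theorem 6 — the preemptive list schedule in order of non-decreasing $p_j/w_j$ — can be sketched as follows (illustrative event-driven code, not an $O(n\log n)$ implementation): at any moment, run the released unfinished job with the smallest ratio, preempting whenever a newly released job is more urgent.

```python
def preemptive_ratio_schedule(r, p, w):
    """Preemptive list schedule in order of non-decreasing p_j / w_j."""
    order = sorted(p, key=lambda j: (p[j] / w[j], j))
    releases = sorted(set(r.values()))
    rem, t, C = dict(p), 0, {}
    while rem:
        avail = [j for j in order if j in rem and r[j] <= t]
        if not avail:
            t = min(r[j] for j in rem)          # idle until the next release
            continue
        j = avail[0]                            # most urgent available job
        upcoming = [s for s in releases if s > t]
        run = rem[j] if not upcoming else min(rem[j], upcoming[0] - t)
        t += run
        rem[j] -= run
        if rem[j] == 0:
            C[j] = t
            del rem[j]
    return C

# job 2 has the smaller ratio (1/2 vs 3) and preempts job 1 when released
C = preemptive_ratio_schedule({1: 0, 2: 1}, {1: 3, 2: 1}, {1: 1, 2: 2})
assert C == {2: 2, 1: 4}
```

Between consecutive release dates the running job is simply the available one with the smallest ratio, which is exactly the structure the interchange argument of the proof produces.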
5.1 Another Time-Indexed LP Relaxation

For the non-preemptive problem $1|r_j|\sum w_j C_j$, Dyer and Wolsey [3] also proposed another time-indexed LP relaxation, which can easily be extended to the setting with precedence constraints; see Figure 3. Again, for each job $j$ and each integral point in time $t=0,\dots,T$, we introduce a decision variable $x_{jt}$. However, now the variable is set to 1 if $j$ starts processing at time $t$ and to 0 otherwise. Note that the $x_{jt}$ variables do not have the preemptive flavor of the $y_{jt}$ variables and therefore lead to an LP relaxation for non-preemptive single machine scheduling only. In the absence of precedence constraints, Dyer and Wolsey showed that this relaxation is stronger than (LP). In fact, even for instances with precedence constraints, every feasible solution to (LP′) can easily be transformed into a solution to (LP) of the same value by assigning

$$y_{jt} \;:=\; \sum_{l=\max\{0,\,t-p_j+1\}}^{t} x_{jl} \qquad\text{for } j\in J \text{ and } t=0,\dots,T.$$

In particular, we can interpret a feasible solution to (LP′) as a feasible fractional preemptive schedule, and the results in Lemma 10 also hold in this case.
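The transformation from (LP′) to (LP) indeed preserves the objective value; for a 0/1-solution describing a non-preemptive schedule, a job starting at time $s$ satisfies $C_j^{LP}=s+p_j=C_j^{LP'}$. A small illustrative check:

```python
def y_from_x(x, p, T):
    """(LP') -> (LP): y_{jt} = sum of x_{jl} for l in [max(0, t - p_j + 1), t]."""
    return {j: [sum(x[j][l] for l in range(max(0, t - p[j] + 1), t + 1))
                for t in range(T)] for j in x}

def c_lp(p_j, y_row):        # (7): p_j/2 + (1/p_j) * sum_t y_{jt} (t + 1/2)
    return p_j / 2 + sum(y * (t + 0.5) for t, y in enumerate(y_row)) / p_j

def c_lp_prime(p_j, x_row):  # completion time in (LP'): p_j + sum_t t x_{jt}
    return p_j + sum(t * v for t, v in enumerate(x_row))

# job 1 (p = 2) starts at time 0, job 2 (p = 3) starts at time 2; horizon T = 5
p = {1: 2, 2: 3}
x = {1: [1, 0, 0, 0, 0], 2: [0, 0, 1, 0, 0]}
y = y_from_x(x, p, 5)
assert y[2] == [0, 0, 1, 1, 1]       # job 2 occupies slots (2,3], (3,4], (4,5]
for j in p:
    assert abs(c_lp(p[j], y[j]) - c_lp_prime(p[j], x[j])) < 1e-9
```

The same transformation applied to a fractional $x$ spreads each started fraction over the following $p_j$ unit slots, which is what justifies reading an (LP′) solution as a fractional preemptive schedule.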

$$\text{minimize}\quad \sum_{j\in J} w_j\, C_j^{LP'}$$

subject to

$$\sum_{t=r_j}^{T} x_{jt} \;=\; 1 \quad\text{for all } j\in J,$$

$$\sum_{j\in J}\;\sum_{l=\max\{0,\,t-p_j+1\}}^{t} x_{jl} \;\le\; 1 \quad\text{for } t=0,\dots,T, \qquad(8)$$

$$\sum_{l=r_j}^{t} x_{jl} \;\ge\; \sum_{l=r_k}^{t+p_j} x_{kl} \quad\text{for all } j\prec k \text{ and } t=r_j,\dots,T-p_j, \qquad(9)$$

$$C_j^{LP'} \;=\; p_j + \sum_{t=r_j}^{T} t\,x_{jt} \quad\text{for all } j\in J,$$

$$x_{jt} = 0 \quad\text{for all } j\in J \text{ and } t=0,\dots,r_j-1, \qquad(10)$$

$$x_{jt} \ge 0 \quad\text{for all } j\in J \text{ and } t=r_j,\dots,T.$$

Fig. 3. The time-indexed LP relaxation (LP′)

5.2 Combinatorial Relaxations

In this subsection we show that the LP relaxation (LP′) is equivalent to a combinatorial relaxation of $1|r_j,\mathrm{prec}|\sum w_j C_j$. We interpret an instance of this scheduling problem as a game for one person who is given the set of jobs $J$ and the single machine and wants to find a feasible schedule of minimum value. We consider the following relaxation of this game: Assume that there are $k$ players $1,\dots,k$ instead of only one. Each player $i$ is given one machine and a set of jobs $J_i=J$. Moreover, the players are allowed to cooperate by exchanging jobs. To be more precise, a job of player $i$ can be scheduled on an arbitrary machine rather than only on $i$'s machine. However, each player has to respect the release dates and precedence constraints of his jobs. The aim is to minimize the average, over all players, of the weighted sum of completion times of their jobs. This relaxation is called the $k$-player relaxation; a feasible solution is a $k$-player schedule. Moreover, if $k$ is not fixed but can be chosen, we call the resulting relaxation the multi-player relaxation, and a feasible solution is a multi-player schedule. As an example, consider the following single machine instance without precedence constraints consisting of four jobs (with release dates $r_j$, processing times $p_j$, and weights $w_j$):


### Weighted Sum Coloring in Batch Scheduling of Conflicting Jobs

Weighted Sum Coloring in Batch Scheduling of Conflicting Jobs Leah Epstein Magnús M. Halldórsson Asaf Levin Hadas Shachnai Abstract Motivated by applications in batch scheduling of jobs in manufacturing

### A binary search algorithm for a special case of minimizing the lateness on a single machine

Issue 3, Volume 3, 2009 45 A binary search algorithm for a special case of minimizing the lateness on a single machine Nodari Vakhania Abstract We study the problem of scheduling jobs with release times

### A Note on Maximum Independent Sets in Rectangle Intersection Graphs

A Note on Maximum Independent Sets in Rectangle Intersection Graphs Timothy M. Chan School of Computer Science University of Waterloo Waterloo, Ontario N2L 3G1, Canada tmchan@uwaterloo.ca September 12,

### Single machine models: Maximum Lateness -12- Approximation ratio for EDD for problem 1 r j,d j < 0 L max. structure of a schedule Q...

Lecture 4 Scheduling 1 Single machine models: Maximum Lateness -12- Approximation ratio for EDD for problem 1 r j,d j < 0 L max structure of a schedule 0 Q 1100 11 00 11 000 111 0 0 1 1 00 11 00 11 00

### The Goldberg Rao Algorithm for the Maximum Flow Problem

The Goldberg Rao Algorithm for the Maximum Flow Problem COS 528 class notes October 18, 2006 Scribe: Dávid Papp Main idea: use of the blocking flow paradigm to achieve essentially O(min{m 2/3, n 1/2 }

### 24. The Branch and Bound Method

24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

### The Online Set Cover Problem

The Online Set Cover Problem Noga Alon Baruch Awerbuch Yossi Azar Niv Buchbinder Joseph Seffi Naor ABSTRACT Let X = {, 2,..., n} be a ground set of n elements, and let S be a family of subsets of X, S

### 8.1 Makespan Scheduling

600.469 / 600.669 Approximation Algorithms Lecturer: Michael Dinitz Topic: Dynamic Programing: Min-Makespan and Bin Packing Date: 2/19/15 Scribe: Gabriel Kaptchuk 8.1 Makespan Scheduling Consider an instance

### Lecture 3: Linear Programming Relaxations and Rounding

Lecture 3: Linear Programming Relaxations and Rounding 1 Approximation Algorithms and Linear Relaxations For the time being, suppose we have a minimization problem. Many times, the problem at hand can

### Bi-objective approximation scheme for makespan and reliability optimization on uniform parallel machines

Bi-objective approximation scheme for makespan and reliability optimization on uniform parallel machines Emmanuel Jeannot 1, Erik Saule 2, and Denis Trystram 2 1 INRIA-Lorraine : emmanuel.jeannot@loria.fr

### Lecture Notes 12: Scheduling - Cont.

Online Algorithms 18.1.2012 Professor: Yossi Azar Lecture Notes 12: Scheduling - Cont. Scribe:Inna Kalp 1 Introduction In this Lecture we discuss 2 scheduling models. We review the scheduling over time

### Partitioned real-time scheduling on heterogeneous shared-memory multiprocessors

Partitioned real-time scheduling on heterogeneous shared-memory multiprocessors Martin Niemeier École Polytechnique Fédérale de Lausanne Discrete Optimization Group Lausanne, Switzerland martin.niemeier@epfl.ch

### ! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm.

Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of

### Factoring & Primality

Factoring & Primality Lecturer: Dimitris Papadopoulos In this lecture we will discuss the problem of integer factorization and primality testing, two problems that have been the focus of a great amount

### Job Scheduling Techniques for Distributed Systems with Heterogeneous Processor Cardinality

Job Scheduling Techniques for Distributed Systems with Heterogeneous Processor Cardinality Hung-Jui Chang Jan-Jan Wu Department of Computer Science and Information Engineering Institute of Information

### JUST-IN-TIME SCHEDULING WITH PERIODIC TIME SLOTS. Received December May 12, 2003; revised February 5, 2004

Scientiae Mathematicae Japonicae Online, Vol. 10, (2004), 431 437 431 JUST-IN-TIME SCHEDULING WITH PERIODIC TIME SLOTS Ondřej Čepeka and Shao Chin Sung b Received December May 12, 2003; revised February

### 3 Does the Simplex Algorithm Work?

Does the Simplex Algorithm Work? In this section we carefully examine the simplex algorithm introduced in the previous chapter. Our goal is to either prove that it works, or to determine those circumstances

### Chapter 6. Cuboids. and. vol(conv(p ))

Chapter 6 Cuboids We have already seen that we can efficiently find the bounding box Q(P ) and an arbitrarily good approximation to the smallest enclosing ball B(P ) of a set P R d. Unfortunately, both

### Analysis of Approximation Algorithms for k-set Cover using Factor-Revealing Linear Programs

Analysis of Approximation Algorithms for k-set Cover using Factor-Revealing Linear Programs Stavros Athanassopoulos, Ioannis Caragiannis, and Christos Kaklamanis Research Academic Computer Technology Institute

### Scheduling a sequence of tasks with general completion costs

Scheduling a sequence of tasks with general completion costs Francis Sourd CNRS-LIP6 4, place Jussieu 75252 Paris Cedex 05, France Francis.Sourd@lip6.fr Abstract Scheduling a sequence of tasks in the acceptation

### 4.3 Limit of a Sequence: Theorems

4.3. LIMIT OF A SEQUENCE: THEOREMS 5 4.3 Limit of a Sequence: Theorems These theorems fall in two categories. The first category deals with ways to combine sequences. Like numbers, sequences can be added,

### 1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form

1. LINEAR EQUATIONS A linear equation in n unknowns x 1, x 2,, x n is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b, where a 1, a 2,..., a n, b are given real numbers. For example, with x and

### Week 1: Introduction to Online Learning

Week 1: Introduction to Online Learning 1 Introduction This is written based on Prediction, Learning, and Games (ISBN: 2184189 / -21-8418-9 Cesa-Bianchi, Nicolo; Lugosi, Gabor 1.1 A Gentle Start Consider

### Principle of Data Reduction

Chapter 6 Principle of Data Reduction 6.1 Introduction An experimenter uses the information in a sample X 1,..., X n to make inferences about an unknown parameter θ. If the sample size n is large, then

### 5. Convergence of sequences of random variables

5. Convergence of sequences of random variables Throughout this chapter we assume that {X, X 2,...} is a sequence of r.v. and X is a r.v., and all of them are defined on the same probability space (Ω,

### Algorithm Design and Analysis

Algorithm Design and Analysis LECTURE 27 Approximation Algorithms Load Balancing Weighted Vertex Cover Reminder: Fill out SRTEs online Don t forget to click submit Sofya Raskhodnikova 12/6/2011 S. Raskhodnikova;

### A simpler and better derandomization of an approximation algorithm for Single Source Rent-or-Buy

A simpler and better derandomization of an approximation algorithm for Single Source Rent-or-Buy David P. Williamson Anke van Zuylen School of Operations Research and Industrial Engineering, Cornell University,

### arxiv:1112.0829v1 [math.pr] 5 Dec 2011

How Not to Win a Million Dollars: A Counterexample to a Conjecture of L. Breiman Thomas P. Hayes arxiv:1112.0829v1 [math.pr] 5 Dec 2011 Abstract Consider a gambling game in which we are allowed to repeatedly

### Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

### Algorithms for Flow Time Scheduling

Algorithms for Flow Time Scheduling Nikhil Bansal December 2003 School of Computer Science Computer Science Department Carnegie Mellon University Pittsburgh PA 15213 Thesis Committee: Avrim Blum Chair

### Single machine parallel batch scheduling with unbounded capacity

Workshop on Combinatorics and Graph Theory 21th, April, 2006 Nankai University Single machine parallel batch scheduling with unbounded capacity Yuan Jinjiang Department of mathematics, Zhengzhou University

### What is Linear Programming?

Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to

### Analysis of Micro-Macro Transformations of Railway Networks

Konrad-Zuse-Zentrum für Informationstechnik Berlin Takustraße 7 D-14195 Berlin-Dahlem Germany MARCO BLANCO THOMAS SCHLECHTE Analysis of Micro-Macro Transformations of Railway Networks Zuse Institute Berlin

### Lecture 1: Course overview, circuits, and formulas

Lecture 1: Course overview, circuits, and formulas Topics in Complexity Theory and Pseudorandomness (Spring 2013) Rutgers University Swastik Kopparty Scribes: John Kim, Ben Lund 1 Course Information Swastik

### ARTICLE IN PRESS. European Journal of Operational Research xxx (2004) xxx xxx. Discrete Optimization. Nan Kong, Andrew J.

A factor 1 European Journal of Operational Research xxx (00) xxx xxx Discrete Optimization approximation algorithm for two-stage stochastic matching problems Nan Kong, Andrew J. Schaefer * Department of

### WORST-CASE PERFORMANCE ANALYSIS OF SOME APPROXIMATION ALGORITHMS FOR MINIMIZING MAKESPAN AND FLOWTIME

WORST-CASE PERFORMANCE ANALYSIS OF SOME APPROXIMATION ALGORITHMS FOR MINIMIZING MAKESPAN AND FLOWTIME PERUVEMBA SUNDARAM RAVI, LEVENT TUNÇEL, MICHAEL HUANG Abstract. In 1976, Coffman and Sethi conjectured

### 1 Introduction. Linear Programming. Questions. A general optimization problem is of the form: choose x to. max f(x) subject to x S. where.

Introduction Linear Programming Neil Laws TT 00 A general optimization problem is of the form: choose x to maximise f(x) subject to x S where x = (x,..., x n ) T, f : R n R is the objective function, S