Design and Implementation of a POSIX Compliant Sporadic Server for the Linux Kernel


Dario Faggioli, Antonio Mancina, Fabio Checconi, Giuseppe Lipari
ReTiS Lab, Scuola Superiore Sant'Anna, CEIIC
via G. Moruzzi 1, Pisa (Italy)
{d.faggioli, a.mancina, f.checconi,

Abstract

Increasing interest in real-time support in general purpose operating systems has driven a lot of development effort inside the Linux kernel community. Thus, a Linux based system may be a suitable platform to run heterogeneous real-time and non real-time, periodic and aperiodic applications. Linux is a POSIX/Unix-like system, so real-time tasks are supported by means of the well established and powerful techniques of fixed priority preemptive scheduling. When dealing with real-time aperiodic tasks under fixed priority scheduling, an effective mechanism that provides fast response times without affecting the scheduling of the periodic tasks is the Sporadic Server. Such an algorithm is also part of the POSIX real-time extensions, but it is not yet supported by the Linux kernel. For all these reasons, we implemented the POSIX SCHED_SPORADIC scheduling policy in the Linux kernel, after having extended it to support hierarchical scheduling. Moreover, since we think this could be a useful feature, we are also submitting it to the community, asking for inclusion in the mainline kernel distribution. In this paper we describe our motivations, the implementation and some preliminary experimental results.

1 Introduction

All around us we can find a great number of real-time systems, silently running even right now, even if we are unaware of them. For example, think about flight control and defense systems, which have to respect timing constraints dictated by the controlled environment and by the validity of the sensor-acquired data.
Moreover, there are also systems where the timing constraints come from the need to provide the user with a good level of Quality of Service (QoS), for example multimedia and streaming applications, video games, entertainment systems, etc. Real-time workloads are known to need predictability rather than throughput or fairness. Thus, they need to run on computer systems where everything, including the operating system behavior, is fully predictable as well. There exist many commercial or research kernels where this is done to the maximum possible extent, e.g., VxWorks or Erika, and SHaRK or MarteOS [5, 8, 4, 9]. The issue with them is that they provide a special execution environment, able to run only a small set of applications, sometimes even the single application for which they have been configured. Conversely, especially because of the dramatic increase in the performance of microprocessors and other hardware devices, it is becoming common to integrate real-time support into so-called general purpose operating systems. This way, the large amount of off-the-shelf software available can be used for user interfaces or other non real-time activities. In these cases, although full predictability of all kernel code paths is often impossible to guarantee, it is required that the operating system introduce as little overhead as possible, so that the additional latency the real-time activities have to pay is kept to a minimum. If this is done, supporting soft, and perhaps also some hard, real-time workloads could become possible. This work has been partially supported by the European projects FRESCOR FP6/2005/IST/ and IRMOS FP7/2008/ICT/

In this respect, Linux is becoming the preferred choice, due to the facts that it has an open source license, it has a huge base of enthusiastic programmers, it is available for an astonishing number of architectures, and there are thousands of applications running on it. Furthermore, the Linux kernel is also being enriched with more and more real-time capabilities [18], usually proposed as separate patches, which are progressively being integrated into the main branch. For example, some developers are maintaining the rt-preempt patch, which greatly reduces the non-preemptable code sections, thus lowering the worst-case latencies. Moreover, support for priority inheritance has been added and can also be extended to in-kernel locking primitives, and a great amount of interrupt handling code has been moved to schedulable threads. Alternatively, more invasive modifications of the Linux kernel exist as separate projects, e.g., RTLinux, RTAI and Xenomai [11, 12, 14]. They all share a common approach: they introduce a layer between the OS and the hardware, with the aim of totally separating the real-time and the non real-time execution environments. Some more details on each of them follow.

RTLinux: originally developed by Victor Yodaiken at the New Mexico Institute of Mining and Technology, it consists of a small real-time kernel running alongside an unmodified version of the Linux kernel, which runs at the lowest priority.

RTAI: the Real-Time Application Interface, born at the Politecnico di Milano. It adopts an approach similar to RTLinux's, by means of the Adeos patch [27] and a slightly modified version of Linux.

Xenomai: it overtook RTAI since it provides a much cleaner and more elegant code structure and interface, but it suffers from a slightly higher worst-case latency when comparing IRQ dispatching and syscalls. However, this is negligible with respect to the advantages that come with it, as far as tracking problems and enriching the code with ports to new architectures are concerned.
Coming back to theoretical real-time issues, a huge amount of research has been carried out on real-time scheduling, and it is now possible to design a system that is guaranteed to meet its timing constraints. The most analyzed theoretical model is made up of a set of periodic activities, i.e., tasks to be performed at some specified and usually constant rate. One well established framework, dating back to the early 1970s [29], that has been proposed to tackle the issues raised by this situation is preemptive fixed priority scheduling theory. Obviously, real-time systems, especially in embedded and control environments, may contain both periodic and aperiodic activities, and fixed priority scheduling theory has been extended to deal with that as well. The real-time extensions to the well known POSIX [3] standard also rely on fixed priority preemptive scheduling when it comes to real-time scheduling. Using the API provided by those extensions (POSIX.1b) it is possible to write complex real-time applications that will be portable to a large variety of different operating systems. The existence of the POSIX real-time extensions, and the fact that workstation operating systems (among them, Linux) support large chunks of them, is a clear example of successful efforts in pushing real-time into general purpose kernels. Anyway, to deal with real-time aperiodic activities, the POSIX standard suggests using a variant of the Sporadic Server algorithm, an effective solution that has been thoroughly studied and analyzed over the years. However, this (optional) extension is supported only in a few readily available operating systems that claim to be POSIX compliant, probably because of concerns about implementation complexity and run-time overhead. The remainder of the paper is organized as follows: Sec. 2 gives the needed background on real-time systems and briefly goes through related work. Sec.
3 summarizes the POSIX real-time extensions and the extent of Linux's support for them. In Sec. 4 we describe in detail our design and implementation work and, finally, in Sec. 5 we show some simulation results and illustrate our detailed test plan. Sec. 6 draws the conclusions and suggests some future work.

2 Background and Related Work

In this section we provide what we believe is the necessary background on real-time systems, especially w.r.t. fixed priority scheduling. We also go through related work and the few other implementations of the Sporadic Server in general purpose and real-time operating systems.

2.1 Real-Time Systems Basics

A real-time system can be defined as a set of activities with timing requirements: a computer system the correctness of which depends not only on the correctness of the computation results, but also on

the time instant at which those results are produced. Due to this time consciousness characteristic, the CPU scheduler of a real-time operating system (RTOS) is required to pursue predictability more than absolute throughput or any other performance metric. In more detail, there exist hard real-time systems, which fail completely if the timing constraints they are subject to are not respected. Soft real-time systems, on the other hand, can tolerate occasional deadline misses, considering these events only temporary failures. Hard real-time systems are used in critical activities: nuclear power plants, flight control systems, etc. Soft real-time systems are typically non-critical systems, such as multimedia, entertainment or communication systems.

Real-Time Scheduling

In the real-time scheduling literature each activity is usually called a task, and the i-th task is called τ_i. Each task consists of a stream of jobs J_{i,j} (j = 1, 2, ..., n), and each job is characterized by an arrival time a_{i,j}, an execution time c_{i,j}, a finishing time f_{i,j}, an absolute deadline d_{i,j}, and a relative deadline D_{i,j} = d_{i,j} − a_{i,j}. A hard real-time task is also characterized by a worst case execution time (WCET), C_i = max_j {c_{i,j}}, and a minimum inter-arrival time (MIT) between consecutive jobs, T_i = min_j {a_{i,j+1} − a_{i,j}}. A task is periodic if a_{i,j+1} = a_{i,j} + T_i for every job J_{i,j}; otherwise it is called sporadic. For all the hard real-time tasks in the system, f_{i,j} ≤ d_{i,j} must hold for every job J_{i,j}, to avoid critical failures. Usually, knowing (C_i, T_i) a priori for each hard real-time task is sufficient to perform some kind of feasibility analysis and guarantee timely behavior in all possible situations. As said, for soft real-time tasks some deadline misses (f_{i,j} > d_{i,j}) are tolerated. Usually hard real-time tasks are periodic activities, while soft real-time ones issue requests for job activation in an aperiodic or sporadic fashion.
Scheduling a real-time system means trying to find an order of execution of all the jobs of all the tasks, possibly interleaved with one another if preemption is enabled, such that each of them meets its deadline.

Fixed Priority Scheduling

Real-time scheduling algorithms often use the concept of priority: a property assigned to each task to account for its relative importance in the system. The algorithm chooses which task to serve on the basis of its assigned priority. We can distinguish fixed priority algorithms, where the priority assigned to each task does not (automatically) vary along the task lifetime, and dynamic priority algorithms, where the priority may vary from time to time. An example of an on-line fixed priority scheduling algorithm is the Rate Monotonic (RM [29]) algorithm, where each task τ_i (regardless of whether it is implemented as a thread or as a process) is given a priority p_i inversely proportional to its period: p_i ∝ 1/T_i. Some important results have been derived for the RM algorithm. For example, it has been shown that the RM priority assignment is optimal. Furthermore, defining U_i = C_i/T_i as the processor utilization factor of task τ_i, it is guaranteed that a feasible RM schedule exists if:

U = Σ_{i=1..n} C_i/T_i ≤ n (2^{1/n} − 1)

or if [25]:

Π_{i=1..n} (U_i + 1) ≤ 2

Since the first worst-case utilization bound decreases monotonically (down to ln(2) as n approaches infinity), with RM the total utilization of the processor shall not exceed about 69%. However, this is only a sufficient condition, and it is quite pessimistic, since such a worst-case task set is contrived and unlikely to be encountered in practice. To decide the schedulability of a given task set whose utilization is greater than this worst case least upper bound, other tests exist and can be found in [31].
These are exact schedulability criteria for independent periodic tasks under RM, and they are less pessimistic than the previous bound, but more complicated and time consuming to implement and run on-line. On the other hand, one of the best-known dynamic priority scheduling algorithms is the Earliest Deadline First (EDF [29]) algorithm. In EDF, the task τ_i whose job has the earliest absolute deadline d_i is assigned the highest dynamic priority. It is well known that EDF, thanks to the dynamic priority assignment, is able to provide 100% CPU utilization, while RM can only guarantee a fraction of that. Notice that possible contention among the various tasks, e.g., for acquiring a lock on a shared resource, should also be taken into account, and that there exist protocols and theoretical formulations for this. Examples are the Priority Inheritance and Priority Ceiling protocols (PIP and PCP, [30]), but it is beyond the scope of this paper to describe them. Even if comparing fixed and dynamic priority algorithms is not among our aims (the interested reader can refer to [24]), it is worth saying that, even

though less efficient in terms of utilization, Rate Monotonic is simpler to implement (the number of priority levels is bounded and usually low). For example, it can be implemented efficiently by splitting the ready queue into several FIFO queues, one for each priority level. On the other hand, implementing EDF can be quite complex, since absolute deadlines change from one job to the next and the new priority needs to be computed at each job activation, reordering the ready queue. These could be some of the reasons why the POSIX real-time extensions (IEEE Std 1003.1b-1993) only prescribe fixed priority scheduling to be implemented.

Hierarchical Scheduling

Hierarchical scheduling generalizes the capabilities of CPU schedulers by allowing them to schedule not only tasks but also other schedulers. In the most general case, each of the schedulers can be different from the others and able to enforce its own policy and rules for the entities, i.e., tasks and other schedulers, that it is in charge of. However, it is also common to restrict these capabilities a little, for example with the constraint that all the schedulers in the hierarchy have to act according to the same algorithm. This way the concept of hierarchical scheduling collapses into so-called multiserver scheduling or group scheduling, as a scheduler collapses into a task group (or server). What happens is that each entity to be scheduled can be either a task or a group of tasks, and it is still provided with the characteristic parameters of the chosen scheduling algorithm. This means each entity will have a fixed priority if a group version of Rate Monotonic is being used, or alternatively a deadline, if EDF is in charge. Whenever a scheduling decision has to be taken, we start from the root of the hierarchy and the rules of the algorithm are applied to choose one of the entities.
As one can imagine, if the chosen entity turns out to be a task group, the same operation is performed recursively, until an entity which is a task is selected. Notable theoretical results exist for truly hierarchical and group scheduling as well, for example those illustrated in [26]. Unfortunately, they are quite complex to describe, and going into their details is out of the scope of this paper.

Aperiodic Task Scheduling

As can easily be seen, all the considerations of the previous sections apply at their best if tasks are perfectly periodic activities. This can be quite common in hard real-time environments, but is an unacceptable constraint for soft real-time or mixed real-time and non real-time systems. Furthermore, also in hard systems, sporadic or even aperiodic activities may be present, for example for servicing interrupt requests from hardware devices, or to handle non-critical tasks, user interfaces and the like. In those situations, the main concern is typically twofold:

1. guarantee that aperiodic/sporadic activities do not jeopardize the guarantees provided to periodic tasks;

2. execute the aperiodic/sporadic activities providing them with fast response times, both for each instance and on average.

Actually, fixed priority scheduling suffers from some issues in this respect, since it has been proved [15] that there is no algorithm that can minimize the response time of every aperiodic request and yet guarantee schedulability of the periodic tasks. Anyway, since we are interested in fixed priority scheduling of mixed periodic and aperiodic task sets, we describe, in the following paragraphs, some of the proposed algorithms and strategies that make it possible to integrate aperiodic task scheduling into the Rate Monotonic algorithm.
For all of them (with the only exception of background service) the idea is to create an aperiodic server, i.e., a task that is scheduled together with the periodic tasks of the system and that gives the aperiodic requests a chance to run. This is usually achieved by considering one or more new tasks, each with a period T_s and a budget Q_s (both specified in time units). When an aperiodic or sporadic task wants to run, its execution request is queued, and it is satisfied only when the server is scheduled. Furthermore, as the aperiodic task execution proceeds, the budget of the aperiodic server is diminished by the same amount, until it reaches zero. At that point in time the server is usually no longer considered a ready-to-run task. The strategy with which the budget gets replenished and the server is reactivated is what characterizes the specific algorithm and what determines its properties, advantages and drawbacks.

Background Service

A trivial solution is to execute aperiodic tasks only in the background, i.e., when the CPU is idle because all the released periodic jobs have already been completed for their current execution period. This is typically very simple to achieve and does not affect the scheduling guarantees provided to periodic tasks. Obviously, if the periodic load is high, the utilization left for background service may be small, and background service opportunities relatively infrequent.

Polling Server

A Polling Server (PS) is a periodic task used to serve aperiodic requests. At regular intervals, the PS task is started and it services pending requests, if there are any; otherwise it is suspended until the next period. Note that if requests occur while the task is suspended, they have to wait until the next activation of the PS. The algorithm can be used without modifying the Rate Monotonic scheduling analysis: we simply have to add a new periodic task with the period of the PS and a worst case execution time equal to its budget. The main drawback is that aperiodic response times can still be quite long. The closed formula that can be used to bound the response time of an aperiodic execution request arriving at r_a with a computational demand known to be C_a, served by means of a Polling Server with period T_s and budget Q_s, is:

r_a + T_s + (⌊C_a/Q_s⌋ + 1) · T_s

Deferrable Server

The Deferrable Server (DS [20]) algorithm is similar to the Polling Server, the only difference being that it preserves its budget even if no aperiodic requests are pending (the bandwidth preserving property). Since DS retains the budget, aperiodic requests can be served at any time, as long as the budget itself has not been exhausted. When this happens, the server is no longer eligible, and so further aperiodic requests have to wait until the budget is replenished, at the very beginning of the next period. Thus, the DS algorithm can provide better aperiodic responsiveness than PS, and it is also still easy to implement. However, the price for this is a schedulability penalty, in terms of a lower schedulable utilization bound. In fact, if a DS with utilization U_s is used to serve aperiodic requests, the Rate Monotonic asymptotic least upper bound the periodic load has to stay below becomes:

ln( (U_s + 2) / (2 U_s + 1) )

which is worse than the previous one.
Sporadic Server

The Sporadic Server (SS [28]) is a variation on the Deferrable Server with a capacity planning mechanism that makes it possible to consider an SS task as a supplementary periodic task with period and execution time equal, respectively, to the period and the budget of the server. Basically, SS only replenishes its budget after some or all of the execution time has been consumed by aperiodic tasks, and its particular method of replenishment scheduling is what sets it apart from other algorithms. In more detail, what is important to determine is the time at which a replenishment has to occur and how much of the budget should be recovered. This happens according to the following rules, for a Sporadic Server task S_SS = (Q_SS, T_SS) with priority p_SS:

1. if the server still has available budget and the priority of the running task is greater than or equal to p_SS, the replenishment time RT_SS is set. The value of RT_SS is the current time plus T_SS;

2. the amount to be replenished at RT_SS is set when the priority of the running task becomes lower than p_SS, or when the budget is exhausted. At that time, the budget will be replenished by the time consumed between the last instant at which the priority of the running task changed from lower to higher than p_SS, and the current time.

The main advantage of SS over PS is the bandwidth preserving property: SS is often able to provide immediate service to an aperiodic task. But that property also has some drawbacks, and the reason why SS is preferable to DS is that it is able to compensate for them, thanks to its replenishment strategy. So we can say that the Sporadic Server has some clear advantages over DS and PS, while the least upper bound of its sufficient schedulability region is (n+1) (2^{1/(n+1)} − 1), where the n+1 accounts for the fact that the SS is considered a supplementary periodic task.
Furthermore, the performance of SS, as regards aperiodic task response times, is comparable with that provided by DS, as reported by many studies [28, 24]. SS also reduces replenishment overhead, since no replenishment is scheduled until the budget has been fully consumed.

2.2 Related Work

Since we are implementing the POSIX SCHED_SPORADIC scheduling policy in the Linux kernel, we are interested both in any other OS supporting it and in prior work performed on the Linux kernel (or its variants) dealing with sporadic servers. Among commercial operating systems, support for SCHED_SPORADIC is claimed only by two of them: the QNX Neutrino microkernel and the Real-Time Executive for Multiprocessor Systems (RTEMS). QNX is a POSIX-compliant RTOS aimed primarily at the embedded, especially automotive, systems market. It is microkernel-based and the OS runs in a number of small user-space servers. RTEMS is a real-time executive providing a hard

real-time environment for embedded critical applications. It exports many different APIs, among which one adheres to POSIX 1003.1b. Regarding previous attempts at implementing SCHED_SPORADIC in Linux or any of its variants, we have been able to find some work on RTLinux [1, 23]. Both describe the implementation of the scheduling policy in the RTLinux kernel, with [1] also considering many other fixed priority aperiodic server algorithms (Deferrable Server, Slack Stealer and Priority Exchange); despite that, the current RTLinux distribution still does not include POSIX SCHED_SPORADIC scheduling. Anyway, we are quite convinced that these two works cannot be compared with ours. In fact, the Linux kernel and the RTLinux hard real-time executive layer are completely different environments, where different implementation solutions have to be adopted and different behavior and performance can be achieved. There also exist two recent works regarding the implementation of SCHED_SPORADIC in the standard Linux kernel: [16, 21]. This reinforces our belief that this is becoming quite an interesting topic within the real-time community. Anyway, we think our work is quite different from those two, mainly for the following reasons. First, we are proposing a slight extension of the semantics of the policy rules, so that they become applicable to a hierarchical system. In fact, our implementation affects the Linux group scheduling infrastructure and makes it possible to create SCHED_SPORADIC task groups (more on this in Sec. 4.1). Neither [16] nor [21] deals with this. Second, since Linux is a multiprocessor-capable kernel, our code has been designed and tested not to interfere with how real-time multiprocessor scheduling is implemented in Linux, and to be fully SMP-safe. Other minor differences are: the code provided by [16] and [21] consists of patches to be applied on top of some specific version of the kernel.
Rather, we are tracking the current development of Linux and continuously updating our patch to be applicable to the very latest status of the code. We have also released our code and submitted it to the community on the Linux Kernel Mailing List (LKML), so as to get useful comments and suggestions. Indeed, our aim is to try to get the SCHED_SPORADIC scheduling policy merged into the mainline kernel distribution, if we are able to convince other developers that it could be a useful feature to have. Finally, [10, 22] are research papers where support for SCHED_SPORADIC in the Linux kernel is advocated as a valid means of fitting device drivers and hardware handling routines into an analyzable scheduling framework. [17] reports some preliminary work on the Sporadic Server algorithm used together with hierarchical scheduling.

3 The POSIX Standard

POSIX stands for Portable Operating System Interface and is the collective name of the IEEE 1003 (or ISO/IEC 9945) family of standards, jointly developed by the IEEE Portable Application Standards Committee (PASC) and the Austin Common Standards Revision Group (CSRG) of The Open Group. Its origins date back to the mid-1980s. The latest version is, formally, The Open Group Base Specifications Issue 6, or IEEE Std 1003.1, 2004 Edition, but it is still commonly referred to simply as POSIX.

3.1 POSIX Real-time Extensions

It was in 1998 that IEEE Std 1003.13, the first POSIX real-time profile, was published. More recently, in IEEE Std 1003.1 the X/Open System Interface (XSI) extensions, grouped together to form the so-called XSI Option Groups, have been defined.
Nowadays, a compliant XSI implementation has to support at least the following options: file synchronization, memory mapped files, memory protection, threads, thread synchronization, thread stack address attribute and size. It may also support a number of other option groups:

Real-time: asynchronous, synchronized and prioritized I/O; shared memory objects; process and range based memory locking; semaphores; timers; real-time signals; message passing; process scheduling.

Advanced Real-time: clock selection, process CPU-time clocks, monotonic clock, timeouts, typed memory objects.

Real-time Threads: thread priority inheritance and protection, thread scheduling.

Advanced Real-time Threads: thread CPU-time clocks, thread sporadic server, spin locks and barriers.

The SCHED_SPORADIC scheduling policy we have implemented is part of both the Real-time group and the Real-time Threads group, as an optional part of process scheduling and thread scheduling.

3.2 POSIX Scheduling

POSIX describes CPU scheduling by means of the concept of task lists. There shall be a number of different priorities and one task list for each priority. There shall also be different scheduling policies and, for each of them, a different handling of this set of lists can occur. Each policy is also associated with a priority range. A conforming implementation is required to make the task at the head of the highest priority non-empty task list the running task. This means tasks can only be preempted by other tasks that are in a task list of higher priority than their own. The following scheduling policies are defined (SCHED_SPORADIC is optional):

SCHED_FIFO
SCHED_RR
SCHED_SPORADIC
SCHED_OTHER

Below, we describe in some detail the three real-time scheduling policies. The fourth policy, SCHED_OTHER, is the one commonly used to schedule non real-time tasks, and dealing with it is out of the scope of this work.

SCHED_FIFO

Tasks under SCHED_FIFO are scheduled by choosing them from a task list where they are ordered by the time of their arrival on it. Generally, the head of the list, for each priority, is the task that has stayed on that list for the longest time. When a SCHED_FIFO task is preempted, it becomes the head of its own task list; when it blocks or yields execution, it becomes the tail. All this means that the highest priority task having this scheduling policy is allowed to run undisturbed until it blocks, voluntarily relinquishes the CPU or completes.

SCHED_RR

Tasks under SCHED_RR are scheduled identically to SCHED_FIFO ones.
They only have to fulfil the additional constraint that, after having exceeded a maximum execution quantum, the running task becomes the tail of its task list, and the task at the head of it shall become the running one. The duration of the quantum is implementation defined. This guarantees that, if there are multiple SCHED_RR tasks at the same priority, none of them monopolizes the processor.

SCHED_SPORADIC

SCHED_SPORADIC is particularly interesting to us, since it is the policy that is missing in Linux and that we have implemented. For this reason we describe it in a little more detail than the previous two. For this policy to be implemented, struct sched_param, which accommodates the parameters of each scheduling policy, has to be enlarged. In fact, if only the SCHED_FIFO and SCHED_RR policies are supported, it is sufficient for struct sched_param to contain a single integer field, sched_priority, being the priority of the task. As we saw in Sec. 2, the Sporadic Server needs both a period and a budget, and so two parameters, sched_ss_repl_period and sched_ss_init_budget, have to be added. Moreover, SCHED_SPORADIC also prescribes two more parameters to be within this structure: sched_ss_low_priority and sched_ss_max_repl. The scheduling of a SCHED_SPORADIC task is controlled primarily by sched_ss_repl_period and sched_ss_init_budget. Like SCHED_RR, SCHED_SPORADIC is identical to SCHED_FIFO, with additional conditions that cause the priority to be switched between sched_priority and sched_ss_low_priority. The actual priority of a SCHED_SPORADIC task is:

sched_priority, if its current budget is greater than zero and the number of pending replenishments is less than sched_ss_max_repl;

sched_ss_low_priority otherwise.

The modification of the current budget of a task is done as follows:

1. when a task becomes running, its execution time shall be limited to at most its current budget;

2.
each time a task is inserted in the sched_priority task list (because it became runnable or because of a budget replenishment), the current time is recorded as the activation time;

3. when a task running at sched_priority is preempted, the execution time it consumed is subtracted from its current budget;

4. when a task running at sched_priority blocks, the execution time it consumed is subtracted from its current budget and a replenishment operation is scheduled;

5. when a task running at sched_priority exhausts its current budget, it becomes the tail of the sched_ss_low_priority task list, and a replenishment operation is scheduled as well;

6. each time a replenishment is scheduled, replenish_amount is set to the execution time consumed since the activation time, and the replenishment is scheduled to occur at the activation time plus sched_ss_repl_period (or immediately, if that instant has already passed);

7. a replenishment consists of adding replenish_amount to the current budget of the task. If the task was running at sched_ss_low_priority, its priority immediately becomes sched_priority again.

Furthermore, consider that the number of simultaneously pending replenishments shall not be greater than sched_ss_max_repl, and the current budget of a task shall be neither lower than 0 nor greater than sched_ss_init_budget. All this means that SCHED_SPORADIC behaves exactly like SCHED_FIFO toward non real-time tasks, but it may affect which real-time task is selected to run, by continuously switching the priority of SCHED_SPORADIC tasks. It is easy to see that, with some small differences due to the implementation-oriented perspective of the POSIX specifications (e.g., the limited number of replenishments), SCHED_SPORADIC can be used to mimic the Sporadic Server algorithm described in Sec. 2. The only significant difference is that a SCHED_SPORADIC task that exhausts its budget is not forbidden to run until the next replenishment: rather, it is requeued in a different task list, the one corresponding to sched_ss_low_priority.

3.3 POSIX Support in Linux

Even if nobody has probably ever attempted to write a conformance document for it, Linux supports large chunks of the POSIX standards, as the following paragraphs try to summarize, focusing on the features that are part of the real-time option groups.
Automated tools to test the behavior of the Linux kernel, and of a GNU/Linux system as a whole, exist: for example, the Open POSIX Test Suite (OPTS) and the NPTL Trace Tool [2, 19]. They can be used to verify in great detail what is missing or what misbehaves with respect to the specifications. What we are interested in pointing out here is that Linux already supports almost all the features a POSIX compliant real-time system requires, with the SCHED_SPORADIC scheduling policy being the biggest gap.

Real-Time Signals The POSIX real-time extensions to standard Unix signal generation and delivery are fully supported in the standard Linux kernel, with the following characteristics: signal numbers range from SIGRTMIN to SIGRTMAX; signals can carry a small piece of data; they can be queued and are delivered in FIFO order; it is possible to create a thread in response to a signal.

Asynchronous I/O (AIO) Although the most widely used I/O model in general purpose applications is the synchronous one, issuing asynchronous operations can be quite useful in real-time contexts. Linux supports such a model, making it possible to perform asynchronous I/O in all the following ways: asynchronous blocking I/O, which means issuing multiple non-blocking requests and waiting for some of them to complete; synchronous non-blocking I/O, which means issuing a non-blocking request and then polling the kernel to know whether it completed; asynchronous non-blocking I/O, which means issuing non-blocking requests and being notified by the kernel (by a signal or a callback function) about their completion.

Memory Management Since a real-time task typically cannot tolerate the unpredictable overhead introduced by virtual memory and paging, POSIX requires that those features can be disabled, at least for a specified range of addresses. For that purpose, Linux coherently implements the mlock and mlockall system calls.
Furthermore, as shared memory is the most widely used communication paradigm in real-time applications, Linux supports both POSIX.1b memory mapped files (mmap) and shared memory regions (shm_{open, unlink}).

CPU Scheduling The Linux scheduler supports the SCHED_FIFO, SCHED_RR and SCHED_OTHER scheduling policies, with SCHED_SPORADIC still missing. There are 140 priority levels, some of which are dedicated to the real-time policies and available only to users with sufficient capabilities. As for SCHED_OTHER, the old heuristic-based algorithm has recently been replaced by a new one, the Completely Fair Scheduler (CFS). It avoids interactivity guessing by adopting a straightforward mechanism which tries to ensure that all tasks at the same static priority level are treated equally, and that their dynamic priority varies depending on whether or not they are getting their fair share of CPU.

Timers The Linux kernel supports both the interval timer (BSD itimers) interface and the POSIX.1b real-time per-process timer functions (timer_{create, settime, etc.}). Clock selection is possible among CLOCK_REALTIME, CLOCK_MONOTONIC and CLOCK_{PROCESS, THREAD}_CPUTIME_ID. It is worth noting that, even if POSIX does not strictly specify any given timer resolution to be achieved, the Linux kernel has recently gained support for high precision timers (high resolution timers, hrt). This means that, for hardware platforms that include sufficiently accurate time sources, one can obtain tenths-of-microseconds precision in time accounting, both from within the kernel and in user space.

Threads Since the introduction of the Native POSIX Threads Library (NPTL [13]), the Linux kernel supports multithreading much better than it did by means of the old LinuxThreads implementation. In particular, POSIX compliance has been improved, especially by solving some odd issues in signal handling. The implemented thread model is 1-to-1, i.e., one kernel thread for each user level thread, with both low complexity and low overhead. As required by POSIX, the scheduling policies a thread and a process can use are the same.
Thread Synchronization Synchronization and mutually exclusive execution of code sections are key points of a POSIX-like multithreading environment. The Linux kernel provides a variety of in-kernel synchronization primitives, such as spinlocks, mutexes, real-time mutexes, read-copy update, and so on. It also exports to user level the fast userspace mutex synchronization object (futex [6, 7]), on top of which the POSIX thread mutexes and condition variables are implemented.

Thread Priority Inversion Control Since the use of classical mutexes is prone to the well known phenomenon of priority inversion, the POSIX real-time extensions require priority inheritance and a variant of priority ceiling to be supported. In Linux, PTHREAD_PRIO_INHERIT for POSIX mutexes is implemented by means of futexes and real-time mutexes (struct rt_mutex). PTHREAD_PRIO_PROTECT is implemented in the GNU C Library, using futexes only.

4 SCHED_SPORADIC Design and Implementation

In this section, we give a picture of the present Linux scheduler code architecture, and then provide all the details about our design and implementation activity. As regards our motivations, i.e., the reasons why we decided to implement SCHED_SPORADIC and try to get it merged, we can summarize them this way: 1. as shown in Sec. 2, there seems to be a lot of interest, especially from the research community, both in hierarchical fixed priority scheduling by means of Sporadic Servers and in POSIX SCHED_SPORADIC support in Linux; 2. we think Linux would benefit from improved POSIX conformance (even if this may not be achieved to its full extent, see below), gaining even more interest than it now has, especially from deeply embedded system designers; 3. since, as described in Sec. 4.1, real-time group scheduling is already implemented, we think it would be a positive thing to make it more analyzable and theoretically well established.
Anyway, whether or not it gets integrated into mainline Linux, we will use this implementation of hierarchical SCHED_SPORADIC scheduling for our future research and implementation activities. The code has already been sent to the LKML, and so it is freely available there. It can also be downloaded from the authors' web page ( faggioli/), or obtained by asking the authors via e-mail.

4.1 Linux CPU Scheduler

As repeatedly said, Linux has, except for SCHED_SPORADIC, a POSIX conforming scheduler

with support for real-time and non real-time policies. The only supported non real-time policy is SCHED_OTHER but, since Linux is mainly a general purpose kernel, it is by far the most used one. The code structure has quite recently been reworked and turned into the so-called Modular Scheduler Framework, and has also been provided with the group scheduling capability. Both of these features are briefly described in the following paragraphs.

Modular Scheduler Framework Recently, the monolithic scheduling core of the Linux kernel has been replaced by an extensible hierarchy of scheduler modules, where each scheduling module (scheduling class) is implemented in a different source file. Currently, only two modules are present: the fair scheduler module in sched_fair.c, implementing the SCHED_OTHER scheduling policy, and the real-time module in sched_rt.c, implementing the SCHED_FIFO and SCHED_RR policies. The design is not as flexible as it may appear, e.g., there is no way to dynamically prioritize a scheduling class, and adding new classes is definitely not trivial. Obviously, in the module hierarchy, which is a simple linked list of scheduling classes, the real-time class appears in the first position. The interface each class has to implement is relatively small, and it basically comprises the following functions:

enqueue_task()
dequeue_task()
requeue_task()
task_tick()
check_preempt_curr()
pick_next_task()
put_prev_task()

We think the names of these functions already explain what they are meant for, and we do not go into more detail for space reasons. Both sched_fair.c and sched_rt.c provide the core scheduler with their own implementations of each of these functions. To deal with a specific scheduling event, the core scheduler invokes the correct implementation of the specific function, depending on the scheduling class the involved task belongs to. The core scheduler is implemented in the sched.c source file.
Linux and Group Scheduling Another interesting, recently introduced feature of the Linux kernel is group scheduling support, already described earlier. Both tasks and task groups are considered scheduling entities, and a task entity can be moved underneath a group entity by means of a specific interface. Moreover, each group scheduling entity has a run queue, and a ready-to-run scheduling entity is always put in the run queue associated with its parent entity. When the scheduler goes to pick the next task to run, it chooses among the top-level scheduling entities. If the chosen entity is not a task, the scheduler looks further inside its run queue, and this is iterated until a task is picked. In the domain of general purpose scheduling (in sched_fair.c), this feature is used to enforce fairness among different task groups, instead of only among different tasks. For real-time scheduling, it is used to try to allocate some guaranteed run time to each task group over a given period. The way this is handled, by means of a budget (runtime) consumed by the entities running inside a group and periodically recharged, resembles an aperiodic request management algorithm. Actually, it could be seen as mimicking a Deferrable Server but, indeed, it is something different from everything we have in theory. Unfortunately, what we defined earlier as group or hierarchical scheduling is not fully implemented yet. The problem is that real-time task groups cannot be given their own priority to be used for scheduling them under Rate Monotonic, as one would expect. By now, the priority of a task group is defined to be equal to the highest priority among the tasks and task groups it contains. This makes it possible to affect the POSIX CPU scheduling behavior as little as possible, but it breaks hierarchical real-time theory, and makes all its results unusable. Finally, an effective interface has been devised in order to make grouping and ungrouping possible.
It is the process container interface, which is part of the more general Control Groups subsystem, and it works entirely by means of standard file and directory operations on a specially mounted filesystem.

4.2 SCHED_SPORADIC in Linux

SCHED_SPORADIC is one of the real-time scheduling policies specified by the POSIX standard, together with SCHED_FIFO and SCHED_RR. For this reason, the best place for the code within the Linux scheduling framework is inside the real-time scheduling class, in the sched_rt.c source file. The rules reported in Sec. 3.2 have to be enforced by modifying the implementation of the scheduling class functions, in particular dequeue_task_rt(), set_curr_task_rt(), update_curr_rt(), enqueue_rt_entity() and dequeue_rt_entity(). Obviously, some new functions to handle the budget accounting and the replenishment scheduling have to be added, and other source files are affected by minor modifications as well, mainly sched.c. Below we show, through the diffstat output, some quantitative data about the amount of insertions, deletions and modifications introduced by our patch to the Linux kernel:

arch/x86/kernel/syscall_table_32.S |   3
include/asm-x86/unistd_32.h        |   4
include/asm-x86/unistd_64.h        |   7
include/linux/sched.h              |  79
init/Kconfig                       |  42
kernel/exit.c                      |   5
kernel/fork.c                      |  15
kernel/sched.c                     | 451
kernel/sched_rt.c                  |
9 files changed, 1130 insertions(+), 18 deletions(-)

SCHED_SPORADIC Group Scheduling Extending SCHED_SPORADIC to Linux task groups is straightforward, since all seven rules characterizing the scheduling policy described in Sec. 3.2 can be applied to a task group as well. The first thing we have to do is to provide each task group with the typical SCHED_SPORADIC parameters, so that each of them has its own budget to consume, and its own period for scheduling replenishments. Also, we think that the following semantic adaptations are sufficient:

accounting: when a task executes for a given amount of time, the budgets of all the groups encountered ascending its hierarchy up to the root are diminished by the same amount;

blocking: a task group is considered to be blocking when it is being removed from a run queue because it no longer has any tasks or task groups to run inside its own queue;

unblocking: a task group is considered to be unblocking when it is being added to a run queue because it again has some task or task group able to run inside its own queue.

It is easy to see that, by means of these extensions, our implementation makes task groups behave like Sporadic Servers inside a Rate Monotonic based hierarchical scheduling framework, which is a well known and analyzable situation.
Although this is not yet perfectly true, due to the discrepancies between true hierarchical scheduling and the Linux group scheduling one, i.e., the lack of independent task group priorities, we think it is at least a starting point.

Schedulability Test Finally, note that we have not implemented any kind of schedulability admission test yet. This is because, as said earlier, it would not be simple to code and run one on-line. Rather, we think that the application designer, knowing in greater detail what he exactly needs, can implement his own task and group admission strategy. Moreover, the present implementation of group scheduling in Linux, with groups not having their own priority but deriving it from the tasks (and other task groups) they contain, does not fully adhere to what real-time theory calls a hierarchical system. For this reason, none of the existing schedulability analysis results is directly applicable, unless we alter the schedulability test or turn the model into a truly hierarchical one (which actually is in our future work list).

4.3 Implementation Details

Here we give the details of the implementation of SCHED_SPORADIC. We also discuss some issues that arose concerning the interface, and explain why it may not be a good idea to strictly adhere to what the POSIX standard prescribes.

SCHED_SPORADIC Configuration When configuring the Linux kernel after our patch has been applied, it is possible to choose whether SCHED_SPORADIC scheduling has to be enabled, by means of CONFIG_POSIX_SCHED_SPORADIC. If that option is selected, the SCHED_SPORADIC policy becomes available for Linux tasks, while the sporadic group scheduling and its interface are compiled iff CONFIG_RT_GROUP_SCHED is also enabled. As a dependency of CONFIG_POSIX_SCHED_SPORADIC, the numeric input field CONFIG_SCHED_SPORADIC_REPL_MAX appears, to set up how big the maximum number of pending replenishments (SS_MAX_REPL) should be.
Since we are aware of the fact that, along with a new feature, we are also introducing some overhead, making it configurable and tunable has been a key issue for our design. Notice that all our code is written so as not to cause problems if compiled for, and executed on, a multiprocessor machine, so there is no dependency on the kernel being UP. On an SMP system, the standard Linux migration mechanisms that handle real-time scheduling on multiprocessors are left in place and are not affected at all by our patch.

SCHED_SPORADIC Data Structures The core of our implementation is the data structure that stores the status of a SCHED_SPORADIC

task or task group. As said in previous sections, each Linux task and task group has a scheduling entity. For tasks, the scheduling entity is the basis of each and every scheduling decision; for task groups this is not true, since group scheduling is based on the information stored in the group run queue and within the task group data structure itself. So, our struct sched_sporadic_data field is stored inside the scheduling entity of SCHED_SPORADIC tasks, and inside the task group data structure for task groups. Our main concern was not to impose too much memory overhead on the system for implementing a scheduling policy which, especially for tasks, will probably be rarely used. In fact, especially in a standard (non real-time) Linux installation, only few tasks use the real-time policies (SCHED_FIFO and SCHED_RR), and so it is natural to expect the number of tasks that are going to use SCHED_SPORADIC to be even smaller. The problem is worsened by the fact that we need some kind of buffer of pending replenishments. It can be implemented as an array, since its maximum size is specified by POSIX to be constant. The array solution is also superior, with respect for example to a linked list, since it does not require dynamically creating a new list element every time we have to add one. In fact, this may happen every time a task blocks, and dynamically allocating memory on all such events would have caused unacceptable overhead. Unfortunately, a data structure containing some 64-bit integer fields for the task or group status, a timer for the replenishments, and an array of possible pending replenishments, each one with its replenishment time and amount, tends to be quite a big piece of kernel memory. Given this, our solution is to store only a pointer inside the real-time scheduling entity data structure, and dynamically allocate the memory for our struct sched_sporadic_data when a task chooses SCHED_SPORADIC as its scheduling policy.
We still have to deal with dynamic memory allocation, but doing it at each scheduling policy change is by far more acceptable than at each task deactivation event. For task groups, on the other hand, we add our data structure to each task group. We think this is a good solution, since the number of task groups created in a system is much smaller than the number of tasks it runs. Since we are dealing with dynamic memory allocations, it is difficult to evaluate the memory overhead imposed on the system as a whole. In fact, the total memory overhead SCHED_SPORADIC introduces fully depends on how many task groups are created in the system, and on how many tasks use SCHED_SPORADIC as their scheduling policy.

SCHED_SPORADIC Tasks Going into a little more detail on the implementation of SCHED_SPORADIC for tasks, we can say that we spent a lot of effort on integrating our code into the present Linux real-time scheduling class (sched_rt.c) while keeping the modifications to a minimum. The normal real-time Linux scheduling code path is only altered by calling one function on the scheduling entity of the current task, rt_se_ss_budget_exhausted(), which checks whether we are dealing with a SCHED_SPORADIC task and whether it is running out of budget. This happens inside the existing update_curr_rt() function, which is the one responsible for accounting the execution time to the task. For that reason, it is called at each scheduler tick, and at each task deactivation or preemption. In our function, if the budget of the task is exhausted, we honor the POSIX prescribed behavior and post a recharge event, as well as requeue the task inside a different run-list, since we have changed its priority to the low level value. For recharge events to happen, we use one timer for each task, and so we wrote the rt_se_ss_repl_timer() function, the handler of that timer.
Having only one timer, this function not only has to perform the scheduled replenishment, but also to check whether any other event has already been posted, and to reprogram the timer to fire again at the right time instant. Since replenishments can only be requested in sequentially increasing time order, we preferred to adopt this design, avoiding one timer for each replenishment event. Again, considering that the number of SCHED_SPORADIC tasks in most configurations is probably going to be quite low, the overhead we introduce in the system by adding a timer for each of those tasks is not too high. Also, consider that, thanks to the possibility of using the high resolution timers from within the scheduler related functions, we are able to serve the replenishment events with as much precision as the hardware is able to provide.

SCHED_SPORADIC Task Groups When both SCHED_SPORADIC and real-time group scheduling are configured by the user, what we described before for tasks also happens to task groups, with the semantics we already defined and explained in the previous sections. What we do is essentially the same, and it is also similar to what the present Linux kernel code does to make real-time group scheduling effective. We have another function, called rt_rq_ss_is_exhausted_budget(), which checks whether a task group is to be scheduled under our

SCHED_SPORADIC adaptation and, if so, whether it is running out of budget. Obviously, such a function is called for each task group into which the current task is enqueued, from its own parent group up to the hierarchy root group. We also have one timer for each task group, so that replenishments happen at their scheduled time; the handler function is rt_rq_ss_repl_timer().

SCHED_SPORADIC Interface POSIX is clear about the scheduler's exported user interface: it is essentially made up of a handful of functions, the most important ones being sched_setscheduler(), sched_getscheduler(), sched_setparam() and sched_getparam(). As regards data structures, the one of paramount importance is struct sched_param, which necessarily contains an integer field called sched_priority, accommodating the priority of a task. If SCHED_SPORADIC scheduling is supported, it should also contain some other fields: sched_ss_low_priority, sched_ss_init_budget, sched_ss_repl_period and sched_ss_max_repl, with which we are already familiar. Unfortunately, the Linux kernel already implements and exports the struct sched_param data structure, containing only the int sched_priority field. One might think that the best way to go is to extend this data structure and (slightly) modify the previously listed functions, in order to deal with the new scheduling policy and parameters. Actually, this is what we did in the first version of our implementation, but we soon discovered that this approach suffers from a so-called binary incompatibility issue. To figure out what this means, just consider what would happen to all the binary distributed applications, compiled while struct sched_param was only one integer long, if we changed it into something so much bigger!
Moreover, there exists no trivial solution to this problem, and the only idea we have, which also turned out to be the most appreciated while discussing the problem on the LKML, is not to modify struct sched_param, nor the behavior of the already existing functions and system calls. Instead, we introduce a new scheduling parameter data structure (struct sched_param2) and new syscalls (sched_{set,get}scheduler2() and sched_{set,get}param2()) dealing with it. This is what we do in our second (and present) patch, even if we are disappointed about no longer fully implementing the POSIX interface. However, we think that breaking the binary compatibility of the Linux kernel with all the precompiled applications that use struct sched_param is, definitely, something that we do not want. Summarizing, to set the scheduling policy of a task to SCHED_SPORADIC, we provide almost exactly what POSIX prescribes, except that the names of the functions and of the data structure to be used are slightly different. With respect to groups, since POSIX does not cover the topic of group scheduling, we follow the way the Linux kernel presently works, i.e., we exploit the cgroups filesystem interface. This means each task group is a directory that has to be created under the mount point of the special cgroups filesystem. The files inside such directories that we are interested in are called cpu.rt_ss_period_ns, cpu.rt_ss_budget_ns and cpu.rt_ss_max_repl, and we think their names are enough to understand what each of them is meant for. Notice that the group interface for SCHED_SPORADIC is nanosecond based, exactly as the task interface is.

5 Experimental Evaluation

The code is still at an early development stage. By now, we have only been able to run some behavioral simulations to test its correctness. Moreover, we think this work needs thorough experimental evaluation to prove its usefulness.
Thus, we have identified many different tests and simulations that we would like to perform, either concerning our implementation alone or comparing it with the Linux native group scheduling solution. So, this section contains a brief description of the results of the simple tests we have performed, showing that the algorithm behaves correctly in each scenario. We also describe what kinds of tests, simulations and performance evaluations we think are useful to pinpoint the characteristics of our implementation. Thus, this will serve as our test plan for future work.

5.1 Functionality Tests

Here we show two execution traces showing that our SCHED_SPORADIC implementation behaves as one would expect. In the upper part of Fig. 1 we see the execution trace of a SCHED_SPORADIC task (τ_A) with high and low priority equal to 40 and 20, respectively, replenishment period equal to 100 msec and budget of 50 msec. The task is a greedy one, i.e., it always has something to do. The label of the task is the priority at which it is running. As we can see, the SCHED_SPORADIC policy is effective in throttling the task down to its low priority level when it exhausts its budget, and pushing it back to the high

one just after the replenishments. In the lower part of the same figure, we can see the same task (τ_A) running together with a SCHED_FIFO task (τ_B) with priority equal to 30. As we expected, τ_A is only allowed to run when it is at its high priority level, since while it stays at priority 20, τ_B is scheduled on the CPU.

5.2 Future Planned Tests

There are many aspects of our implementation that we are interested in evaluating thoroughly. The following paragraphs give a glimpse of each of them, along with our planned test and measurement methodology and our expectations.

Execution Time Overhead Evaluation The first thing that we want to establish is how much overhead we are introducing with respect to the standard Linux kernel, both for task and for task group scheduling. In order to do this, we will proceed this way: we prepare a version of our code instrumented with TSC readings at the beginning and at the end of the new functions we have implemented and, more importantly, of the existing functions that we have modified; we look at the output produced by ftrace, the Linux kernel tracer that is being integrated right at the time of our writing, in order to check whether we are worsening the performance of the system with respect to worst case execution latencies (priority inversions, preemption and interrupt disabling, etc.). For both of these analyses we are interested in comparing the results with the original Linux kernel, especially with respect to the group scheduling implementation. In fact, what we expect is that the scheduling of SCHED_SPORADIC tasks will introduce some amount of overhead, but that it falls back to a negligible extent when those tasks are removed. On the other hand, although the Sporadic Server policy is more complex than the heuristic one that the kernel uses for handling the budget of the groups, we think that our group scheduling solution does not introduce that much overhead.
Memory Overhead Evaluation This clearly depends heavily on how many SCHED_SPORADIC tasks and groups are created and used, but it is still an interesting aspect to evaluate. This is particularly true if we think that our solution could be attractive to embedded systems developers, who typically have to deal with systems with tight requirements on kernel memory footprint. Here the idea is to define some meaningful sets of randomly generated SCHED_SPORADIC task and group workloads, and to evaluate the memory overhead introduced in the kernel, both for the binary image and for the up and running system. To make this even more interesting, such an estimation could be repeated for as many hardware architectures as possible, with particular focus on the ones that typically run embedded devices.

Response Time Analysis We think that a good estimation of how good our idea of implementing the Sporadic Server in Linux is could be obtained by evaluating the response times of some (randomly generated) aperiodic activities run under different system load conditions. In order to do this, after having generated the aperiodic tasks, we will run them as SCHED_SPORADIC tasks and/or inside SCHED_SPORADIC task groups, and check with ftrace how many of them fulfil the theoretical expectations and, if not, why and by how much. Comparison with the results we will achieve using the present group scheduling implementation of Linux is also worthwhile.

Real World Applications As said, the vast majority of Linux applications, at least in a standard installation, do not use the facilities provided by the POSIX real-time capabilities of the Linux kernel. However, we are thinking of modifying a real soft real-time Linux application so that it uses SCHED_SPORADIC. This way, it can be provided with some timing guarantees while not starving the rest of the system.
6 Conclusions and Future Works

In this paper, after giving an introduction on real-time computing, POSIX real-time extensions and how well Linux supports them, we presented a slightly extended version of the SCHED_SPORADIC POSIX scheduling policy and its implementation in the Linux kernel. In the recent past, other attempts have been made to implement such a policy, but none of them supports group scheduling, which Linux now ships with. We think both standard and group Sporadic Server scheduling capabilities are valuable extensions to the Linux kernel, pushing it a bit further on the path of supporting analyzable and predictable real-time applications. Now, having the code fully functioning, as shown by a couple of synthetic experiments, we can devote

ourselves to a deeper and more thorough experimental evaluation of it. Regarding future works, beyond what is already described in Sec. 5, we are working on a modification of the fundamentals of group scheduling in Linux, to make it possible to turn it into what is broadly known as a real-time hierarchical system. Moreover, we will re-run all the tests after having adapted our implementation to the rt-preempt variant of the Linux kernel. Finally, some more effort is also needed to make our solution cope with SMP, task synchronization and priority inheritance.

Figure 1: SCHED_SPORADIC tasks running (τ_A alone, top; τ_A together with τ_B, bottom). Timescale on the axis is in seconds.

References

[1] L. Burdalo, A. Espinosa, A. Garcia-Fornes and A. Terrasa. A Framework for the Development and Evaluation of New Scheduling Policies in RT-Linux. OSPERT Workshop, July 2006.
[2]
[3] The Open Group Base Specifications, IEEE Std 1003.1.
[4]
[5]
[6] R. Russel, H. Franke and M. Kirkwood. Fuss, Futexes and Furwocks: Fast Userlevel Locking in Linux. Ottawa Linux Symposium, 2002.
[7] U. Drepper. Futexes Are Tricky. December 2005.
[8]
[9]
[10] M. Lewandowski, M. Stanovich, T. Baker, K. Gopalan and A. Wang. Modelling Device Driver Effects in Real-Time Schedulability Analysis: Study of a Network Driver. In Proc. of the IEEE Real-Time and Embedded Technology and Applications Symposium, April 2007.
[11]
[12]
[13] U. Drepper and I. Molnar. The Native POSIX Thread Library for Linux. February 2005.
[14]
[15] T.-S. Tia, J.W.-S. Liu and M. Shankar. Algorithms and Optimality of Scheduling Aperiodic Requests in Fixed-Priority Preemptive Systems. Journal of Real-Time Systems, 10(1).
[16] rosentha/cop5641/
[17] R.J. Bril and P.J.L. Cuijpers. Towards Exploiting the Preservation Strategy of Sporadic Servers. Euromicro Conference on Real-Time Systems, Work-In-Progress Session, July 2008.
[18] G. Lipari and C. Scordino. Linux and Real-Time: Current Approaches and Future Opportunities. International Congress ANIPLA, November 2006.
[19]
[20] J.K. Strosnider, J.P. Lehoczky and L. Sha. The Deferrable Server Algorithm for Enhanced Aperiodic Responsiveness in Hard Real-Time Environments. IEEE Transactions on Computers, 44(1), 1995.
[21] Mikael Bertlin. Proportional and Sporadic Scheduling in Real-Time Operating Systems. Master Thesis, March 2008.
[22] T.P. Baker, A.A. Wang and M.J. Stanovich. Fitting Linux Device Drivers into an Analyzable Scheduling Framework. OSPERT Workshop.
[23] W. Shi. Implementation and Performance of POSIX Sporadic Server Scheduling in RTLinux. Master Thesis, Summer 2001.
[24] G. Buttazzo. Rate Monotonic vs. EDF: Judgment Day. Real-Time Systems, 29, 5-26, 2005.
[25] E. Bini and G. Buttazzo. Schedulability Analysis of Periodic Fixed Priority Systems. IEEE Transactions on Computers, 53(11), 2004.
[26] G. Lipari and E. Bini. A Methodology for Designing Hierarchical Scheduling Systems. Journal of Embedded Computing, 1(2), 2004.
[27]
[28] B. Sprunt, L. Sha and J.P. Lehoczky. Aperiodic Task Scheduling for Hard-Real-Time Systems. Real-Time Systems, 1, 27-60, 1989.
[29] C.L. Liu and J.W. Layland. Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment. Journal of the ACM, 20(1), 46-61, 1973.
[30] L. Sha, R. Rajkumar and J.P. Lehoczky. Priority Inheritance Protocols: An Approach to Real-Time Synchronization. IEEE Transactions on Computers, 39(9), 1990.
[31] J. Mathai and P.K. Pandya. Finding Response Times in a Real-Time System. The Computer Journal, 29(5), 1986.


More information

Processor Scheduling. Background. Scheduling. Scheduling

Processor Scheduling. Background. Scheduling. Scheduling Background Processor Scheduling The previous lecture introduced the basics of concurrency Processes and threads Definition, representation, management We now understand how a programmer can spawn concurrent

More information

Chapter 3 Operating-System Structures

Chapter 3 Operating-System Structures Contents 1. Introduction 2. Computer-System Structures 3. Operating-System Structures 4. Processes 5. Threads 6. CPU Scheduling 7. Process Synchronization 8. Deadlocks 9. Memory Management 10. Virtual

More information

REAL TIME OPERATING SYSTEMS. Lesson-10:

REAL TIME OPERATING SYSTEMS. Lesson-10: REAL TIME OPERATING SYSTEMS Lesson-10: Real Time Operating System 1 1. Real Time Operating System Definition 2 Real Time A real time is the time which continuously increments at regular intervals after

More information

Chapter 5: CPU Scheduling. Operating System Concepts 8 th Edition,

Chapter 5: CPU Scheduling. Operating System Concepts 8 th Edition, Chapter 5: CPU Scheduling, Silberschatz, Galvin and Gagne 2009 Chapter 5: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Linux Example

More information

Chapter 6: CPU Scheduling

Chapter 6: CPU Scheduling 1 Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Algorithm Evaluation 6.1 2 Basic Concepts Maximum CPU utilization

More information