A Predictable and IO Bandwidth Reservation Task Scheduler


A Predictable and IO Bandwidth Reservation Task Scheduler

By HAO CAI
B.S., Nankai University, 1992

A Project Submitted in Partial Fulfillment of the Requirements for the Degree of
MASTER OF SCIENCE
In the Department of Computer Science

We accept this project as conforming to the required standard

Dr. Mantis H. M. Cheng, Supervisor (Department of Computer Science)
Dr. Kui Wu, Department Member (Department of Computer Science)

HAO CAI, 2005
University of Victoria

All rights reserved. This work may not be reproduced in whole or in part, by photocopy or other means, without the permission of the author.

Supervisor: Dr. Mantis H. M. Cheng

ABSTRACT

With normal priority-based preemptive scheduling, a conventional RTOS cannot efficiently address today's real time application requirements such as latency, jitter, IO bandwidth and QoS. However, predictability in terms of timing, bandwidth and IO requirements is essential for today's real time applications and should therefore be guaranteed by the RTOS. In this work, we investigate RTAI, the Linux Real Time Application Interface, and RTNET, the real time networking framework for RTAI, and we augment RTAI with an application-friendly, predictable, IO bandwidth reservation based scheduler. We examine several approaches to prioritizing data transmission and reserving IO bandwidth at the network protocol layer based on RTNET. Our measurements demonstrate that, with minimal scheduling overhead, an improved predictable timing and IO bandwidth based scheduler is feasible.

Examiners:

Dr. Mantis H. M. Cheng, Supervisor (Department of Computer Science)
Dr. Kui Wu, Department Member (Department of Computer Science)

Table of Contents

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
1 INTRODUCTION
2 FUNDAMENTALS AND RELATED WORKS
  2.1 RTAI
    2.1.1 Real Time Hardware Abstraction Layer
    2.1.2 Interrupt Handling in RTAI
    2.1.3 RTAI Scheduler
    2.1.4 LXRT and IPC Facilities
  2.2 A PREDICTABLE RTOS ARCHITECTURE
    2.2.1 A Predictable Scheduler
    2.2.2 IO Bandwidth Modeling
  2.3 RTNET, REAL TIME NETWORKING
  2.4 SUMMARY
3 SYSTEM DESIGN
  3.1 OVERVIEW
  3.2 OUR SCHEDULER
    3.2.1 Scheduler Interface
    3.2.2 Class, Constraint and Rank Modeling
    3.2.3 Generic Scheduling Method
    3.2.4 IO Bandwidth Task Scheduling
    3.2.5 Task State Transitions
    3.2.6 Real Time Task Descriptor
    3.2.7 Timer Support and Timing Functions
    3.2.8 Context Switching
    3.2.9 Scheduling Routine
  3.3 SCHEDULER EXTERNAL INTERFACES
  3.4 PREDICTABLE NETWORKING IO TRANSMISSION
    3.4.1 Real Time Socket Interface
    3.4.2 Priority Queuing and Transmission
4 IMPLEMENTATION
  4.1 SCHEDULER IMPLEMENTATION
  4.2 RTNET PROTOCOL MODIFICATION
  4.3 SYSTEM TEST MODULES
5 SYSTEM TEST AND MEASUREMENT
  5.1 OVERVIEW
  5.2 SCHEDULING LATENCY AND PERIODIC JITTER
  5.3 IO TASK MEASUREMENT
  5.4 PPP MEASUREMENT
  5.5 DEVICE TASK MEASUREMENT
  5.6 TASK PREEMPTION
6 CONCLUSION
7 REFERENCE
8 APPENDIX
  8.1 RTHAL DATA STRUCTURE
  8.2 REAL TIME TASK DATA STRUCTURE
  8.3 REAL TIME SOCKET BUFFER DATA STRUCTURE
  8.4 RTAI AND RTNET CONFIGURATION AND INSTALLATION PROCEDURE
  8.5 IO TASK NETWORK TRANSMISSION CODE ROUTINE
  8.6 SCHEDULER FUNCTION LIST
  8.7 SYSTEM DEFINITION
  8.8 IPC QUEUING
  8.9 TASK CONTEXT SWITCH ROUTINE
  8.10 IPC AND PRIORITY INHERITANCE SEMAPHORE EXAMPLE

List of Tables

Table 2-1 RTAI Performance Measurement Result
Table 2-2 IO Bandwidth Task Modeling and Scheduling Plan Example
Table 3-1 Maximum IO Processing Power Estimation
Table 3-2 Priority Assignment of Outgoing Network Queue

List of Figures

Figure 2-1 Simplified RTAI and Linux Kernel Architecture
Figure 2-2 RTAI Interrupt Dispatcher Processing Flow
Figure 2-3 RTNET Protocol Stack Architecture
Figure 3-1 System Architecture, Modules and Interfaces
Figure 3-2 Real Time Task Scheduling Class Hierarchy
Figure 3-3 Scheduling Classes and Constraints Mapping
Figure 3-4 Call Flow of IO Bandwidth Task Yield and Yield Checking
Figure 3-5 Call Flow of Setting Yield Time for IO Bandwidth Task
Figure 3-6 Simplified Task State Transition Table
Figure 3-7 Initial Task Stack Content
Figure 3-8 Code Example to Program and Setup 8254 Timer Chip
Figure 3-9 Real Time Timer Interrupt Handler Execution Flow
Figure 3-10 Scheduler External Interface
Figure 3-11 Proposed Predictable Network IO Transmission Architecture
Figure 3-12 Real Time Socket Interface
Figure 3-13 Priority Outgoing Queue Data Structure and Relationship
Figure 5-1 Generic Test Architecture
Figure 5-2 Interrupt and Scheduling Latency Measurement
Figure 5-3 Light Load Scheduling Latency
Figure 5-4 Medium Load Scheduling Latency
Figure 5-5 Medium Load Periodic Scheduling Jitter
Figure 5-6 IOB Task Combined Test Mechanism
Figure 5-7 IOB Task Bandwidth Measurement
Figure 5-8 IOB Task Load Comparison
Figure 5-9 IOB Task Load Ratio Comparison
Figure 5-10 IOB Task Bandwidth Ratio
Figure 5-11 IOB Task Segment Comparison
Figure 5-12 PPP Task Delay
Figure 5-13 PPP Task Delay and Timeline Ratio
Figure 5-14 Delay for Short Timeline PPP Task
Figure 5-15 Delay and Deadline Ratio for Short Timeline PPP Task
Figure 5-16 Device Task Latency
Figure 5-17 Device Task Jitter
Figure 5-18 Preemption Under Stress Load (75%)
Figure 8-1 RTAI RTHAL Data Structure
Figure 8-2 Real Time Task Descriptor Definition
Figure 8-3 Real Time Socket Buffer Data Structure
Figure 8-4 Procedure to Configure and Rebuild Linux, RTAI and RTNET
Figure 8-5 IOB Network Transmission Code Routine
Figure 8-6 Scheduler Function Prototype List
Figure 8-7 System Definition
Figure 8-8 Semaphore Based IPC Queuing
Figure 8-9 Task Context Switch Routine
Figure 8-10 Simplified IPC Semaphore Acquisition Call Flow
Figure 8-11 Simplified IPC Priority Passing and Inheritance Call Flow

Acknowledgements

I would like to express my sincere appreciation to my supervisor, Dr. Mantis H. M. Cheng, for guiding me as I advanced in this interesting research area. His advice, encouragement and guidance have continuously influenced and helped me, both while studying on campus and while working in industry. My study and stay at the University of Victoria is one of the best parts of my memory. I also thank Dr. Kui Wu for all his kind support and help, and Richard Liu for all his kind help. Special thanks go to the RTAI and RTNET open source developer communities. Lastly, and most importantly, I thank my wife, May; without her understanding, encouragement, and tireless support in taking care of our whole family, it would have been extremely hard for me to complete my study plan.

1 Introduction

In recent years, the Linux operating system has attracted much attention and popularity both in the research community and in industry. Linux is a full-featured multitasking operating system with a monolithic kernel, and it provides preemptive but fair task scheduling to achieve good throughput for server and desktop applications. The disadvantage of Linux is that it cannot guarantee bounded response times or provide the predictability that is mandatory for real time and embedded applications, such as robotic device control, air traffic and flight control. Proprietary RTOSes, for instance QNX, Wind River's VxWorks and LynuxWorks' LynxOS, have been adopted in industry for many years. Compared with commercial RTOSes, Linux is fully open source, and extremely popular and active in development communities. It supports a broad range of hardware and devices, is POSIX-compliant and has many applications already developed. Therefore, both industry and the research community have devoted great effort to various approaches for extending Linux with real time performance. Fundamentally, all real time extensions to Linux fall into two approaches. The internal approach enhances the non-preemptive Linux kernel with preemption; TimeSys Linux, LynuxWorks BlueCat, MontaVista Linux and the newly released standard Linux 2.6 are all examples of this category. The external approach, called interrupt abstraction, inserts a small real time kernel underneath Linux and runs the standard Linux kernel as its lowest priority real time task; the best known examples are RT-Linux and RTAI. Experimental measurements demonstrate that these approaches have greatly improved the predictability of execution timing; in particular, the external approach has brought hard real time performance into the standard Linux world. Like most RTOSes, both RT-Linux and RTAI are designed with a priority-based preemptive scheduler. Priority is simple for task scheduling from the perspective of the scheduler, but it is not intuitive for the application designer. It is difficult, and sometimes impossible, to capture various mixed and complicated real time constraints - such as latency, jitter, and bandwidth reservation - by priority assignment. This is because there is

no clear relationship between priority and these constraints: priority is relative, whereas latency, jitter and bandwidth are absolute. A more flexible and programmer-friendly scheduler is needed if these applications are going to meet their real time constraints. In [1], "A Predictable Real Time Operating System Architecture", Cheng proposed a predictable and IO bandwidth based scheduling approach to address these constraints, particularly jitter, latency and IO bandwidth reservation. In this project, we augmented RTAI Linux with a similar scheduling policy and performed various comparisons and measurements. Besides implementing the new scheduler, we also investigated a TDMA-based real time network protocol stack for RTAI, RTNET, and proposed an alternative transmission layer implementation to demonstrate IO bandwidth task scheduling without TDMA. In terms of the timing and bandwidth requirements specified by an application, our RTOS scheduler provides better predictability than RTAI. The remainder of the report is organized as follows. Chapter 2 describes the fundamental background and the work related to this project. Chapter 3 presents the system design and the changes to RTAI required by the new scheduler. Chapter 4 briefly discusses some implementation and platform details. Chapter 5 summarizes our test approaches and measurement results. Finally, we conclude our findings in Chapter 6.

2 Fundamentals and Related Works

2.1 RTAI

RTAI [3], which stands for Real Time Application Interface, is an external approach to incorporating real time capabilities into standard Linux. Table 2-1 summarizes the performance figures of RTAI and shows that the external approach can provide hard real time performance. It presents a combined result set from work performed by Peter Laurich and David Beal of Metrowerks, Inc. [7] [8] [9]

Table 2-1 RTAI Performance Measurement Result
Interrupt Response Time: 15 us
Context Switch: 7 us
Maximum Periodical Task Rate (UP): 30 KHz
Periodical Task Jitter: 30 us
Maximum Load Preemption Latency: 40 us

Between the two popular implementations of the external approach, we chose RTAI as the development platform for this project for the following reasons: it is the most active external approach project; it is fully open to public developer communities; it runs on numerous platforms and supports many devices; it has a rich set of RTOS support facilities for application development both in kernel and user space (for example, RTAI LXRT allows RT applications to be developed in Linux user space and is one of RTAI's most important features); it is used by a vast number of real time application projects such as RTNET [5]; and it clearly separates itself from the standard Linux kernel through RTHAL (Real Time Hardware Abstraction Layer).

To achieve hard real time performance without directly modifying the standard Linux kernel itself, RTAI provides RTHAL to interface with standard Linux. It traps Linux hardware interrupt related functions and substitutes them with RTAI alternatives. Figure 2-1 illustrates the overall architecture of RTAI. A discussion of the key RTAI components follows.

Figure 2-1 Simplified RTAI and Linux Kernel Architecture (diagram: the standard Linux kernel and the RTAI kernel both reach the hardware, processor and peripherals through RTHAL; the connections labelled A and B are explained below)

2.1.1 Real Time Hardware Abstraction Layer

RTHAL provides a generic mechanism to separate RTAI from the standard Linux kernel, and it takes hardware and interrupt control away from Linux whenever necessary. An RTAI kernel patch installs RTHAL into the standard Linux kernel. RTHAL performs three primary functions:

1. gathering and encapsulating all the pointers to the required hardware and interrupt related Linux internal data and functions into a single structure, thus allowing easy trapping of all real time related Linux kernel functions so that they can be dynamically switched by RTAI,

2. substituting the above trapped functions with RTAI equivalents, and

3. replacing the original function calls in the standard Linux kernel with the equivalent RTHAL function pointers.

In Figure 2-1, connection A shows how RTHAL provides an indirect connection between the hardware and the standard Linux kernel before RTAI is loaded. Once the RTAI module is loaded into the Linux kernel, the connections labelled B are in place and they take control of the hardware away from the Linux kernel. The behavior of Linux is almost unaffected by RTHAL except for a slight performance loss due to enabling and disabling interrupts (calling CLI and STI) and calling the equivalent flags related functions. The rthal structure is the central data structure of RTHAL and its detailed definition is given in Appendix 8.1.

2.1.2 Interrupt Handling in RTAI

Using RTHAL, RTAI can take over interrupt control from Linux, as illustrated by connections B in Figure 2-1. Once loaded, all interrupt related function calls switch to their RTAI equivalents and, as a result, the RTAI interrupt dispatcher handles all hardware interrupts. RTAI provides transparent interrupt control to and from Linux. In addition, it introduces a number of abstractions and facilities for real time tasks, including interrupt allocation and timer control, and a generic System Request (SRQ) mechanism to call on the standard Linux facilities. This SRQ facility can be used to dynamically extend the services provided to user space programs. RTAI virtualizes hardware interrupts in Linux by replacing cli() and sti() with RTAI alternatives. The RTAI alternative functions record Linux's view of the hardware interrupt state and deliver pending interrupts at the appropriate time. Figure 2-2 illustrates the execution path from the moment an interrupt occurs to the moment control is returned to the interrupted program. The dispatcher checks whether the related dispatcher and handler have been registered and then processes the interrupt appropriately. (A detailed explanation of RTAI interrupt handling can be found in an RTAI internals presentation written by Paolo Mantegazza [3].)
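To make the trapping idea concrete, the sketch below shows, in simplified form and with illustrative names (the real rthal structure is reproduced in Appendix 8.1 and contains many more members), how the kernel's interrupt-disable path can be rerouted through function pointers so that RTAI's "soft" versions only record Linux's view of the interrupt flag instead of touching the hardware:

    struct rthal_sketch {
        void (*disint)(void);              /* what the kernel's cli() now calls */
        void (*enint)(void);               /* what the kernel's sti() now calls */
        /* ... pointers to other interrupt and timer related functions ...      */
    };

    static void (*linux_cli)(void);        /* saved original Linux routines */
    static void (*linux_sti)(void);
    static volatile int linux_irq_enabled = 1;

    /* RTAI's soft replacements only record Linux's view of the interrupt
     * flag; the hardware interrupts stay under RTAI's control. */
    static void rtai_soft_cli(void) { linux_irq_enabled = 0; }
    static void rtai_soft_sti(void) { linux_irq_enabled = 1; /* replay pended IRQs */ }

    void rtai_mount(struct rthal_sketch *rthal)
    {
        linux_cli = rthal->disint;         /* remember the Linux originals ...   */
        linux_sti = rthal->enint;
        rthal->disint = rtai_soft_cli;     /* ... and reroute Linux through RTAI */
        rthal->enint  = rtai_soft_sti;
    }

    void rtai_umount(struct rthal_sketch *rthal)
    {
        rthal->disint = linux_cli;         /* restore the originals on unload */
        rthal->enint  = linux_sti;
    }

Because only these pointers change, unloading RTAI simply restores the saved originals and Linux behaves exactly as it did before RTAI was mounted.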

Figure 2-2 RTAI Interrupt Dispatcher Processing Flow (flow: an interrupt enters the RTAI dispatcher, registers are saved, and the interrupt is routed to the RTAI real time interrupt handler, the SRQ dispatcher or the Linux dispatcher and handler, after which registers are restored and execution resumes)

2.1.3 RTAI Scheduler

Three schedulers come with the RTAI distribution: a UP (uniprocessor) scheduler, an SMP (symmetric multiprocessor) scheduler and a MUP (multi-uniprocessor) scheduler. These schedulers can be used either in one-shot or periodic mode, which corresponds to the way the hardware timer is programmed. In this project we replace the UP scheduler with our proposed predictable and bandwidth reservation scheduler, and therefore we only discuss the UP scheduler below. By default, the UP scheduler implements a priority based preemptive scheduling policy; the highest priority task is chosen to run until it completes, yields or becomes blocked. The Linux kernel runs at the lowest priority, 0x7fffffff, while real time task priorities range from 0 to 0x3fffffff, 0 being the highest priority. The scheduler is

implemented as a Linux loadable kernel module. Whenever it is loaded, it mounts and activates RTHAL and takes over interrupt control from the Linux kernel. The scheduler can then be triggered either by related system calls or by the timer interrupt event. The RTAI scheduler also provides timing, task, semaphore and IPC services to real time applications.

2.1.4 LXRT and IPC Facilities

RTAI supports many different IPC mechanisms, a dynamic memory management service, POSIX standard compatible threads, and a Linux user space real time extension called LXRT. FIFO, originally from RT-Linux, is an asynchronous, non-blocking one-way communication channel; RTAI has enhanced it to support multiple readers and writers, dynamic symbolic identifiers, and event notification with signals. RTAI also supports semaphores, shared memory, message queues and mailboxes. The LXRT module provides a symmetrical user level API inside Linux. It allows real time applications to be developed, tested and debugged easily in user space, and user level real time tasks can eventually be moved into kernel space without modification. By using a real time agent, LXRT allows applications to switch between soft real time and hard real time. When switching into hard real time mode, a user task cannot make system calls, and it must also make sure that the memory it uses stays in RAM; hence it must disable memory paging by calling mlockall(). Such tasks are removed from the Linux run queue and are scheduled by the RTAI scheduler instead.

2.2 A Predictable RTOS Architecture

Although RTAI provides bounded hard real time performance, as stated in the previous sections, it still uses a conventional priority based scheduler, which is not an intuitive way for an application designer to specify real time constraints. For example, it is not clear how to specify a periodic data sampling task with a bounded jitter constraint, a soft real time task that requires 50 KB/s of networking IO bandwidth and processing, or a time critical task with complex timing cycles and a deadline.

Therefore, an improved scheduler is required to provide the guarantees needed by real time applications while also allowing an arbitrary, mixed workload. Furthermore, a good scheduler should provide an intuitive interface that is easy for application designers to understand, and it should capture real time constraints more accurately. Much research has been performed on these topics [1] [13] [14] [16] [23] [28] [29] [30]. In this work, we implement and test the scheduling policy presented by Cheng [1]; we describe its main concepts briefly in the sections below.

2.2.1 A Predictable Scheduler

In [1], Cheng proposes a static four-level scheduler to address the different characteristics of a mixed real time application workload, and he presents a generic approach to modeling IO bandwidth requirements.

1. The Device level is mainly designed for data sampling tasks with a fixed period and a jitter tolerance requirement. These periodic tasks usually require very short processing time, but with minimum period jitter.

2. The Planned level addresses periodic tasks with complex timing requirements, for example, a task with multiple non-uniform execution phases within a single cycle.

3. The IO level addresses tasks that are IO bandwidth and latency bounded. Each task at this level has a maximum IO bandwidth and a latency requirement. The bandwidth is translated into execution timing and IO buffering, and the latency is translated into a segment size for each cycle. For example, a bandwidth of 8 KB per second means a buffer of 8 KB is adequate provided the IO task is scheduled once per second. A latency of 250 milliseconds means the same IO task will be scheduled 4 times a second with a bandwidth of 2 KB per cycle. Without a latency requirement, the same IO task may suffer a worst case latency of 2 seconds between two successive 8 KB IO operations.

4. The Sporadic level is designed for aperiodic tasks with a relative urgency requirement. There is a 1-to-1 mapping between task priority and urgency.

Device level tasks have the highest priority while sporadic tasks have the lowest. Higher-level tasks can preempt lower ones, and preemption within the same scheduling level is not allowed for level 1 to level 3 tasks. The execution timing and IO bandwidth utilization of level 1-3 tasks are guaranteed; any violation should be detected and reported.

2.2.2 IO Bandwidth Modeling

With the rapid development of processor technology, real time applications nowadays are increasingly IO bandwidth hungry; therefore, modeling the IO bandwidth requirement in the scheduling policy becomes crucial. In [1], Cheng proposed a novel but simple method for modeling IO bandwidth utilization and latency. We briefly describe the main concept below. An IO task specifies two parameters, bandwidth and segment size. Assuming a task requires an IO bandwidth of 8 KB/s, several scheduling plans become feasible by adjusting the segment size, as shown in Table 2-2.

Table 2-2 IO Bandwidth Task Modeling and Scheduling Plan Example
Options | Segment Size (KB) | Period (ms) | Bandwidth (KB/s) | CPU Utilization | Latency (ms)

With the assumption of a 100 KB/s maximum IO processing capability, this IO task can be scheduled under any of the four options, each with a different segment size. Studying the results, we can conclude that no matter which option is chosen, the CPU utilization is always 8% in each period, which matches the task's bandwidth requirement (8/100 = 8%). Another interesting observation is that the latency varies with the segment size used in each period; the latency is the maximum gap between two consecutive IO segments. Therefore, using the above modeling, an IO bandwidth task can be scheduled as a periodic task with its bandwidth and latency enforced.
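To make the arithmetic concrete, consider the 2 KB segment option for this 8 KB/s task under the assumed 100 KB/s maximum IO processing rate (latency here is the worst-case gap between two consecutive segments, assuming each segment is transferred in one contiguous burst per period):

    period      = segment / bandwidth    = 2 KB / (8 KB/s)   = 250 ms
    CPU time    = segment / max IO rate  = 2 KB / (100 KB/s) = 20 ms per period
    utilization = CPU time / period      = 20 ms / 250 ms    = 8%
    latency     = period - CPU time      = 250 ms - 20 ms    = 230 ms

Doubling the segment size doubles both the period and the worst-case gap while the 8% utilization stays constant, which is exactly the trade-off Table 2-2 illustrates.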

2.3 RTNET, Real Time Networking

RTNET [5] is an open source project that implements a real time networking framework based on RTAI. By using TDMA-based Real Time Media Access Control (RTMAC) within a closed network environment, RTNET can provide predictable, deterministic network communication. This is because access to the communication medium is controlled by TDMA (Time Division Multiple Access); each station uses its assigned time slots for communication. Inside the TDMA RTMAC, a priority queue is also enforced: all outgoing packets are delivered based on their priority level. Additionally, RTNET implements a real time driver model and a UDP/IP stack in the RTAI kernel. Via a virtual NIC driver based on RTMAC, RTNET can proxy TCP/IP traffic to the Linux TCP/IP stack and to user space applications. Figure 2-3 illustrates the protocol layers of RTNET.

Figure 2-3 RTNET Protocol Stack Architecture (layers, top to bottom: RT applications and the Linux TCP/IP stack via a VNIC, the RT UDP/IP stack, the RTMAC/TDMA layer, the real time Ethernet drivers, and the network hardware)
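The priority queuing that RTMAC enforces on outgoing packets can be pictured with the following generic sketch; this is an illustration of the idea only, not RTNET's actual code, and the structure names and the number of priority levels are assumptions:

    #define TX_PRIO_LEVELS 32

    struct tx_pkt {
        struct tx_pkt *next;
        /* packet payload omitted */
    };

    struct tx_prio_queue {
        struct tx_pkt *head[TX_PRIO_LEVELS];   /* one FIFO per priority, 0 = highest */
        struct tx_pkt *tail[TX_PRIO_LEVELS];
    };

    static void tx_enqueue(struct tx_prio_queue *q, struct tx_pkt *p, int prio)
    {
        p->next = 0;
        if (q->tail[prio])
            q->tail[prio]->next = p;
        else
            q->head[prio] = p;
        q->tail[prio] = p;
    }

    /* The transmitter repeatedly takes the head of the highest non-empty
     * priority level and hands it to the Ethernet driver. */
    static struct tx_pkt *tx_dequeue(struct tx_prio_queue *q)
    {
        int prio;

        for (prio = 0; prio < TX_PRIO_LEVELS; prio++) {
            struct tx_pkt *p = q->head[prio];
            if (p) {
                q->head[prio] = p->next;
                if (!q->head[prio])
                    q->tail[prio] = 0;
                return p;
            }
        }
        return 0;
    }

A similar structure is used for our own transmission path in Chapter 3, where the outgoing stack manager task drains the priority queues toward the Ethernet driver.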

2.4 Summary

From the above discussion, we can conclude that, in order to address a mixed real time application workload and to achieve predictability and determinacy, we need to improve upon a priority-based scheduler with a time- and IO-based scheduler. Furthermore, today's IO bandwidth hungry real time applications require bandwidth guarantees for both CPU processing and IO resources. In the next chapter, we present the detailed design of a five-level scheduler implemented using RTAI, and we propose an enhancement to the RTNET protocol stack to demonstrate a simple IO bandwidth and priority based packet delivery mechanism without the support of RTMAC and TDMA. The design of our scheduler's API allows an application designer to express absolute real time constraints, such as jitter, latency and bandwidth, directly, without translating them into unintuitive relative priorities. We show how these constraints can be implemented by a time-triggered, priority-based scheduler that remains compatible with the existing priority-based scheduler.

3 System Design

3.1 Overview

All RTAI and RTNET components are implemented as individual Linux loadable kernel modules, so a module can be replaced by one with the same functionality but enhanced application interfaces. In this work, we modified the RTAI UP scheduler and the RTNET real time network driver modules to support a hierarchical, predictable, IO bandwidth reservation based scheduler. Figure 3-1 depicts the system level overview; the highlighted components are the modified modules, and the dotted lines with marked text identify the related functional interfaces.

Figure 3-1 System Architecture, Modules and Interfaces (real time tasks, the Linux kernel, the RTNET stack with its socket/UDP/IP layers, stack manager and real time Ethernet driver, the IPC facilities, the RT scheduler, and RTHAL above the hardware; interfaces I-V are described below)

The RTAI UP scheduler is replaced with a new, backward compatible scheduler that implements our proposed scheduling policy. Task and time related functions are provided to the other application components via interface I. RTHAL provides the hardware timer facility, context switch support, interrupt handling and global synchronization mechanisms via interface III. Intentionally, the IPC interface II remains the same for both application tasks and the scheduler, to avoid massive code and design changes, especially with respect to priority inversion and inheritance. The RTNET module contains the whole RT networking protocol stack shown in Figure 3-1. At the top level, RTNET provides the socket interface via the real time driver module, RTDM. In this work, we do not use RTMAC and TDMA because a dedicated multi-node real time networking environment was not available. A simplified IO buffer reservation and prioritized transmission queuing scheme is designed to demonstrate outgoing network transmission. Interface IV provides the IO buffer and priority configuration interface to applications. Data packets, along with these parameters, traverse to the bottom of the stack and are queued in the corresponding priority queue of the multiplexed RTNET device component. The outgoing stack manager task fetches outgoing packets from the queue and passes them to the Ethernet driver, and finally to the IO device DMA memory. An alternative, simpler design is no queuing at all: thanks to our CPU bandwidth reservation scheduling, the networking subsystem can be implemented as a set of IO bandwidth tasks.

3.2 Our Scheduler

Figure 3-2 Real Time Task Scheduling Class Hierarchy

(Figure 3-2 shows the class hierarchy: bounded tasks comprise the System class (Class 0) with a priority, the Device class (Class 1) with a period and jitter, the Aperiodic class (Class 2) with a deadline, and the IO class (Class 3) with a bandwidth and latency; unbounded tasks (Class 4), including the Linux kernel, carry a priority.)

We have designed five scheduling classes to address the different concerns of a mixed real time application workload. As illustrated in Figure 3-2, all application tasks are classified into either the bounded or the unbounded category, and each category is further subdivided based on different real time characteristics and constraints. Bounded tasks take precedence over unbounded tasks. Tasks in a lower scheduling class have a higher rank (similar to priority) than tasks in a higher scheduling class, and tasks with a higher rank always preempt lower rank tasks whenever they become runnable. We discuss our notion of rank further in later sections.

1. Class 0

Class 0, reserved for system tasks such as a watchdog task, is the highest scheduling class. A class 0 task can preempt tasks in any other class. Within this class, a priority may be assigned to each system task, and scheduling for this class is based on this assigned priority. The rationale is that tasks such as a watchdog may need to preempt any other task to handle a resource violation, to detect overload conditions, and so on. For example, a watchdog task can be designed to detect a CPU usage violation, report it and then terminate the violating tasks, or to administer run time admission control.

23 15 2. Class 1 Bounded periodic data sampling task Class 1, called Device class, is primarily designed for periodic sampling tasks with stringent jitter requirement. Tasks running at this class usually have bounded latency and jitter requirement with very short execution time. Device tasks can preempt any other class tasks including another Device task, except tasks in class 0. Each Device task has a period and a jitter. The jitter is specified in (+/-) percentage of the period and will be converted into internal clock units. The lower the jitter tolerance value, the higher the rank (priority) within this class. When multiple periodic Device tasks are ready to run at the same time, the scheduler will choose tasks with shorter period and lower jitter value first. A device task may be scheduled earlier than its specified period, as permitted by its jitter tolerance. (Note: All device tasks will eventually become runable at the same time. Those with shorter periods have precedence over longer ones.) 3. Class 2 Bounded, aperiodic task Some real time applications need handle some aperiodic events either immediately, or within bounded response time. Probably the most suitable model for this type of bounded, aperiodic tasks is deadline driven scheduling. In our work, we simplify this with a static time-based scheduling class enhanced with a soft deadline. 1 There are two parameters to specify in this case: the first specifies a start time and the second specifies an expected completion deadline, and both are relative to current system time. The second parameter is then converted into a rank; the earlier the deadline, and the higher the task rank. Class 0, class 1 and other higher rank class 2 tasks can preempt lower rank class 2 tasks. After a class 2 task completes its work, it may reset with a new future start time and deadline by itself or by another task. In our current design, we allow a deadline change only on non-ready tasks or on the currently running task. 1 In our current implementation, since our focus is not about deadline-driven scheduling, our scheduler does not enforce hard deadlines. We use deadlines only as a form of soft priority.

4. Class 3 - IO bandwidth task

Class 3 has lower priority than the above scheduling classes but higher priority than the unbounded class. An IO task must specify its required IO bandwidth in KB/s and its segment size in KB. As discussed in previous sections, IO tasks are modeled as periodic tasks with a CPU reservation. The scheduler provides a processing guarantee until the reserved IO bandwidth is consumed. If an IO task is preempted by another, higher rank task, it resumes execution for the remaining bandwidth whenever feasible. IO tasks with a smaller segment size (lower latency) have a higher rank, and higher rank IO tasks can preempt IO tasks with a lower rank. When multiple IO tasks compete for the CPU at the same time, the scheduler chooses the IO tasks with the smaller segment size to run first.

5. Class 4 - Unbounded task

Finally, unbounded tasks, class 4, can be either periodic or aperiodic tasks that do not have a bounded latency or jitter requirement. All original RTAI RT tasks, periodic or aperiodic, belong to this class. The unbounded class is the lowest priority real time scheduling class. Each unbounded task has a priority, and the scheduling policy for this class is preemptive and priority-based. By default, the Linux kernel and its applications, viewed as a single task by our scheduler, are scheduled to run in this class. [Footnote 2: We designed this class so that existing RTAI code can run without any changes. Our work mainly focuses on the enhancement of bounded tasks.]

3.2.1 Scheduler Interface

A good RTOS scheduler should accurately capture unique application constraints and characteristics, and it should provide an easy to use, application designer friendly programming interface. In our work, this implies providing an intuitive API for

the above proposed classes. Both task and timing functions are essential, and other functions such as IPC are required as well. We discuss a few essential API functions in detail in the following sections; a complete list of our scheduler API functions is given in Appendix 8.6. In section 3.2.5, we present the task states and transitions in detail.

int rt_task_init(RT_TASK *task, void (*rt_thread)(int), int data, int stack_size, int priority, int uses_fpu, void (*signal)(void))

A task is initially created by calling the rt_task_init() function. Before this call, the related task context (RT_TASK, or struct rt_task_struct) should be pre-allocated, and its address is passed as the task argument. The rt_thread argument is the address of the task routine to be executed, and the data argument is used to pass data to that routine. The stack_size is the stack size used by the new task, and priority is the priority of the task. By default, a task is created in class 4 and is compatible with the existing RTAI functions. After a successful creation, a task is in the suspended state.

int rt_task_make_dev(RT_TASK *task, RTIME start_time, RTIME period, RTIME jitter)

After creating a task, an application designer can change it to any other class. For example, to create a Device task, we use the above API function. The parameter start_time is the number of clock ticks to wait, relative to the current system time, before this task is allowed to run; the absolute resumption time in clock ticks is then stored in the task's context descriptor. The parameter period is the task period in clock ticks, and jitter is the task's jitter tolerance as a percentage, which is converted internally into a jitter in clock ticks. The task then becomes a Device task and moves from the suspended state into the delayed state. While in the delayed state, the task is sorted into a delayed queue by its resumption time. The resume time field in the task context descriptor contains the task's next absolute starting time and is updated accordingly after each period. If this API call is successful, then the class, the period, the jitter, the base rank and the rank parameters will

26 18 be updated accordingly. This task will move to the ready state and wait for a processor when its resumption time is reached. int rt_task_make_iob(rt_task *task, RTIME start_time, int bandwidth, int segment) Using the above API, an application designer can create a class 3 IO bandwidth task. The start_time is the time in clock ticks relative to current time for this task to wait before it starts running. The bandwidth is the required IO bandwidth in KB/S and the segment is the size of each contiguous segment in KB. If this call is successful, then the class, the bandwidth, the segment, the resumption time, the period (related to IO task latency), the IO bandwidth quantum, the base rank and the rank in the task context descriptor are all updated accordingly. As a result, this IO task is moved into the delayed state and would be queued on the delayed queue. There is a limit on the total run time load of all IO bandwidth tasks, which is defined in the Appendix 8.7 and the admission control is enforced at task creation time. Admission control for other class tasks can be enhanced similarly, but it is not covered in our current design. int rt_task_make_edf(rt_task *task, RTIME start_time, RTIME deadline) void rt_task_reset_resume_deadline(rtime new_resume_time, RTIME new_deadline) The API call rt_task_make_edf () is used to create a class 2 task. The start_time specifies the time in clock ticks relative to current time (the resumption time), and to wait before this task starts running. The deadline specifies the expected completion time relative to current time in clock ticks (the deadline should be equal to or greater than the start time). It also informally specifies when a class 2 task should complete it assigned work and also serves as a relative rank parameter besides the resumption time. (In our work, a deadline is treated as a time reference rather than an actual deadline. Therefore, dynamic scheduling based on deadlines or detecting missing deadlines is not currently addressed.). A class 2 task becomes ready when its resumption time has passed. In the

27 19 case when multiple class 2 tasks are ready at the same time 3 (), their respective deadline reference will be used for arbitration; the one with the earliest completion time will be chosen to execute first. If this call is successful, a class 2 task is put on the delayed queue and all parameters such as the class, the resume time, the base rank, and the rank are all updated accordingly. Currently running class 2 tasks can set up a new resume time in the future and a new deadline using the rt_task_reset_resume_deadline () function. A running task would then move into delayed state immediately and will continue executing when this function returns after its resumption time is reached. Class 2 tasks without resetting its resumption time and deadline should terminate upon completion. void rt_set_sched_class (RT_TASK *task, int class, int priority) int rt_change_prio (RT_TASK *task, int priority) There is no explicit task creation function defined for class 0 tasks. However, a user can use above first API call to create a class 0 task from an initialized class 4 task. These two API calls apply only to class 0 and class 4 tasks. int rt_task_make_periodic (RT_TASK *task, RTIME start_time, RTIME period) void rt_task_wait_period (void) void rt_task_yield (void) Both class 0 and class 4 tasks can call rt_task_make_periodic () to become periodic tasks. The start_time specifies the resumption time in clock tick relative to current system time and the period specifies the period in clock ticks. By design, class 1 Device task and class 3 IO task are periodic task. The rt_task_wait_period () call can be used by all currently running periodic tasks except class 3 IO tasks (IO task has its own API, discussed later in IO task scheduling) to delay their execution to the next 3 Our timing precision is one half of a system clock tick in periodic mode and is the calibrated 8254 timer setup latency in one-shot mode.

28 20 period. After calling this function, the periodic task s resumption time is updated with its period and it would move into the delayed state. The rt_task_yield () API function is provided for a running unbounded aperiodic task to yield its execution to other tasks with same rank. (Note. For class 2, bounded aperiodic tasks, since the rank contains converted deadline reference relative to its original resumption time, it is meaningless to yield its execution; therefore yielding for class 2 tasks is not allowed.) A yielded task remains on the ready queue Class, Constraint and Rank Modeling To simplify our implementation, we convert all application level scheduling parameters into an internal rank value (much like a priority value). The rationale behind this implementation decision is: to remain consistent with existing RTAI source codes such as scheduling logic processing, queuing, semaphore and message queue and IPC operations; simple, easy to maintain - all processing in related source code module would only need be aware of this uniform and generic rank argument; to separate the implementation of scheduling logic from high level application oriented parameters or data structures and to take advantages of both of them; less runtime scheduling (including IPC operations) overhead under regular load, no complicated scheduling logic processing based on different class and scheduling parameters such as jitter, or latency, or bandwidth or deadline, a single parameter comparison is adequate; and to combine the advantage of flexibility of our user oriented API with simplicity of conventional priority based scheduling. The conversion takes place at task creation time. Task rank is used when runable tasks are scheduled. A disadvantage of this approach is that our rank mapping may not accurately capture all the intended application real time constraints (e.g., task

29 21 deadline). But, our mapping is crucial in our implementation. We sacrifice accuracy for efficiency. One benefit of conventional priority based preemptive scheduling is its simplicity and low scheduling overhead. The scheduler just need check the task s run time priority and make context-switching decision accordingly. Other OS facilities such as IPC would also benefit from this simple priority based scheme. Application friendly scheduling algorithms (e.g., deadline scheduling), although they can address particular application constraints, usually incur extra run time processing overhead due to complex scheduling logic. When replacing an existing scheduler with another, other RTOS facilities such as IPC might have to be modified as a result. In order to minimize run time scheduling overhead and changes to other parts of the previous scheduler, a mapping from classes and application constraints to a linear ordering, called rank, is devised. All ready tasks are scheduled based on their rank. All other user-level scheduling parameters, such as task class, deadline, latency, jitter or priority will be combined together into a single rank parameter. This simplification has the side effect that the scheduling overhead may increase as the number of tasks increases due to a single run queue. This is an implementation choice and an alternative approach of using per class run queue could be used instead. The obvious benefits of this approach are simpler scheduler processing logic and minimum changes to source code to existing scheduler and IPC facilities in RTAI. Our scheduler still need do extra work for dealing with IO task due to their unique nature. Figure 3-3 Scheduling Classes and Constraints Mapping

(Figure 3-3 shows how each class's constraint - jitter for class 1, deadline for class 2, latency for class 3, and priority for classes 0 and 4 - is combined with the class number into the 4-byte base rank and rank parameters, with the class in the high bits and the constraint in the low bits.)

As depicted in Figure 3-3, the scheduling class is mapped into the upper four bits of the most significant byte of the four-byte rank attribute in the real time task data structure. A rank is treated as a 32-bit unsigned integer for comparison; the lower the rank value, the higher the priority. For class 1 tasks, the jitter constraint (expressed in clock ticks) is encoded into the lowest 24 bits of the rank attribute; the lower the jitter tolerance, the lower the rank value. For class 2 tasks, the absolute timeline (deadline reference) constraint is encoded into the lowest 24 bits; the earlier the timeline, the lower the rank value. For class 3 tasks, the latency, converted from the segment size, is encoded into the lowest 24 bits; the shorter the latency, the lower the rank value. Finally, the priority constraint for class 0 and class 4 tasks is encoded directly; the higher the priority, the lower the rank value. Each real time task descriptor (the real time task data structure and context defined in section 3.2.6) has two rank parameters, the base rank and the current (actual) rank. Both are initially assigned the same value, but the actual rank is the one used for scheduling and may change dynamically in case of priority inheritance.
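A minimal sketch of this encoding (the helper name is illustrative, and the saturation case for class 3 is described in the next paragraph):

    /* Pack the scheduling class into the top four bits and the class-specific
     * constraint (jitter, timeline, latency or priority, already reduced to a
     * 24-bit value) into the low 24 bits. Lower rank value = higher priority. */
    static inline unsigned int make_rank(unsigned int sched_class,
                                         unsigned int constraint)
    {
        if (constraint > 0xFFFFFF)
            constraint = 0xFFFFFF;        /* saturate to the lowest rank in the class */
        return (sched_class << 28) | constraint;
    }

For example, the Device task used as an example in the next section, with a 10 ms (10,000 us) jitter tolerance, gets make_rank(1, 10000) = 0x10002710, and an IO task whose latency exceeds the largest representable value saturates to 0x30FFFFFF.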

When mapping parameters whose data types are longer than 32 bits, such as a deadline with an 8-byte RTIME type, fewer bits are available in our base rank and rank attributes; as a result, we employ a special rank encoding. The design of this encoding differs between scheduling classes because of their different characteristics. Our generic design concept is to reduce the precision while still keeping the maximum number of distinct values. For instance, we define the latency constraint for a class 3 task internally as an 8-byte RTIME value (in nanoseconds, which is the RTAI timing default; it could instead be expressed in internal clock ticks or microseconds with less precision). The latency parameter is converted to a precision of microseconds by dividing by 1000 and is stored in the least significant 3 bytes of the rank parameter. Therefore, we can distinguish IO task latency constraints from 1 us to 16,777,215 us (0xFFFFFF) with a precision of 1 us. An IO task with a latency value greater than the maximum distinguishable value (0xFFFFFF) is assigned the lowest rank (priority) value in class 3, which is 0x30FFFFFF.

3.2.3 Generic Scheduling Method

All real time tasks are either periodic or aperiodic. Periodic tasks have an additional period argument specified; they are required to run at the specified period. Examples of periodic tasks are Device class tasks, IO bandwidth tasks, and periodic but unbounded class 4 tasks. An aperiodic task, on the other hand, can either be bounded, and thus have a deadline argument, or be an unbounded aperiodic task. An example of the first case is a class 2 task and an example of the second case is an aperiodic class 4 task. Task timing requirements are specified in several attributes of the real time task descriptor, as further discussed in section 3.2.6. We present our generic scheduling mechanism in detail below. All time-based tasks, with specific resumption times, are either in the delayed or the ready state. A delayed task is put on the delayed task queue, sorted by its resumption time. A delayed task is scheduled to run (i.e., becomes ready) whenever its

resumption time is earlier than the current system time. When a task finishes its execution, if it is periodic, it is put back into the delayed task queue with its next resumption time set to its previous resumption time plus its period. If it is an aperiodic task, it may simply terminate, or it may specify a new start time and deadline and thus put itself back on the delayed task queue. For a class 1 task, in the current design, the jitter value is combined with its resumption time to decide whether the task can be put into the ready state; as stated earlier, we convert the jitter value into internal clock ticks, and a class 1 task can move into the ready state jitter ticks earlier than its resumption time. The scheduling arguments associated with timing constraints in the task context are listed below:

period (in internal clock ticks),
deadline (the deadline reference mentioned earlier),
jitter (in internal clock ticks),
resume_time (an absolute system time reference),
yield_time (recording the expected completion time for an IO task, an absolute system time),
latency (for IO tasks, in clock ticks),
iob_quantum (for IO tasks, in clock ticks), and
iob_remaining (for IO tasks, in clock ticks).

All ready tasks are maintained in a ready queue in rank order. The task with the highest rank is the first task on the ready queue and is chosen to run next. A running task can be preempted immediately when another task with a higher rank becomes ready. As discussed earlier, two rank parameters are kept for each task in the task context descriptor:

rank - the actual rank used for scheduling a ready task, and
base rank - the assigned rank converted from the other constraints.
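Collected in one place, these attributes could look like the simplified sketch below; the full RT_TASK definition is reproduced in Appendix 8.2, and the struct name and field types here are illustrative only:

    typedef long long RTIME;        /* nanoseconds, following RTAI's timing convention */

    struct task_timing_sketch {
        int          policy;        /* scheduling class 0..4                            */
        unsigned int base_rank;     /* rank assigned at creation (class + constraint)   */
        unsigned int rank;          /* actual rank used for scheduling; may change
                                       under priority inheritance                        */
        RTIME        period;        /* internal clock ticks, for periodic classes       */
        RTIME        deadline;      /* deadline reference for class 2 tasks             */
        RTIME        jitter;        /* class 1 jitter tolerance, internal clock ticks   */
        RTIME        resume_time;   /* absolute next start time                         */
        RTIME        yield_time;    /* absolute expected completion time (IO tasks)     */
        RTIME        latency;       /* IO tasks, derived from the segment size          */
        RTIME        iob_quantum;   /* CPU reservation per period (IO tasks)            */
        RTIME        iob_remaining; /* unconsumed reservation in the current period     */
    };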

Here is an example. A Device task is created with a period of 1000 ms, a 1% jitter constraint, and a start time. These constraints are captured in the task context descriptor as the period, jitter, and resume time, in clock ticks, and the task is then put on the delayed queue. The delayed task queue is periodically examined by the system timer interrupt handler (the timer and timer support are discussed in 3.2.7). This examination is also invoked whenever a running task yields, completes its execution (resetting a new resumption time and deadline), or becomes blocked (whether to check in this last case is a scheduler compile-time option). The resume_time, together with the jitter constraint, is then compared with the current system time and the task is put into the ready queue once its constraints are satisfied. The jitter constraint specifies that this task can be scheduled to run 10 ms earlier or later than its actual resume_time. As described in earlier sections, tasks in the ready state are ordered by their rank values. The base and the actual rank parameters for this task would have the value 0x10002710 (the most significant byte, 0x10, represents its scheduling class; the least significant 3 bytes hold the converted jitter value in microseconds: 1% * 1000 ms = 10 ms = 10,000 us = 0x002710). The task is chosen to run based on its rank. After the completion of its current cycle, the task's next resume_time is increased by its period (1000 ms) and the task is put back into the delayed queue. Since task constraints such as the scheduling class, timeline, latency, and jitter have all been captured in the rank attribute, simple priority (rank) based preemptive scheduling can satisfy our scheduling requirements. Each task has two rank attributes, the base rank and the actual rank. The actual rank equals the base rank unless priority inheritance takes place; in that case, the priority inherited task is restored to its base rank once the resource is released. Although constraints such as jitter, deadline, and latency have been carefully considered in the algorithm, by its nature our scheduler cannot provide a full predictability guarantee for all tasks, especially under overload. Therefore, predictability and determinacy can only be assured for the most urgent tasks with the highest rank in the current

design. Approaches such as task admission control and run time task monitoring using a system class watchdog task can be used for this purpose (an Alcatel sponsored watchdog project for RTAI could be enhanced for this [3]), but this is currently outside the scope of this work.

3.2.4 IO Bandwidth Task Scheduling

Besides the generic scheduling mechanism described above, IO bandwidth tasks need special handling. As discussed in 2.2.2, the creation of an IO bandwidth task requires two essential parameters, the bandwidth and the segment size; an earliest start time is also specified as a task argument. The ratio of bandwidth to segment size (per second) is the task's running frequency, and its inverse is stored in the period argument of the task descriptor. The bandwidth requirement is converted internally into CPU IO processing time, hence CPU utilization, in CPU clock ticks per period; we capture and store this information in the iob_quantum field. An IO task context descriptor contains the following relevant fields:

period (the calculated IO task period, in clock ticks),
yield_time (the ideal expected completion time for the IO task's current run, an absolute system time updated on each run),
latency (for IO tasks, in clock ticks, effectively the same as the IO task period),
iob_quantum (for IO tasks, in clock ticks),
iob_remaining (for IO tasks, in clock ticks),
resume_time (the IO task resumption time, updated each period),
rank (the actual rank, converted from the IO latency or period), and
base rank.

An IO bandwidth task is modeled as a periodic task with a bounded CPU processing time enforced for each period. Due to its periodic nature, the scheduling of an IO bandwidth task first obeys the generic time-based scheduling policy according to its timing constraint, i.e., its resume time. Once it is chosen to run based on its rank, the IO bandwidth task then follows its own IO scheduling logic.
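A small sketch of this conversion, assuming nanosecond time units and a calibrated maximum IO rate (the real limit and admission control constant are defined in Appendix 8.7; the names and the constant's value here are illustrative):

    #define NSEC_PER_SEC      1000000000LL
    #define MAX_IO_RATE_KBPS  102400LL     /* assumed platform IO copy rate, ~100 MB/s */

    typedef long long RTIME;

    /* Derive the period (one segment per period) and the CPU reservation per
     * period (iob_quantum) from the task's bandwidth (KB/s) and segment (KB). */
    static void iob_derive(long bandwidth_kbps, long segment_kb,
                           RTIME *period, RTIME *iob_quantum)
    {
        *period      = segment_kb * NSEC_PER_SEC / bandwidth_kbps;
        *iob_quantum = segment_kb * NSEC_PER_SEC / MAX_IO_RATE_KBPS;
    }

With the 8 KB/s, 2 KB segment example from section 2.2.2 and a 100 KB/s limit, this yields a 250 ms period and a 20 ms quantum, i.e., the 8% utilization noted there.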

35 27 Besides the period and the iob_quantum attributes, an additional attribute called iob_remaining is used to track required but unconsumed CPU processing time for each period. Initially, at the beginning of each period, the iob_remaining is charged with the value of iob_quantum. The iob_remaining value will decrease and be updated accordingly based on the actual CPU used for each run. This update takes place when an IO task is being context-switched. Another field in the task descriptor, called yield_time, is used to record when an IO bandwidth task should yield its execution and be forced to delay until next period. When an IO bandwidth task is chosen to run, its yield_time (an absolute time reference) is initialized into the current system time plus the iob_remaining value. Therefore, the yield_time specifies the moment when the IO bandwidth task should voluntarily yield the CPU to other tasks. However, an IO bandwidth task may be preempted by a task with higher rank before it voluntarily yields. For example, a class 1 Device task in the delayed state with resumption time earlier than this IO task s yield_time. In this case, the scheduler timer interrupt would be setup based on the Device task s resumption time, and not on the IO task s expected yield_time; when the interrupt takes place, the scheduler would force the IO task to yield; its iob_remaining field is then decreased by the CPU clocks it has been running until interruption time (with some timer resolution calibration). If the iob_remaining is greater than 0, it implies it has not used up all its allocated CPU time (i.e., its bandwidth) for the current period and thus it would remain on the ready queue with updated iob_remaining for the next run. When it resumes its execution later, it will again set up its new yield_time based on new current system time and the updated iob_remaining attribute value. This is repeated until its iob_remaining attributes becomes 0 or negative. If the iob_remaining become 0 or negative, it implies the IO bandwidth task has used up its allocated CPU time for the current period, and thus it should be delayed till next period. In this case, the iob_remaining attribute is recharged with the iob_quantum attribute value for next period, and the resume_time of this IO bandwidth task is also increased with its period. And if the new resume_time is later than the current system time, the IO bandwidth task would move into delayed state; otherwise, this means it

36 28 has missed its next period due to possible preemption, and it would then remain in the ready state. We will discuss the detailed scheduling control flow further in the following sections and in section Figure 3-4 depicts the control flow of task yielding for a possible interrupted IO bandwidth task the iob_yield_check (). This routine is called inside the scheduler function or the timer interrupt handler; its main function is to decide whenever a currently running IO bandwidth task is interrupted, and whether this task should remain in the ready state or move into delay state. As discussed earlier, an IO bandwidth task, that has used up its allocated CPU bandwidth and has a new resumption time greater than current system time, would move into the delayed state; otherwise, it will remain in the ready state. As also noted in Figure 3-5, the new value of the iob_remaining attribute is calculated from the yield_time and the interruption time. The resume_time is compared with the rt_time_h the interruption time plus calibrated timer and scheduling latency value. (Timer support is further discussed in 3.2.7) Figure 3-5 illustrates the control flow of setting the voluntary yield_time for IO bandwidth task, - iob_set_yield (). The scheduling function (rt_schedule ()) or the timer interrupt handler (rt_timer_handler ()) calls this function whenever a new IO bandwidth task is chosen to run next. When the IO task is chosen to run, this function is called to set up an initial interruption time (the timer interrupt would not be setup at this point). The initial interruption time (yield_time) is set into the current system time plus the value of iob_remaining attribute this yield_time is the ideal interruption time for the IO bandwidth task to complete its allocated bandwidth for the current period. The scheduler would then continue checking tasks in the delay queue, to see if there is any task with an earlier resumption time than the above yield_time and with a higher or same

37 29 scheduling class. If there is no such task, the interruption is now set up based on above yield_time; otherwise, the interruption is set up based on that task with higher rank and earlier resumption time. The scheduling routine, timer support and IPC are further discussed in late sections. Figure 3-4 Call Flow of IO Bandwidth Task Yield and Yield Checking Policy == IOB_TASK io_remaining = yield_time tick_time io_remaining = io_quantum N N io_remaining <=0 Y resume_time += period N resume_time >= rt_time_h state == ready N state = delayed rem_ready_current() enq_delayed_current() Figure 3-5 Call Flow of Setting Yield time for IO Bandwidth Task policy == IOB_TASK yield_time = rt_time_h + io_remaining Set possible interrupt time In order to calculate the iob_quantum for each running period, we need obtain the platform dependent maximum IO bandwidth processing. Various methods are available for this purpose and our approach is simple. By performing 100, 000 memory copy

operations and obtaining the time spent on these operations by reading the TSC, the x86 CPU timestamp counter, we can calculate the average memory copy cost and obtain an approximate maximum CPU memory copy speed. A measurement conducted on a Pentium II 350 development machine with 256 MB of memory is presented in Table 3-1.

Table 3-1 Maximum IO Processing Power Estimation
Case | Data Length (Byte) | Average Cost Time (ns) | Maximum IO Processing Speed (MB/s)

Therefore, for example, with an assumed maximum CPU IO bandwidth capability of 1024 MB/s, we can calculate the CPU processing time required for any IO bandwidth task.

3.2.5 Task State Transitions

Independent of which class a task belongs to, after creation a task is in the suspended state. A suspended task becomes ready immediately if it is resumed, or becomes delayed if it is assigned a start time at which to resume later. A running task moves into the delayed state if it is a periodic task and it yields to wait for its next period, or if it is an aperiodic task and resets its next start time to the future. Tasks in the delayed state are sorted by their absolute timelines, i.e., their resumption times. When a task's resumption time passes the current system time, it changes its state to ready and moves into the ready queue to wait for execution. Class 1 Device tasks can be woken up earlier than their resumption time if doing so still meets their jitter constraint. Initially, all other non-Device class tasks are initialized with a jitter value of zero.
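A minimal sketch of this wake-up test as it might run inside the timer handler; the structure and helper parameter are illustrative, and for simplicity only the head of the delayed queue, which is sorted by resumption time, is examined:

    typedef long long RTIME;

    struct task_sketch {
        RTIME resume_time;               /* absolute next start time                   */
        RTIME jitter;                    /* class 1 tolerance; 0 for all other classes */
        struct task_sketch *next;        /* delayed queue link, sorted by resume_time  */
    };

    /* Move delayed tasks whose jitter-adjusted resumption time has been reached
     * onto the rank-ordered ready queue (via the make_ready callback). */
    void wake_delayed_tasks(struct task_sketch **delayed_head, RTIME now,
                            void (*make_ready)(struct task_sketch *))
    {
        while (*delayed_head != NULL &&
               now + (*delayed_head)->jitter >= (*delayed_head)->resume_time) {
            struct task_sketch *t = *delayed_head;
            *delayed_head = t->next;     /* pop from the delayed queue           */
            make_ready(t);               /* insert into the ready queue by rank  */
        }
    }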

On a single-processor machine, there is always only one running task in the system. Based on our scheduling algorithm, the scheduler chooses a task in the ready state to be the current running task. Tasks may also be blocked due to resource contention. In our scheduler, every task is in one of the following states, and the state information is maintained in the task context descriptor. Figure 3-6 depicts all possible states and the simplified state transitions. Note that there is no notion of periods in this diagram. Tasks in the ready state are sorted by their rank; tasks in the delayed state are sorted by their absolute timelines, i.e., their resumption times. We use timelines for time-based scheduling and ranks for priority-based scheduling.

Suspended State

A task is initialized by calling rt_task_init(); in this case, the task is implicitly in the suspended state. A task can also be suspended explicitly by calling rt_task_suspend().

Ready State

A task in the suspended state can be resumed by calling rt_task_resume(), after which it becomes ready. All tasks in the ready state are runnable. They are sorted by their rank and linked on a doubly linked ready queue. The head of the ready queue is the task with the lowest rank (highest priority), and this lowest-rank task is selected to run by the scheduler and switched into the running state.

Figure 3-6 Simplified Task State Transition Table

Delayed State

Delayed tasks are tasks scheduled to run at a specified later time. All delayed tasks are in the delayed state and are sorted by resumption time on a doubly linked delayed queue. After being initialized, a task can switch into the delayed state by calling task functions such as rt_task_make_periodic(), rt_task_make_dev(), rt_task_make_timeline() or rt_task_make_iob(). A task in the delayed state is switched into the ready state by the scheduler when its resumption time is reached. A running task can switch back into the delayed state by calling rt_task_wait_period() for class 1 tasks or rt_task_reset_resume_deadline() for class 2 tasks, setting itself up to run at a later time. When a running task switches into the delayed state, its next resumption time is updated with a new timeline; for periodic tasks, the new resumption time is the previous resumption time plus the period. All tasks in the delayed state have an absolute resumption time later than the current system time plus a calibrated timer latency value. An IO (class 3) task in the running state, once its assigned CPU quantum (iob_quantum) is used up and its resumption time is later than the current system time, is moved back to the delayed state automatically. Its next resumption time is updated to the start time of the next period, and its allocated CPU quantum (iob_remaining) is recharged.
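As an illustration of these calls, a class 1 periodic device task might be set up as sketched below. The header name, the RT_TASK type name and the argument lists are simplified assumptions; the actual prototypes are listed in Appendix 8.6.

    /* Sketch of a class 1 periodic task using the state-changing calls above. */
    #include "rt_sched.h"                /* scheduler API header (name assumed) */

    #define SAMPLE_PERIOD 10000          /* period in internal clock ticks (illustrative) */

    static RT_TASK sampler;              /* the task starts out in the suspended state */

    static void sampler_fn(int arg)
    {
        while (1) {
            /* ... read the device and process one sample ... */
            rt_task_wait_period();       /* delayed until the next resumption time */
        }
    }

    static void start_sampler(void)
    {
        rt_task_init(&sampler, sampler_fn, 0, 4096 /* stack */, 4 /* base_rank */);

        /* First resumption one period from now, then released every SAMPLE_PERIOD
         * ticks: the task alternates between the delayed and ready states. */
        rt_task_make_periodic(&sampler, rt_get_time() + SAMPLE_PERIOD, SAMPLE_PERIOD);
    }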

Running State

The task with the lowest rank (highest priority) on the ready queue is chosen to run. There is no explicit running state; the scheduler always chooses the lowest-rank task for execution. How long a task runs depends on whether there are lower-rank delayed tasks, whether it has exhausted its CPU bandwidth in the case of an IO bandwidth task, whether it completes its current execution cycle in the case of a periodic task, and whether it competes for other resources such as IPC. A running task may yield voluntarily or be preempted involuntarily; in either case, it remains ready but not running. Furthermore, a running task gives up the CPU and becomes delayed if it resets a new resumption time, waits for its next period, or uses up all of its allocated processing time. When a running task completes its execution, it may terminate and become Done; the task and its allocated resources are then completely released. A periodic task, such as a class 1 device task, should call rt_task_wait_period() to switch into the delayed state. An IO task moves into the delayed state when it yields. Upon preemption, a periodic task remains in the ready state. After being chosen to run again, a periodic task could have missed its next period due to a long preemption. In this case, when rt_task_wait_period() is called, it checks the resumption time: if it is earlier than the current system time, the task remains in the ready state; otherwise, it moves into the delayed state.

Blocked State

A running task can become blocked when it attempts to acquire resources owned by other tasks. This can be caused by calling semaphore functions such as rt_sem_wait() or rt_sem_wait_timed(), either explicitly or implicitly. Tasks in the blocked state are linked together on the associated resource blocked queues. These blocked queues can be either

a simple first-in, first-out (FIFO) queue or a priority-based queue. A blocked task becomes runnable again when the resource becomes available; only the first task on the blocked queue becomes runnable. The IPC mechanism is not the focus of our work; however, we provide a brief discussion here and in section 3.3. When an IO bandwidth task makes a blocking IPC call, it updates its iob_remaining attribute (the new value equals its absolute yield time minus the current system time). As a result, the scheduler function is invoked, a new task is chosen to run, and a new timer interruption time is set accordingly. The IO bandwidth task resumes its execution when it has acquired the related resource (or when it times out, if the timed IPC interface is used). At that point, the previously discussed IO bandwidth scheduling algorithm is enforced again. Similar behavior is expected for other bounded tasks that call IPC functions. IPC can cause unpredictability issues, and this has not been fully addressed in our design. It is recommended that the existing RTAI timed IPC interfaces (with the modification for IOB tasks described above) be used if timing constraints must be met.

Real Time Task Descriptor

The task descriptor is the fundamental and most critical data structure of our scheduler; it contains context information, the task state, and task management, scheduling and running information. The original RTAI real time task data structure has been modified for our purpose. In previous sections, we have already discussed many scheduling and state related attributes; in this section, we present other important task data attributes related to our scheduler design. The modified and complete real time task data structure is described in Appendix 8.2.

Task scheduling information

Scheduling information needs to be stored in the task context. The following scheduling-related information is captured in the task data structure:

state, the task's current state;
class, the task's scheduling class, [0-4];
base_rank, the task's assigned base rank;
rank, the task's actual or inherited rank (initialized to base_rank);
period, the running period of the task in internal clock ticks;
resume_time, the task's next expected running time in clock ticks;
yield_time, the expected yield time, used mainly for IO bandwidth tasks;
jitter, the required jitter of a device task;
bandwidth, the required IO bandwidth of an IOB task;
segment, the required segment size of an IOB task;
latency, the IO task's latency converted from its segment size;
iob_quantum, the execution time calculated from the IOB bandwidth;
iob_remaining, the remaining execution time of an IOB task; and
deadline, the relative timeline in ns or in internal clock ticks.

Generic Task and Stack Information

The following task attributes are needed for all real time tasks:

task_id, a unique identifier for each task;
task_name, the task name;
signal_hdlr, a signal handler that can be registered for each task;
execution_time, the accumulated task execution time;
errno, the per-task error number;
heap, the real time heap pointer;
stack, the current stack pointer; and
stack_bottom, the stack bottom pointer.
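Put together, these attributes suggest a task descriptor along the following lines. This is only an illustrative condensation; the field types and ordering shown here are assumptions, and the authoritative definition is the one reproduced in Appendix 8.2.

    /* Illustrative condensation of the task descriptor fields listed above. */
    typedef long long RTIME;             /* time in internal clock ticks */

    struct rt_task_struct_sketch {
        long *stack;                 /* current stack pointer: kept as the FIRST member */
        int   state;                 /* suspended, ready, delayed, blocked, ... */
        int   class;                 /* scheduling class, 0-4 */
        int   base_rank, rank;       /* assigned and actual (possibly inherited) rank */
        RTIME period;                /* period in internal clock ticks */
        RTIME resume_time;           /* next expected running time */
        RTIME yield_time;            /* expected yield time (IOB tasks) */
        RTIME jitter;                /* jitter bound for device tasks */
        long  bandwidth;             /* required IO bandwidth (IOB tasks) */
        long  segment;               /* required segment size (IOB tasks) */
        RTIME latency;               /* latency converted from the segment size */
        RTIME iob_quantum;           /* per-period execution time from the bandwidth */
        RTIME iob_remaining;         /* execution time left in the current period */
        RTIME deadline;              /* relative timeline */

        int   task_id;
        char  task_name[16];
        void (*signal_hdlr)(void);   /* optional per-task signal handler */
        RTIME execution_time;        /* accumulated execution time */
        int   errno;                 /* per-task error number */
        void *heap;                  /* real time heap pointer */
        long *stack_bottom;          /* bottom of the statically allocated stack */

        /* The queue linkage (prev_task/next_task, ready_*, delayed_*) and the
         * IPC fields (blocked_on, message_queue, blocked_queue, owned_resource)
         * discussed below are omitted in this sketch. */
    };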

A stack is statically allocated from kernel memory and 4-byte aligned when a task is created; the stack pointer and the stack bottom pointer record it. At creation time, the address of the task execution routine and its argument are pushed onto the stack. We define the stack pointer as the first element in the task descriptor, so the address of the task descriptor is also the address where the saved stack pointer is stored; this makes context save, restore and switch easy, using nothing more than the pointer to the task descriptor. Figure 3-7 illustrates the content of the task stack after successful creation. In section 3.2.8, we discuss the context switch further using the task stack.

Figure 3-7 Initial Task Stack Content

    TSP       data (argument of the task routine)
    TSP-4     pointer to rt_thread()
    TSP-8     0 (return address)
    TSP-12    pointer to rt_startup()

After creation, the current stack pointer ends up at TSP-12, pointing at the rt_startup() entry.
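A sketch of how this initial stack can be constructed is shown below; rt_startup() is the startup trampoline and rt_thread() the task body referred to in Figure 3-7, and the exact initialization code in our scheduler may differ in detail.

    /* Sketch of the initial stack setup of Figure 3-7 (32-bit x86, stack grows
     * downward). The descriptor type is the sketch given earlier. */
    static void init_task_stack(struct rt_task_struct_sketch *t,
                                long *stack_mem, int stack_words,
                                void (*rt_thread)(int), int data,
                                void (*rt_startup)(void (*)(int), int))
    {
        t->stack_bottom = stack_mem;
        t->stack = stack_mem + stack_words;        /* start at the high end */

        *--(t->stack) = (long) data;               /* TSP:    argument of rt_thread()      */
        *--(t->stack) = (long) rt_thread;          /* TSP-4:  task body                    */
        *--(t->stack) = 0;                         /* TSP-8:  fake return address          */
        *--(t->stack) = (long) rt_startup;         /* TSP-12: popped by the first switch   */
    }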

Queuing and Task Management

All tasks are linked on various queues, and this queuing information is maintained in each task's context descriptor. A global task descriptor, rt_linux_task, is declared and initialized as both the starting point and the ending point of the generic task queue, the ready task queue and the delayed task queue. This task descriptor is also used to refer to the Linux kernel, the highest-rank (lowest-priority) real time task, and it contains the current Linux task context information.

prev_task and next_task: all real time tasks in the system are queued on this doubly linked list.
delayed_prev and delayed_next: the delayed task queue, ordered by the tasks' resume times.
ready_prev and ready_next: the ready task queue, ordered by rank as discussed previously.

The scheduler checks the delayed task queue in the timer interrupt handler; this check can also be performed whenever the scheduler is activated. Tasks whose resume time requirement is satisfied are moved from the delayed queue into the ready queue. Conversely, the running task is removed from the ready queue and inserted into the delayed queue according to its updated resume time.

IPC and IPC Queue Elements

Several IPC and related queue elements are maintained in the task descriptor (see below). Whenever a task is blocked, it is on one of these queues. Since our design goal is to keep consistency with the existing IPC interface, these information elements remain the same. Providing bounded-response IPC and priority inheritance is a crucial part of our scheduler's predictability in the case of inter-task communication; therefore, we present a semaphore-based IPC queuing diagram in Appendix 8.8. The IPC-related queues and attributes are:

blocked_on, the queue the task is blocked on, e.g., a message or semaphore queue;
message_queue, the blocked linked list for message sending and receiving;
blocked_queue, the blocked linked list combined with the blocked_on parameter; and
owned_resource, the task's owned resource counter.

Timer Support and Timing Functions

By default, the 8254 timer chip in Linux runs at 100 Hz, or at most 1 kHz (Linux 2.6 kernel), and can only provide millisecond precision. In order to support real time applications, a high-resolution timing mechanism is required. RTAI Linux uses the 8254 timer chip and the Pentium CPU TSC for this purpose.
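As background for the discussion below, the conversions the scheduler relies on are simple scalings between nanoseconds, CPU TSC ticks and 8254 counts; a sketch is given here with the frequencies treated as assumptions (the calibrated values are obtained when the scheduler starts up, and 64-bit overflow handling is omitted).

    /* Conversion helpers between nanoseconds, CPU TSC ticks and 8254 counts. */
    #define CPU_HZ       350000000ULL     /* e.g. a Pentium II 350 (assumed) */
    #define PIT_8254_HZ  1193180ULL       /* standard PC 8254 input clock, about 1.19 MHz */

    static inline long long ns2tsc(long long ns)
    {
        return ns * (long long)CPU_HZ / 1000000000LL;
    }

    static inline long long tsc2ns(long long tsc)
    {
        return tsc * 1000000000LL / (long long)CPU_HZ;
    }

    static inline unsigned int ns2count8254(long long ns)
    {
        return (unsigned int)(ns * (long long)PIT_8254_HZ / 1000000000LL);
    }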

Our new scheduler continues to use this mechanism and supports both periodic and one-shot timer modes. The Pentium CPU TSC increases by one on every CPU cycle, and its value can be read with the rdtsc assembly instruction; this gives us the means to track time. Timing-related attributes can therefore be converted seamlessly between nanoseconds and CPU clock ticks. At the application level, one unit is enough (clock ticks in our case). The 8254 chip, with an input clock of about 1.19 MHz, can be programmed to generate timer interrupts either in periodic mode or in one-shot mode. The 8254 timer therefore provides a timer interrupt source with microsecond-level resolution, which is enough for most real time applications.

Figure 3-8 shows three code examples: programming the 8254 into periodic mode, programming it into one-shot mode, and reprogramming it with a different counter value (in one-shot mode) for the next interrupt. An interrupt handler (the scheduler function) is also installed. This is exactly how our scheduler installs and activates the scheduler function when its module is loaded into the kernel. Only counter 0 of the 8254, the channel wired to the timer interrupt, is used. In periodic mode, the 8254 generates timer interrupts at a fixed interval and no additional reprogramming is required. In one-shot mode, the scheduler uses the Pentium TSC to keep track of time and converts the TSC value into an 8254 counter value; by programming the 8254 with a different count each time, it can generate an interrupt at the appropriate time. Because one-shot mode must reprogram the 8254 before every interrupt, periodic mode has less overhead. Our experimental results on a Pentium II 350 show that resetting the 8254 takes about 2100 ns and that the total calibrated 8254 timer interrupt latency is around 4600 ns.

Figure 3-8 Code example to program and setup 8254 Timer Chip

1. Periodic Mode

    outb(0x34, 0x43);                 /* counter 0, LSB/MSB, mode 2 (rate generator) */
    outb(tick & 0xFF, 0x40);          /* low byte of the count */
    outb(tick >> 8, 0x40);            /* high byte of the count */
    rt_free_global_irq(TIMER_8254_IRQ);
    rt_request_global_irq(TIMER_8254_IRQ, handler);   /* install the scheduler's timer handler */

2. One Shot Mode

    outb(0x30, 0x43);                 /* counter 0, LSB/MSB, mode 0 (interrupt on terminal count) */
    outb(tick & 0xFF, 0x40);
    outb(tick >> 8, 0x40);
    rt_free_global_irq(TIMER_8254_IRQ);
    rt_request_global_irq(TIMER_8254_IRQ, handler);

3. Resetting the next interrupt (one-shot mode)

    outb(tick & 0xFF, 0x40);          /* reload the counter with the next interval */
    outb(tick >> 8, 0x40);

Our scheduler also provides more accurate time functions to real time applications, such as rt_get_time() (in clock ticks) and rt_get_time_ns(). Appendix 8.6 provides a detailed function prototype list of our new scheduler.

Context Switching

Task context switching is a fundamental part of any multi-task scheduler. As described for the task data structure, the current stack pointer is the first data member of the structure, and Figure 3-7 depicts the initial stack content and the related stack pointer values. The address of the task function and its parameters are pushed onto the stack when task creation completes. The run-time task context is pushed onto the stack when a task is switched out and popped when the task is chosen to run again. Our scheduler calls rt_switch_to(), the real time context switch routine, whenever a context switch is performed. The detailed assembly code of this routine is given and explained in Appendix 8.9.
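The property that makes this convenient can be illustrated with a few lines of C. This is purely conceptual; the real rt_switch_to() is the assembly routine of Appendix 8.9.

    /* Because the saved stack pointer is the first member of the task
     * descriptor, a pointer to the descriptor is, at the same time, the
     * address of the slot holding that task's saved stack pointer. */
    struct rt_task_stub {
        long *stack;                  /* must stay at offset 0 */
        /* ... remaining descriptor fields ... */
    };

    static void save_sp(struct rt_task_stub *t, long *current_sp)
    {
        *(long **)t = current_sp;     /* same effect as t->stack = current_sp */
    }

    static long *load_sp(struct rt_task_stub *t)
    {
        return *(long **)t;           /* same effect as t->stack */
    }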

Scheduling Routine

The scheduling routine (rt_schedule() or rt_timer_handler()) is the heart of our scheduler: it enforces the scheduling policy and completes the context switch. All situations that trigger scheduling fall into two scenarios.

1. A task gives up execution because it completes, yields, delays itself, or blocks on an unavailable resource. The scheduling routine is invoked implicitly by the related system functions, such as rt_task_yield(), rt_task_wait_next_period() or rt_sem_wait(), and the task becomes ready, delayed or blocked accordingly. In this case, it is the task itself that initiates the scheduling routine.

2. A task with a higher rank becomes ready and preempts the current task. The timer interrupt handler performs the preemption. In this scenario, the interrupted task remains on the system ready queue. (Alternatively, in the case of an IO task that has used up its currently allocated CPU time and whose resumption time is later than the current system time, the task is moved into the delayed queue.)

These two situations lead to the design of two separate scheduling routines in our scheduler: the first is rt_schedule() and the second is rt_timer_handler(). When our new scheduler is mounted, that is, when the scheduler module is loaded into the RTHAL-patched Linux kernel, it requests the 8254 timer interrupt and registers the scheduler timer handler for it, as illustrated in Figure 3-8. Although we design two separate scheduling routines to address the two scenarios, the majority of their functionality is the same. In the first case, for instance, it is not necessary for the scheduler to wake up tasks on the delayed queue, because they are guarded by their start time constraint and thus by our timer interrupt handler; it is therefore mandatory for the timer interrupt handler to check the delayed task queue regularly. In addition, the current system tick time and the Linux time are updated in the timer handler. Figure 3-9 depicts the execution flow of our timer interrupt handler.

Figure 3-9 Real Time Timer Interrupt Handler Execution Flow

On a timer interrupt, the handler:
1. updates the real time tick time using the TSC;
2. updates the RT Linux time and pends the Linux 8254 timer IRQ, if the Linux time needs updating;
3. performs the yield check if the interrupted task is an IOB task;
4. checks the delayed task queue;
5. chooses the new task to run and, if it is an IOB task, sets its new yield time;
6. in one-shot mode, updates the next interrupt time with the shortest of the IOB yield time and the earliest delayed-task resumption time and reprograms the 8254 timer chip; in periodic mode, simply updates the periodic interrupt time;
7. updates the task execution time;
8. switches to the new real time task context if a new task was chosen, with Linux context housekeeping when needed; and
9. performs task signal handling after the task is switched back in.
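The same flow can be summarized as a code skeleton. All helper names below are illustrative stand-ins declared only so the sketch reads as C; they are not the actual internal interfaces of our scheduler.

    typedef long long RTIME;
    struct rt_task;                               /* opaque task descriptor */

    extern struct rt_task *rt_current;
    extern int             oneshot_timer_mode;

    extern RTIME update_rt_tick_time_from_tsc(void);
    extern int   linux_time_needs_update(RTIME now);
    extern void  update_rt_linux_time(RTIME now);
    extern void  pend_linux_timer_irq(void);
    extern int   task_is_iob(struct rt_task *t);
    extern void  iob_yield_check(struct rt_task *t, RTIME now);
    extern void  iob_set_yield(struct rt_task *t);
    extern void  wake_delayed_tasks(RTIME now);
    extern struct rt_task *choose_lowest_rank_ready_task(void);
    extern RTIME next_interrupt_time(struct rt_task *next, RTIME now);
    extern void  program_8254_oneshot(RTIME delay);
    extern void  advance_periodic_interrupt_time(void);
    extern void  update_execution_time(struct rt_task *t, RTIME now);
    extern void  rt_switch_to(struct rt_task *t);
    extern void  handle_task_signals(void);

    static void rt_timer_handler(void)
    {
        RTIME now = update_rt_tick_time_from_tsc();
        struct rt_task *next;

        if (linux_time_needs_update(now)) {
            update_rt_linux_time(now);
            pend_linux_timer_irq();               /* deliver Linux its 8254 tick later */
        }

        if (task_is_iob(rt_current))
            iob_yield_check(rt_current, now);     /* yield bookkeeping for the interrupted IOB task */

        wake_delayed_tasks(now);                  /* delayed queue -> ready queue */
        next = choose_lowest_rank_ready_task();

        if (task_is_iob(next))
            iob_set_yield(next);

        if (oneshot_timer_mode)
            /* shortest of the IOB yield time and the earliest delayed resumption */
            program_8254_oneshot(next_interrupt_time(next, now) - now);
        else
            advance_periodic_interrupt_time();

        update_execution_time(rt_current, now);

        if (next != rt_current)
            rt_switch_to(next);                   /* includes Linux context housekeeping */

        handle_task_signals();                    /* runs after the task is switched back in */
    }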

Scheduler External Interfaces

One primary design principle is to minimize changes to the existing external RTAI interfaces, especially the low-level support interfaces. Separating the scheduler functions from the other external facilities simplifies our scheduler design, saves considerable extra coding effort, and lets us keep using the existing supporting tools without any additional changes. The external interfaces are identified as follows.

Figure 3-10 Scheduler external interface

1. The scheduler application interface for application designers; it has been enhanced with our new scheduler.
2. Our new scheduler maintains the same interface with the Linux kernel. Linux continues to run as the lowest-priority real time task. The task switch between real time tasks and the Linux task, the interrupt emulation and the system time update remain the same as in RTAI.
3. Our new scheduler still relies on the RTAI RTHAL for interrupt management and hardware abstraction related functions.
4. IPC is an essential part of real time task communication. IPC can cause priority inversion and destroy the enforced predictability. RTAI provides priority inheritance and a bounded-response (timed) IPC interface, and our scheduler has been carefully designed to use these facilities. In Appendix 8.10, we provide some details regarding bounded-response IPC and priority inheritance.

Predictable Networking IO Transmission

Our primary goal is to use real time network IO in our kernel to test our IO bandwidth based scheduling in a networking environment, and to investigate a practical approach to achieving predictable network communication with our design. As demonstrated in later sections, basic functional testing of our IO bandwidth task scheduling does not necessarily require network support. However, predictable network IO communication needs the support of an RTOS and its scheduler; in addition, it needs a predictable networking infrastructure as a supporting platform. The latter requirement is not the focus of our work. The criteria for predictable network IO transmission are:

1. a bandwidth-based IO buffering configuration mechanism for network IO tasks, and
2. priority configuration, queuing and IO transmission.

We found that these mechanisms are essentially supported at the TDMA RTMAC layer of the RTNET protocol framework. However, TDMA requires a predictable master and client network environment, which is difficult to achieve in our project. Further study of RTNET indicates that a practical design enhancement and implementation with minimal RTNET changes is feasible without the RTNET TDMA RTMAC layer. We present our proposals in the sections that follow.

RTNET provides a real time socket interface and supports UDP/IP network communication for kernel-space real time tasks. The socket interface provides the rt_ioctl() function to tune socket parameters dynamically or at initialization time. Figure 3-11 depicts the architecture of two proposed solutions that differ only at the interface with the Ethernet hardware driver. In approach A, marked with dotted lines in the figure, the task directly transmits network packets to the Ethernet transmission driver.

With the support of our predictable IO bandwidth scheduler and a DMA-capable Ethernet driver, the IO bandwidth transmission task runs in the IO task context and this approach works well. In practice, approach A requires no actual changes to RTNET, and the related RTNET socket functions can be called directly by our class 3 IO bandwidth tasks. In approach B, the task puts network packets into a priority-based outgoing queue and returns to the application; a separate task in the protocol stack is responsible for the actual transmission. Approach B works without DMA support and for non-IO-bandwidth tasks as well, but at the expense of an extra protocol stack manager and priority queues. We discuss the generic mechanisms below.

Figure 3-11 Proposed Predictable Network IO Transmission Architecture

Two functions are required. First, the socket interface needs to support configuration of its IO buffer pool size and of a socket priority consistent with the IO task. Secondly, the network protocol stack needs to maintain a priority queue and provide a protocol stack manager that transmits outgoing packets based on priority.
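A sketch of approach A is shown below: a class 3 IOB task transmits a segment directly through the RTNET socket layer. Only rt_sendto() is taken from the interface described in this chapter; the header name, socket creation and teardown, and the address values are assumptions, and error handling is omitted. The transmission routine actually used in our tests is given in Appendix 8.5.

    #include <rtnet.h>          /* RTNET kernel socket API (header name assumed) */

    static void iob_send_segment(int fd, const void *segment, size_t len)
    {
        struct sockaddr_in dest;

        memset(&dest, 0, sizeof(dest));
        dest.sin_family      = AF_INET;
        dest.sin_port        = htons(5000);           /* illustrative UDP port */
        dest.sin_addr.s_addr = htonl(0xC0A80002UL);   /* illustrative peer, 192.168.0.2 */

        /* The call runs in the IOB task's own context, so the copy and driver
         * hand-off are charged against the task's reserved quantum (iob_quantum). */
        rt_sendto(fd, segment, len, 0, (struct sockaddr *)&dest, sizeof(dest));
    }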

Real Time Socket Interface

RTNET has designed RTDM, a real time driver model for RTAI, to support network programming, for example through a BSD-style socket interface. Several data structures are maintained for each opened socket context.

Figure 3-12 Real time socket interface (each socket fd is associated with an rtdm_dev_context and an rt_socket, together with the rtdm_device and rtdm_operation structures)

When a socket is created, a buffer pool with a default size of 16 is allocated for it. The buffer queue is maintained in the rt_socket data structure. Using the existing rt_ioctl() interface, socket parameters such as the socket buffer size can be configured dynamically, and the value can be based on the IO bandwidth requirement and the maximum available network IO buffer. Instead of maintaining and allocating the buffer pool itself at socket creation time, an alternative approach is to keep only the related count parameters, based on the bandwidth requirement, and to assign buffers at run time when they are actually used (for example, when rt_sendto() or rt_bind() is called) from a globally shared IO buffer pool. This would improve the efficiency of socket buffer usage. A socket priority parameter configurable via the rt_ioctl() interface is already available in rt_socket, although only the TDMA RTMAC layer distinguishes and uses it. A configurable priority parameter is only required in approach B, where it is used by the stack transmission manager; in approach A, priority is already enforced by the IO task context and scheduling. Due to the limited number of priority queues, the task's priority or rank attribute cannot be used directly; a mapping is required.
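For illustration, the per-socket tuning described above could look roughly as follows; the request codes and the argument convention of rt_ioctl() are hypothetical placeholders, as only the existence of the rt_ioctl() call itself is relied upon here.

    /* Illustrative IOB socket tuning; RTNET_SET_* request codes and the
     * argument convention are hypothetical placeholders. */
    #define RTNET_SET_TX_POOL_SIZE  1
    #define RTNET_SET_TX_PRIORITY   2

    extern int rt_ioctl(int fd, unsigned int request, void *arg);   /* prototype assumed */

    static void configure_iob_socket(int fd, long bytes_per_period, int mtu, int rank)
    {
        unsigned int pool_size = (unsigned int)(bytes_per_period / mtu) + 1;
        unsigned int prio      = (rank >= 0 && rank < 32) ? (unsigned int)rank : 31;

        rt_ioctl(fd, RTNET_SET_TX_POOL_SIZE, &pool_size);  /* size the pool from the reserved bandwidth */
        rt_ioctl(fd, RTNET_SET_TX_PRIORITY,  &prio);       /* only used by approach B's stack manager */
    }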

Priority Queuing and Transmission

As illustrated in Figure 3-13, we define 32 priority-based linked lists, each with a dedicated priority. This implies that only 32 socket priorities are distinguishable at the protocol level. Each linked list contains all outgoing socket data buffers (rtskb) of the same priority. The usage element in the rt_tx_prio_queue structure records whether there are outgoing packets waiting in a particular priority list. The rtskb_queue points to the head and the tail of each linked list. For reference, the definition of rtskb, the RTNET real time socket buffer, is given in Appendix 8.3.

Figure 3-13 Priority Outgoing Queue Data Structure and Relationship

    struct rt_tx_prio_queue {
        rtos_spinlock_t    lock;
        u32                usage;
        struct rtskb_queue queue[32];   /* one list per priority, 0..31 */
    };

    struct rtskb_queue {
        rtos_spinlock_t lock;
        struct rtskb    *first;
        struct rtskb    *last;
    };

Because there are only 32 priority queues for network transmission, the task priority or rank cannot be used directly and must be mapped onto this range. Furthermore, besides the application data, the protocol stack itself could have urgent network traffic of its own.
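A sketch of such a rank-to-priority mapping, and of enqueueing an outgoing buffer onto the structures of Figure 3-13, is given below. The mapping policy, the rtskb chain field, the interpretation of usage as a bitmask and the omitted locking are illustrative assumptions.

    #define RT_NET_PRIORITIES 32

    struct rtskb { struct rtskb *next; /* remaining fields: see Appendix 8.3 */ };

    /* Map a task rank (lower = more urgent) onto one of the 32 network
     * priorities, keeping the two most urgent levels for the stack's own traffic. */
    static unsigned int map_rank_to_prio(int rank)
    {
        unsigned int prio = 2U + (unsigned int)(rank < 0 ? 0 : rank);
        return prio < RT_NET_PRIORITIES ? prio : RT_NET_PRIORITIES - 1;
    }

    static void enqueue_tx(struct rt_tx_prio_queue *q, struct rtskb *skb, unsigned int prio)
    {
        struct rtskb_queue *list = &q->queue[prio];

        /* the real code would take q->lock (and/or list->lock) here */
        skb->next = NULL;
        if (list->last != NULL)
            list->last->next = skb;
        else
            list->first = skb;
        list->last = skb;
        q->usage |= 1U << prio;         /* mark this priority level as non-empty */
    }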
