Real-Time Database Systems: Concepts and Design

Saud A. Aldarmi
Department of Computer Science
The University of York
April 1998

Abstract

This qualifying dissertation reviews the state of the art of Real-Time Database Systems in a uniprocessor, centralized environment. Due to the heterogeneity of the issues, the large amount of information, and space limitations, we limit our presentation to the issues most important to the overall design, construction, and advancement of Real-Time Database Systems. Such topics are believed to include Transaction Scheduling, Admission Control, Memory Management, and Disk Scheduling. Furthermore, Transaction Scheduling consists of Concurrency Control Protocols, Conflict Resolution Protocols, and Deadlocks. Of these issues, the most emphasis is placed on Concurrency Control and Conflict Resolution protocols due to their strong influence on overall system performance. Other important issues that are not included in our presentation are Fault Tolerance and Failure Recovery, Predictability, and, most important of all, Minimizing Transaction Support, i.e., Relaxing Atomicity and Serializability. Various solutions to many of the included topics are listed in chronological order along with their advantages, disadvantages, and limitations. While we take the liberty of debating some solutions ourselves, we also report the debates of other researchers. The presentation concludes with the identification of five research areas, all of which are believed to be very important to the advancement of Real-Time Database Systems.

Contents

Introduction

1 Fundamentals in Real-Time Systems
   1.1. Introduction
   System Models and Timing
   Scheduling
   Priority-Based Scheduling
   Synchronization
   Priority Inheritance
   Priority Ceiling
   Overload

2 Overview of Real-Time Database Systems
   2.1. Introduction
   The Concept of Transactions and Serializability
   Time-Critical Systems vs. Database Requirements
   Real-Time Database System Model
   Scheduling Real-Time Database Transactions
   Concurrency Control
   Conflict Resolution
   Deadlocks
   Admission Control
   Memory Management
   Disk Scheduling

3 Concurrency Control
   3.1. Introduction
   Locking Concurrency Control
   Synchronizing RTDB Transactions in Locking-based Protocols
   Optimistic Concurrency Control
   Speculative Concurrency Control
   Multiversion Concurrency Control
   Dynamic Adjustment of Serialization Order

4 Open Problems and Future Plan

Introduction

In general, data in a real-time system is managed on an individual basis by every task within the system. However, with the advancement of technology, many applications require large amounts of information to be handled and managed in a timely manner. Thus, a substantial number of real-time applications are becoming more data-intensive. Such larger amounts of information have produced an interdependency relationship among real-time applications. Therefore, in various application domains, data can no longer be treated and managed on an individual basis; rather, it is becoming a vital resource requiring an efficient data management mechanism. Meanwhile, database management systems are designed around exactly such a concept, that is, with the sole goal of managing data as a resource. Hence, the principles and techniques of transaction management in Database Management Systems need to be applied to real-time applications for efficient storage and manipulation of information. In an attempt to achieve the advantages of both systems, real-time and database, continuous efforts are directed towards the integration of the two technologies. Such an integration resulted in combined systems known as Real-Time Database Systems. Real-Time Database Systems emerged with the publication of a special issue of the ACM SIGMOD Record in March 1988. Today, many applications require such systems, e.g., information retrieval systems, airline reservation systems, stock market, banking, aircraft and spacecraft control, robotics, factory automation, and computer-integrated manufacturing, and the list is vast.

The engineering of data-intensive real-time applications can be improved through adaptation of the techniques and principles of database management systems (DBMS), which implies a corresponding reduction in the cost of construction and maintenance. Database systems encapsulate data as a resource, and therefore provide central control of data. Consequently, instead of managing data in an application-dependent manner, database systems offer a more structured management of data, which offers the following advantages:

Elimination of redundancy: there is only one set of data shared by different applications, as opposed to each application maintaining its own version of the data; thus, better utilization of storage.

Maintenance and integrity controls: erroneous values can be rejected before being permanently recorded in the database, thereby eliminating corruption of information. That is, individual applications do not have to expend the extra effort of managing and maintaining the integrity of such information; rather, it becomes the system's responsibility.

More importantly, database systems allow the separation of policy from mechanism, and data abstraction. An application specifies only its desired operations, disregarding the underlying implementation and structural characteristics of the required data items. It becomes the sole responsibility of the DBMS to specify the storage structure and maintain it.

This qualifying dissertation is intended to review Real-Time Database Systems as part of our research effort towards the advancement of such systems. Due to the heterogeneity of the area and space limitations, we had to limit our presentation to only a subset of the involved design and research issues. Our choice of inclusion/exclusion of various topics is based on our view of Real-Time Database Systems and the importance of each issue to the overall construction and advancement of such systems. More importantly, our choice is influenced by our doctoral research intentions.

Although we intend to research Real-Time Database Systems, it is crucial to understand the fundamentals of the underlying conventional non-database real-time systems. Therefore, chapter one of this review discusses such fundamentals. The chapter starts with the basic definitions of time-critical systems and their timing models. These definitions are followed by a presentation of general scheduling issues that include priority-based scheduling policies, i.e., Rate-Monotonic, Most-Critical-First, Earliest-Deadline-First, Value-Density, and combined Criticalness-Deadline. Synchronization issues, such as the priority-inversion problem, are discussed in more detail than the preceding fundamentals, and two protocols are presented, i.e., the Priority Inheritance and the Priority Ceiling protocols, both of which have been proposed in the literature to reduce the negative effects of priority inversion. The chapter concludes with a detailed discussion of overloads, outlining the impact of such operating conditions on the overall performance of the system and their theoretical limitations.

Chapter two covers the vast area of Real-Time Database Systems and identifies the subset of topics that we decided to include/exclude in our presentation. In the same vein as understanding the underlying fundamentals of conventional non-database real-time systems, we believe that it is also crucial to understand the fundamentals of conventional non-real-time database systems. Therefore, chapter two starts with an introduction to the concepts of transactions and serializability as the notion of correctness of conventional database systems. Next, we identify the differences between time-critical systems and conventional non-real-time database systems, which is followed by a model of a Real-Time Database System to outline the heterogeneity of the involved resources and to indicate the components for which priority inclusion might be of great concern. The rest of chapter two is divided into several sections, each of which discusses a separate component of the model. These sections include transaction scheduling, admission control, memory management, and disk scheduling.

Due to the impact and severity of concurrency control on the overall performance of Real-Time Database Systems, chapter three of this review is mainly dedicated to a detailed discussion of that issue. The chapter starts with a presentation of locking techniques as the most basic form of concurrency control, and the form most commonly implemented in commercial conventional database systems. As in conventional real-time systems, the priority-inversion problem is revisited under different assumptions, limitations, and solutions.
Locking is followed by a discussion of optimistic (restart-based) concurrency control, followed by a detailed comparison of the two techniques due to the substantial amount of controversy that can be found in the literature regarding their performance under various environments. Next, we present a very recent technique in the arena of concurrency control, known as the Speculative protocol. The technique emerged from a detailed comparison of locking-based

and restart-based protocols. It combines their advantages while avoiding their drawbacks and shortcomings. The rest of chapter three discusses two further concurrency control protocols, i.e., Multiversion concurrency control and Dynamic Adjustment of Serialization Order. These are very powerful schemes; both can be designed and tailored in a variety of ways to suit the constraints and objectives found in Real-Time Database Systems.

This review concludes with the identification of five research areas, all of which are believed to be very important to the advancement of Real-Time Database Systems. Having identified such open problems, we precisely state our future plans and research intentions.

1 Fundamentals of Real-Time Systems

1.1. Introduction

Real-time systems can be defined as those computing systems that are designed to operate in a timely manner, that is, performing certain actions within specific timing constraints, e.g., producing results while meeting predefined deadlines. Hence, the notion of correctness of a real-time system is contingent upon the logical correctness of the produced results as well as the timing at which such results are produced [PAN93, STA88b].

Typical real-time systems consist of a controlled system (the underlying application) and a controlling system (a computer monitoring the state of the environment, as well as supplying it with the appropriate driving signals). The controlling system interacts with its environment based on the data available about the environment. Therefore, it is important that the state of the environment, as perceived by the controlling system, be consistent with the actual state of the environment. Otherwise, the effects of the controlling system's activities may be inappropriate. The need to maintain consistency between the actual state of the environment and the state as reflected or perceived by the system leads to the notion of temporal consistency. Therefore, the specification of real-time systems includes timing constraints, which must be met in addition to the desired computations. Such timing constraints are usually defined in the form of deadlines associated with the various operations of the computing system. In addition, such timing constraints introduce a notion of periodicity, where certain tasks must be initiated at specific instants and must be executed within specific time intervals [AUD90, GRA92, PAN93, RAM93, and STA88b].

The need to handle explicit deadlines and periodicity associated with activities requires employing time-cognizant protocols [RAM93]. Such time-driven management policies should be applied on a system-wide basis, e.g., to the processor, memory, I/O, and communications resources (data and channels). Thus, for a set of tasks to meet their prescribed deadlines, precedence constraints must be established and satisfied, and resources must be available in time for each task. Abrupt delays at any stage of the process can disrupt the system's behavior and objectives, i.e., delay the production of results [PAN93, STA88a].

Scheduling decisions are guided by various metrics that depend on the application domain. The variety of metrics suggested for real-time systems indicates the different types of real-time systems that exist in the real world, as well as the types of requirements imposed on them. Different execution requirements of firm deadlines and soft deadlines lead to different system objectives, and hence, different performance metrics in comparative studies. In a real-time system that deals with firm deadlines, and hence discards tardy tasks (tasks that do not complete their execution by their prescribed deadlines), the objective is simply to minimize the number of tasks missing their deadlines. Thus, a single metric, Miss-Percentage (the percentage of tasks that do not complete by their deadlines), is

sufficient to characterize the system's performance. On the other hand, in a system dealing with soft deadlines, an additional metric, Average-Tardy-Time, is required to capture the degree of lateness (tardiness) of tardy tasks [STA93].

When tasks with different priorities access shared resources in an exclusive mode, a problem known as priority inversion can occur, and one must take some corrective measures. Such corrective measures are not only required to manage priority inversions, but also to cope with any overload that might occur due to unanticipated system activities and/or emergencies.

The rest of this chapter is outlined as follows: the chapter starts with a discussion of various system models and their corresponding timing behavior. Next, a brief discussion of scheduling is presented, starting with static policies and progressively moving towards the more complex dynamic policies. Due to the amount of literature dedicated to the priority-inversion problem and its severe impact on the overall performance of a system, the problem is presented and discussed in detail in a separate section of this chapter. Finally, we conclude the chapter with a detailed discussion of overload conditions, addressing their impact and theoretical limitations on the system's behavior. The purpose of the chapter is to present the fundamentals of real-time systems and to address the real-time issues that are most relevant to the construction of Real-Time Database (RTDB) systems. The chapter outlines various issues in the domain of real-time systems. However, such issues reoccur in RTDB systems, yet require different solutions due to the differences between the two domains.

System Models and Timing

Real-time applications can be modeled as a set of tasks, where each task can be classified according to its timing requirements as hard, firm, or soft. A hard real-time task is one whose timely and logically correct execution is considered to be critical for the operation of the entire system. The deadline associated with a hard real-time task is conventionally termed a hard deadline. Missing a hard deadline can result in catastrophic consequences; such systems are known as safety-critical. Thus, the design of a hard real-time system requires that a number of performance and reliability trade-off issues be carefully evaluated [AUD90, PAN93, and STA88b]. On the other hand, a soft real-time application is characterized by a soft deadline whose adherence is desirable, although not critical, for the functioning of the system. That is, missing a soft deadline does not cause a system failure or compromise the system's integrity. There may still be some (diminishing) value for completing an application after its deadline (gained values are defined in the discussion of value-functions later in this chapter), without any catastrophic consequences resulting from missing such a deadline [PAN93, STA88b]. Finally, a firm real-time task, like a soft real-time task, is characterized by a firm deadline whose adherence is desirable, although not critical, for the functioning of the system. However, unlike a soft real-time task,

a firm real-time task is not executed after its deadline, and no value is gained by the system from firm tasks that miss their deadlines. An interesting comparative study of soft vs. firm deadline behavior is presented in [LEE96]. The study showed that the miss-percentage of soft deadlines increases exponentially with the arrival rate of tasks. Meanwhile, with firm deadlines, where the population in the system is regulated automatically by discarding tardy tasks, the miss-percentage increases only polynomially as the arrival rate increases.

There are two general paradigms for the design of real-time operating systems, known as the Time-Triggered (TT) and Event-Triggered (ET) architectures, both of which are explained next. System activities in TT are initiated at predefined instants, and therefore TT architectures require the assessment of resource requirements and resource availability prior to the execution of each application task. Each task's needed resources and the length of time over which these resources will be used can be computed off-line in a resource requirement matrix. If these requirements cannot be anticipated, then worst-case resource and execution time estimates are used. Thus, TT is prone to wasting resources and lowering system utilization under pessimistic estimates (overestimates). However, the TT architecture can provide predictable behavior due to its pre-planned execution pattern [BUC89, PAN93]. System activities in ET are initiated in response to the occurrence of particular events that are possibly caused by the environment. In ET architectures, an excessive number of possible behaviors must be carefully analyzed in order to establish their predictability, because resource needs and availability may vary at run-time. Thus, the resource-need assessment in the ET architecture is usually probabilistic. Although ET is not as reliable as the TT architecture, it provides more flexibility and is ideal for more classes of applications, which do not lend themselves to predetermination of resource requirements [PAN93].

As a direct consequence of these architectures, along with the timing requirements we mentioned earlier, application tasks can be classified as periodic, aperiodic, or sporadic tasks [AUD90, PAN93].

1. Periodic tasks are those tasks that execute at regular intervals of time, i.e., every T time units (corresponding to TT architectures). These tasks typically tend to have hard deadlines, characterized by their period(s) and their required execution time per period [LIU73], which is usually given by a worst-case execution time.

2. Aperiodic tasks are those tasks whose activation cannot be anticipated a priori. That is, the activation of an aperiodic task is essentially a random event caused by a trigger (corresponding to ET architectures). Such behavior does not allow for worst-case analysis, and therefore aperiodic tasks tend to have soft deadlines.

3. Sporadic tasks are those tasks that are aperiodic in nature, but have hard deadlines. Such tasks can be used to handle emergency conditions and/or exceptional situations. Due to the nature of hard deadlines,

worst-case calculations may be facilitated by a schedulability constraint, which defines a minimum period between any two sporadic events from the same source.

There is a large body of literature on mixing periodic, aperiodic, and sporadic tasks within one system, and there are various techniques for scheduling such a mix, each with its own advantages, disadvantages, and limitations. However, we do not intend to discuss this issue in our current review, nor do we intend to investigate it in our future research. The interested reader may refer to [CHE90, HOM94, SPR88, and SPU95].

Scheduling

A scheduler in general is an algorithm or a policy for ordering the execution of the outstanding processes (tasks) on a processor according to some predefined criteria. Each task within a real-time system has a deadline, an arrival time, and possibly an estimated worst-case execution time. A task's execution time can be derived from the time each resource is required and the precedence constraints among sub-tasks. Execution time information can be given in terms of deterministic, worst-case, or probabilistic estimates. The responsibility of a real-time system scheduler is to determine an order of execution of the tasks that is feasible (if a task set can be scheduled to meet given pre-conditions, the set is termed feasible; that is, a scheduling algorithm is feasible if the requests of all tasks can be fulfilled before their respective deadlines). Typically, a scheduler is optimal if it can schedule all task sets that other schedulers can [AUD90, BUC89, and PAN93].

A scheduler may be preemptive or non-preemptive. A preemptive scheduler can arbitrarily suspend and resume the execution of a task without affecting its behavior. Preemption is used to control priority-driven scheduling. Typically, preemption occurs when a higher-priority task becomes runnable while a lower-priority task is executing. On the other hand, under a non-preemptive scheduler, a task must run without interruption until completion [PAN93, LIU73]. Simulation studies in [PAN93] showed that the use of preemption is more appropriate for scheduling real-time systems. Finally, a hybrid approach is a preemptive scheduler in which preemption is only allowed at certain points within the code of each task.

Real-time scheduling algorithms can be classified as either static or dynamic [NAT92, BUC89, and PAN93]. A static approach is also known as fixed-priority, where priorities are computed off-line, assigned to each task, and maintained unaltered during the entire lifetime of the task and the system. A static scheduler requires complete a priori knowledge of the real-time environment in which it is deployed. A table is generated that contains all the scheduling decisions used during run-time; therefore, it requires little run-time overhead. Aside from the many disadvantages of static scheduling [NAT92], it is rather inflexible, because the scheme is workable only if all the tasks are effectively periodic. Jensen et al. [JEN85] stated that fixed-priority scheduling could work only for relatively simple systems, and results in a real-time system that is extremely fragile in the presence of changing requirements. It was shown in [JEN85, LOC86] that fixed-priority schedulers perform inconsistently, particularly as the load increases. On the other hand, dynamic scheduling techniques assume unpredictable task-arrival times and attempt to schedule tasks dynamically upon arrival. That is, a dynamic scheduling algorithm dynamically computes and

assigns a priority value to each task, which can change at run-time. The decisions are based on both task characteristics and the current state of the system, thereby furnishing a more flexible scheduler that can deal with unpredictable events.

The computational complexity of a scheduling algorithm is of great concern in time-driven systems. Scheduling algorithms with exponential complexity are clearly undesirable for on-line scheduling schemes. Audsley and Burns [AUD90] stated that the computational complexity is concerned with computability and decidability. Computability is concerned with whether a given schedule can meet the timing constraints of a set of tasks, a problem that can be determined in polynomial time. The computability problem is also known as the schedulability problem [KUO91]. Decidability is concerned with whether a feasible schedule for a set of tasks exists, a problem that has been shown to be NP-Complete [GAR79]. The decidability problem is also known as the feasibility problem [KUO91]. Due to the intractability of the scheduling problem, dynamic on-line scheduling techniques are based primarily on heuristics, which entails higher run-time costs. Dynamic on-line scheduling policies can adapt to changes in the environment and can result in greater processor utilization. In addition, dynamic methods are most applicable for aperiodic applications, most applicable to error recovery, and most appropriate for applications that lack a worst-case upper limit on resource and execution requirements. Audsley and Burns [AUD90] argued that no event should be unpredictable and that schedulability should be guaranteed before execution in a safety-critical system, which implies the use of static scheduling methods for such systems.

Tasks whose progress is not dependent upon the progress of other task(s), excluding the competition for processor time between tasks, are termed independent. On the other hand, interdependent tasks can interact in many ways, including communication and precedence relationships [AUD90].

Priority-Based Scheduling

CPU scheduling is the most significant of all system scheduling in improving the performance of real-time systems [HUA89, STA91]. Conventional scheduling algorithms employed by most operating systems aim at balancing the number of CPU-bound and I/O-bound jobs in order to maximize system utilization and throughput, in addition to fairness as a major design issue. On the other hand, real-time tasks need to be scheduled according to their criticalness (a task's importance to the overall functionality of the system) and timeliness, even if this is at the expense of sacrificing some of the conventional design goals [STA88b]. Therefore, real-time scheduling algorithms establish a form of priority ordering among the various tasks within the system. Priorities are either assigned statically during system design time as a measure of the task's

importance to the system, or can be expressed as a function of time and dynamically evaluated by the scheduler [BUC89]. Such priorities are related to the attributes of the tasks. Since different applications have different attributes and characteristics, different scheduling algorithms also tend to differ in their priority assignment regimes. For example, priorities can be based on criticalness, deadlines, slack time, required/expected computation time, amount of finished/unfinished work, age, and/or a combination of such attributes [KAO95].

The objective of priority scheduling is to provide preferential treatment to tasks with higher priorities over those with lower priorities. Therefore, a priority-driven scheduler prioritizes the scheduling (ready) queue in order to service requests according to their priorities, either non-preemptively or preemptively, as we discussed earlier. Consequently, the system can ensure that the progress of higher-priority tasks is (ideally) never hindered by lower-priority tasks. In the rest of this section, we briefly discuss the various methods used in constructing different priority-driven schedulers for real-time systems.

Rate-Monotonic (RM)

The Rate-Monotonic (RM) policy [LIU73] is a preemptive policy in which priorities are assigned to tasks according to their request rates (periodicity), independent of their run-times. All tasks are statically allocated a priority according to their period: the shorter the task's period, the higher its assigned priority. The scheme is simple, because the priorities remain fixed, resulting in a straightforward implementation. The RM policy was shown to be an optimal fixed-priority scheduling policy for periodic tasks [LIU73].

Most-Critical-First (MCF)

The MCF policy [JEN85] is very simple. It divides the set of tasks and assigns a certain priority level to each task based on its functionality and importance to the system. The difficulty in the MCF priority assignment comes when new functions are added to the system. Such a modification in the functionality of the system might require one to adjust all other priority assignments to reflect the new additions and modifications. The policy can significantly degrade the performance of the system if the most critical tasks tend to require the most resources or tend to have longer execution times. However, the nice property of MCF is that it can produce reliable schedules in the sense that it strives to meet the deadlines of the most critical tasks, regardless of the system's load.

The alternative to assigning priorities statically is to derive them dynamically at run-time. Several dynamic on-line scheduling algorithms are presented next.

Earliest-Deadline-First (EDF)

The EDF policy is a preemptive priority-based scheduling scheme. It uses the deadlines of tasks as its primary heuristic. That is, the task with the current closest (earliest) deadline is assigned the highest priority in

the system and therefore is executed next. For a given set of tasks, the EDF policy is feasible if and only if:

C1/T1 + C2/T2 + ... + Cn/Tn ≤ 1,

where Ci and Ti represent the amount of computation and the period (submission rate) of task i. That is, the EDF policy can achieve full processor utilization up to the above bound. The policy is also optimal in the sense that if a set of tasks can be scheduled by any algorithm under the load limits given above, it can also be scheduled by the EDF policy [LIU73]. A major weakness of this policy is that under an overload condition, it assigns the highest priority to the task that has already missed and/or is about to miss its deadline. The scheme can be made time-cognizant by assigning the highest priority to the task with the earliest feasible deadline; a deadline is feasible if the remaining computation time ≤ (deadline - current time) [ABB88a]. In addition, a study conducted by Huang et al. [HUA89] regarding the sensitivity of scheduling algorithms to deadline distributions showed that EDF is the most sensitive of the studied scheduling policies to deadline settings. The performance of the EDF policy was shown to worsen as the deadlines become tighter (a deadline becomes tighter as [deadline - (current time + computation time)] becomes smaller).

Value-Functions

Jensen et al. [JEN85] introduced the concept of value-functions. A value-function is more than just a deadline in the sense that a deadline represents only one discrete instant in time, whereas a value-function models a task's requirements over a window of continuous time. The essential idea is that the completion of each task has a value to the system upon the task's successful completion, which is expressed as a function of time. Thus, the time taken to execute a task is mapped against the value that this task has to the system. Consequently, the scheduler is required to assign priorities as well as to define the system values of completing each task at any instant in time. The system's objective is to maximize the cumulative sum of the values collected from the complete and successful execution of a given set of tasks [JEN85, ABB88]. A value-function can include a discontinuity to represent a deadline. For example, depending on the type of discontinuity, a value-function can represent hard, firm, and soft deadlines, as shown in Figure (1.1). The value may directly correlate to the criticalness of a task, or it may be a time-varying function of a task's attributes. As can be seen from Figure (1.1), a hard deadline can be modeled so that a task imparts its full value if executed before the expiration of its deadline, whereas a tardy task imparts a negative value to the system. A firm-deadline task has a value up to its deadline, and this value drops to zero after the deadline, along with discarding the task. A soft deadline can be modeled by including a decay function after the deadline so that the task will still impart a positively diminishing value to the system even after its deadline has passed [JEN85, ABB88].

Value-Density (VD)

A value density function (VD) is defined as VD = value / computation time. This scheme tends to select the tasks that earn more value per time unit that they consume. Thus, the task with the greatest value density

receives the highest priority [JEN85, ABB88a]. The VD policy is a greedy technique in the sense that it always schedules the task that has the highest expected value within the shortest possible time unit. The simulation conducted in [JEN85] showed that the performance of the VD policy varies depending on the value-function chosen and on the system's load. Because it is a greedy algorithm, it picks up a value early rather than waiting to obtain a higher value; thus, it (unnecessarily) misses many opportunities to meet time constraints.

Figure (1.1): Value-functions representing hard, firm, and soft deadlines (value plotted against time, with the deadline marked on each curve).

Combined Criticalness-Deadline

Scheduling of real-time tasks is priority-driven, where priorities are based on some characteristics of the corresponding tasks, e.g., deadline and/or criticalness. Biyabani et al. [BIY88] argued that scheduling based on priorities derived from deadlines or criticalness (separately) is not adequate, because tasks with very short deadlines might not be very critical, and vice versa. An important point addressed in the literature is that criticalness and deadlines are two separate, independent characteristics that do not correlate in a one-to-one relationship [HUA89, KAO95]. Based on this observation, many attempts have been made to combine the two attributes into the scheduling decision. In the rest of this section, we present a few such attempts.

Biyabani et al. [BIY88] introduced two scheduling algorithms, ALG1 and ALG2, both of which integrate deadlines and criticalness in deriving the corresponding priorities. The two algorithms attempt to schedule an incoming task according to its deadline, ignoring its criticalness. If scheduling the task is feasible, then scheduling is successful. However, if the newly incoming task is not schedulable because too many tasks are already in the system, the algorithms attempt to schedule the incoming task at the expense of the less critical tasks already in the system. The two algorithms differ only in how they remove the less critical tasks from the system. ALG1 removes less critical tasks one at a time, in order from low to high criticalness. ALG2 also removes less critical tasks one at a time, but starting from the tasks with the least criticalness and furthest deadlines. Note that Biyabani et al. [BIY88] relocate the removed task(s) to another processor, a point that we do not address in this review. Both algorithms apply EDF under under-load conditions. However, in overload situations, ALG1 switches to MCF, whereas ALG2 switches to another policy that is an artificial combination of EDF and MCF. If scheduling were based on EDF and MCF together at the same time, then it would be a natural combination. However, since the scheduling decision under an overload is based on MCF first, then on EDF, the two policies are not actually integrated into one measure, and we therefore believe it is an artificial combination of the two policies.
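As a concrete illustration of the priority assignment policies discussed in this section, the following Python sketch computes the priority keys used by RM, EDF, and VD, together with the EDF utilization test given above (C1/T1 + ... + Cn/Tn ≤ 1). This is our own illustrative sketch, not code from any of the cited works; the task attributes and function names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        period: float       # T_i; also used here as the relative deadline
        computation: float  # C_i (worst-case execution time)
        value: float        # value gained on timely completion
        deadline: float     # absolute deadline of the current request

    def edf_feasible(tasks):
        """Utilization-based EDF test: sum(C_i / T_i) <= 1."""
        return sum(t.computation / t.period for t in tasks) <= 1.0

    # Each policy orders the ready queue by a key; the smallest key runs first.
    def rm_key(task):
        return task.period                        # shorter period -> higher priority

    def edf_key(task, now=0.0):
        return task.deadline - now                # earlier deadline -> higher priority

    def vd_key(task):
        return -(task.value / task.computation)   # higher value density -> higher priority

    tasks = [Task("t1", period=10, computation=2, value=5, deadline=10),
             Task("t2", period=20, computation=8, value=4, deadline=20)]
    print(edf_feasible(tasks))              # True: 2/10 + 8/20 = 0.6 <= 1
    print(min(tasks, key=rm_key).name)      # t1 is dispatched first under RM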

The simulation conducted by Biyabani et al. [BIY88] revealed that at low loads, deadline-based algorithms tend to perform better than criticalness-based algorithms. On the other hand, at high loads, the situation is reversed and the criticalness-based algorithms outperform the deadline-based algorithms. Furthermore, combining deadlines and criticalness together in one policy, e.g., ALG1 and ALG2, can outperform both deadline-based and criticalness-based algorithms.

Huang et al. [HUA89] proposed an on-line scheduling algorithm called Criticalness-Deadline First (CDF) in which each task is assigned a priority at the time of its arrival, based on its relative deadline (relative deadline = absolute deadline - arrival time) divided by its criticalness. Huang et al. [HUA89] showed that CPU scheduling based on the CDF policy significantly improves the overall performance of the system over techniques that consider deadlines or criticalness as separate parameters. Furthermore, the CDF policy was shown to achieve good performance for the more critical tasks at the cost of losing the less critical tasks. This trade-off reflects the nature of real-time processing, which is based on criticalness and timing constraints. Thus, to get the best performance, both criticalness and deadlines should be used for CPU scheduling. The study conducted in [STA91] concluded the following points. First, in a CPU-bound system, the CPU scheduling algorithm has a significant impact on the performance of a real-time system, and dominates all of the other types of protocols. Second, in order to obtain good CPU scheduling performance, both the criticalness and the deadline of a task should be considered in priority assignment.

Buttazzo et al. [BUT95] proposed another value-deadline combined technique known as weighted Earliest Deadline Value Density First (EDVDF). We defer the discussion of the EDVDF policy to the overload section, towards the end of this chapter.

Synchronization

Real-time tasks interact in order to satisfy system-wide requirements. Such interactions range from simple synchronization to mutual-exclusion protection of non-sharable resources. Calculating the execution time of a task requires knowledge of how long it will be blocked on any synchronization primitive it uses. Ideally, a higher-priority task, TH, should be able to preempt a lower-priority task, TL, immediately upon request. However, to maintain consistency of a shared resource, access must be serialized. If TH gains access first, then the proper priority order is maintained. On the other hand, if TL gains access first, followed by a request from TH to access the shared resource, TH is blocked until TL completes its access to the shared resource. The primary difficulty with blocking mechanisms is that a higher-priority task can be blocked by a lower-priority task, possibly for an unbounded number of times and for unbounded periods, a phenomenon known as the priority-inversion problem. Unfortunately, TH, mentioned above, is not only blocked by TL, but ends up waiting for any medium-priority task, TM, that wishes to execute during that period. Task TM will preempt TL

and hence further delay TH, whose progress has become dependent on that of TL. Such priority inversion could mature into a serious problem in real-time systems due to its role in lowering both the schedulability and the predictability of the system [SHA90]. There are various methods that can be integrated with the scheduler to reduce the negative effect(s) of the priority-inversion problem, two of which are presented in the following subsections.

The Priority Inheritance Protocol

Under the priority-inheritance protocol [SHA90], when a task blocks one or more higher-priority task(s), it inherits the highest priority level of all the tasks it blocks and executes its critical section at that elevated priority level. After exiting its critical section, it returns to its original priority level. Consequently, a lower-priority task TL directly blocks a higher-priority task TH (temporarily, only for the duration of the critical section). Such blocking is necessary to ensure mutual exclusion and the consistency of critical section execution. Furthermore, a medium-priority task TM will also be blocked by the elevated-priority task, to avoid having TM preempt TL and thereby indirectly preempt or delay TH (when an elevated-priority task blocks a medium-priority task, this is called push-through blocking [SHA90]). The priority inheritance criterion is transitive. That is, given T1, T2, and T3 as three tasks with T1 having the highest priority and T3 the lowest: if T2 blocks T1, and T3 blocks T2, then T3 inherits T1's priority through T2's inheritance. Effectively, T1 is blocked by both lower-priority tasks T2 and T3. In addition, when a task inherits a higher priority, it uses the elevated priority in competing for all its resource needs during the period over which its priority is elevated.

The Priority Ceiling Protocol

The priority ceiling protocol [SHA90] extends the priority inheritance protocol in order to prevent the formation of deadlocks as well as chained blocking (a chain of lower-priority tasks blocking a higher-priority task), both of which are experienced by the priority-inheritance protocol presented above. The priority ceiling protocol is as follows. Assign a priority level, i.e., a ceiling, to every critical section (CS); the ceiling is set equal to the highest priority level of any task that may use or access this CS. A task wishing to access a CS can simply do so if there are no suspended tasks within their CSs. If a task is suspended while executing within a CS due to preemption by a higher-priority task, then the priority ceiling comes into effect. If the higher-priority preempting task, TH, has a priority that is higher than all the ceilings of all currently preempted tasks, then it can access its CS. Otherwise, TH is suspended, the lower-priority task, TL, inherits the priority of TH, and TL resumes execution at the elevated priority level. When a task exits its CS, it returns to its original priority if it had inherited any higher priority during its execution.
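Before evaluating the ceiling protocol, it is worth pinning down the inheritance rule itself. The sketch below is purely illustrative (the task structure and field names are our own, hypothetical choices): a task's effective priority is the maximum of its base priority and the base priorities of every task it blocks, directly or through a chain of blocked tasks, which is the transitive inheritance of [SHA90].

    class Task:
        def __init__(self, name, base_priority):
            self.name = name
            self.base_priority = base_priority  # larger number = higher priority
            self.blocked_on = None              # task currently blocking this one, if any

        def effective_priority(self, all_tasks):
            """Base priority, raised to the priority of any task this task blocks,
            directly or transitively (the inheritance rule)."""
            prio = self.base_priority
            for t in all_tasks:
                blocker = t.blocked_on
                while blocker is not None:      # walk t's chain of blockers
                    if blocker is self:         # t is (transitively) blocked by self
                        prio = max(prio, t.base_priority)
                        break
                    blocker = blocker.blocked_on
            return prio

    # T1 (high) is blocked by T2 (medium), which is in turn blocked by T3 (low):
    t1, t2, t3 = Task("T1", 3), Task("T2", 2), Task("T3", 1)
    t1.blocked_on, t2.blocked_on = t2, t3
    print(t3.effective_priority([t1, t2, t3]))  # 3 -- T3 inherits T1's priority through T2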

The priority ceiling protocol achieves what it is designed for; that is, it prevents transitive (chained) blocking and prevents deadlocks as well. However, it has the following problems.

The performance of the algorithm is very sensitive to the size of the critical section(s) [GRA92]. Consider two tasks T1 and T2, where T1 has the higher priority and may wish to access many critical sections, whereas T2 accesses only a single critical section, which happens to be in common with T1. If T2 accesses the common critical section first, then T1 cannot access any of its other critical sections, even though the common critical section could be embedded within a conditional statement and may never actually be accessed by T1.

Suppose T1 and T3 have a critical section in common, and therefore the priority ceiling of the critical section is set to the priority of T1. Assume that T3 accesses the critical section, and T2 arrives and preempts T3, while T1 has not even arrived. T2 cannot access any of its critical sections due to the push-through blocking effect. The main reason behind push-through blocking is to prevent T2 from indirectly blocking T1. However, as a side effect, T2 will block even if T1 is not in the system yet. Imagine the same scenario among T1 through T100, with T100 having the lowest priority in the system. T100 can actually block T1 through T99, directly and via push-through blocking, which could cause T1 through T99 to miss their deadlines due to being blocked by the absolute lowest-priority task, in a priority-driven system! The interested reader may find a detailed presentation of the priority-inheritance and priority-ceiling protocols in [RAJ91, and RAJ95].

Overload

A system is under-loaded if there is a schedule that will meet the deadline of every task. We have presented several on-line scheduling algorithms for a uniprocessor environment. However, none of the presented algorithms actually makes any performance guarantees when the system is overloaded. Practical systems are prone to intermittent overloading caused by any of the following factors:

A cascading of exceptional situations, often corresponding to emergencies.

Effective scheduling decisions require complete knowledge of the execution time of a task, whereas execution times are generally stochastic in many systems and environments [BAR91a]. Using worst-case execution times to schedule a set of tasks can reduce processor utilization under normal operating conditions; on the other hand, scheduling using less-than-worst-case execution times introduces the possibility of an overload.

Worst-case calculations that were too optimistic, or hardware that fails to perform as anticipated.

A practical on-line scheduling algorithm should not only be optimal under normal circumstances, but also respond appropriately to overload conditions [BAR91a].
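To make the preceding criticism of the ceiling protocol concrete, the short sketch below (illustrative only, with hypothetical names) applies the usual statement of the ceiling admission test, i.e., a task may enter a critical section only if its priority is strictly higher than the ceilings of all critical sections currently held by other tasks, and reproduces the push-through blocking scenario described above: with T3 inside a section whose ceiling equals T1's priority, T2 is blocked even though T1 has not yet arrived.

    class CriticalSection:
        def __init__(self, name, ceiling):
            self.name = name
            self.ceiling = ceiling  # highest priority of any task that may use this CS
            self.holder = None      # name of the task currently inside this CS, if any

    def may_enter(task_name, task_priority, sections):
        """Priority-ceiling admission test: the requesting task may enter only if its
        priority is strictly higher than the ceiling of every CS held by another task."""
        return all(task_priority > cs.ceiling
                   for cs in sections
                   if cs.holder is not None and cs.holder != task_name)

    # T1 has priority 3, T2 has 2, T3 has 1; sections A and B are shared by T1 and T3.
    cs_a = CriticalSection("A", ceiling=3)
    cs_b = CriticalSection("B", ceiling=3)
    cs_a.holder = "T3"                        # T3 is executing inside A
    print(may_enter("T2", 2, [cs_a, cs_b]))   # False: push-through blocking of T2
    print(may_enter("T1", 3, [cs_a, cs_b]))   # False: T1 blocks and T3 inherits its priority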

An on-line scheduling algorithm is said to have a competitive factor r on a set of tasks iff it is guaranteed to achieve a cumulative value of at least r times the value achievable by a clairvoyant scheduling algorithm on the same set of tasks, where 0 ≤ r ≤ 1. A clairvoyant scheduling algorithm is one that knows the arrival time, value, execution time, and deadline of all future task requests [BAR91a, BAR91b]. An optimal on-line scheduling algorithm such as EDF has been shown to achieve r = 1 when the loading factor f ≤ 1. For a uniprocessor environment, Baruah et al. [BAR91a, and BAR91b] have proven that no on-line scheduling algorithm can offer r > 0.25 (or 1/4) when f ≥ 2 + ε, where ε is an arbitrarily small positive number. This implies, in contrast to EDF, whose competitive factor crumbles at f > 1, that there could exist an on-line scheduling algorithm whose performance can be optimal for 0 ≤ f ≤ 2. However, for 1 < f ≤ 2, Baruah et al. [BAR91a, BAR91b] showed that an on-line scheduling algorithm may not obtain more than 1/(1 + √k)² of the value obtainable by an off-line clairvoyant algorithm, where k is the ratio of the highest value density to the lowest value density of the tasks within the system. Such a bound rapidly drops below 0.25 as the value densities of the competing tasks within the system diverge.

In contrast to the theoretical bound presented above, Buttazzo et al. [BUT95] argued that such an upper bound has only theoretical validity, because it is achieved under a very restrictive (almost unrealistic) set of assumptions. For example, tasks have zero laxity, a task's execution time can be arbitrarily short (epsilon-short tasks, which Baruah et al. called baits), and each task's value is equal to its computation time. Buttazzo et al. [BUT95] conducted a comparative study of four scheduling policies: EDF, MCF, VD, and a weighted Earliest Deadline Value Density First (EDVDF), where the priority Pi assigned by EDVDF is a weighted combination of the task's value density and deadline, Pi = α·VDi − (1 − α)·di, with Pi, VDi, and di denoting the priority, value density, and deadline of task i, and α the weighting factor. The four policies were further extended to manage overload in two different manners: either simply reject the incoming task(s), or remove the task(s) with the least value or criticalness (depending on the priority assignment policy being used) until the overload is removed. Note that for the MCF policy, the value of a task correlates directly to its criticalness. The first method was called the guaranteed class, while the second was called the robust class. In addition, the latter was also equipped with a queue to hold all rejected (removed) tasks, which are processed only if active tasks finish their execution earlier than anticipated. The simulation conducted in [BUT95] showed the following results. In the presence of an overload and without any mechanism to deal with it, the VD policy was the most effective of the four simulated algorithms; in addition, the VD policy degrades gracefully and is less sensitive to the tasks' parameters. In the presence of an overload and with an overload management mechanism, whether the robust or the guaranteed strategy is employed, the EDF policy appears to be the most effective. The robust class was found to outperform the guaranteed class.
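The following sketch summarizes, under our own simplified reading of [BUT95], the EDVDF priority key in the form quoted above and the two overload-handling strategies: the guaranteed class rejects a newcomer that would make the task set infeasible, while the robust class admits it and then sheds the least-valuable tasks until feasibility is restored. The feasibility check used here is a simple processor-demand test for one-shot tasks released simultaneously; all structures, names, and parameter choices are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        computation: float  # remaining execution time
        deadline: float     # relative deadline from now
        value: float

    def edvdf_priority(t, alpha=0.5):
        """Weighted EDVDF key in the form given above (one reading of the formula):
        P_i = alpha * VD_i - (1 - alpha) * d_i, with larger key = higher priority."""
        return alpha * (t.value / t.computation) - (1 - alpha) * t.deadline

    def feasible(tasks):
        """Processor-demand test for tasks all released now: for every deadline d,
        the total work due by d must not exceed d."""
        return all(sum(t.computation for t in tasks if t.deadline <= d) <= d
                   for d in {t.deadline for t in tasks})

    def admit_guaranteed(active, new):
        """Guaranteed class: reject the newcomer if it cannot be added feasibly."""
        return active + [new] if feasible(active + [new]) else active

    def admit_robust(active, new):
        """Robust class: accept the newcomer, then shed the least-valuable tasks
        (they would go to a reject queue in [BUT95]) until the set is feasible again."""
        active = active + [new]
        while not feasible(active) and len(active) > 1:
            active.remove(min(active, key=lambda t: t.value))
        return active

    ready = [Task("a", 2, 5, value=10), Task("b", 3, 6, value=2)]
    print([t.name for t in admit_robust(ready, Task("c", 4, 8, value=7))])  # ['a', 'c']: "b" is shed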

The conclusion of Buttazzo et al. [BUT95] was that scheduling by deadline before an overload and by value during an overload works best in most practical conditions. However, with respect to the second point reported by Buttazzo et al. [BUT95] above, the reader should note that there is no correlation between the deadlines and the values of the removed tasks. That is, if tasks are removed based on their values in order to bring the load down to the system's feasible capacity, the removed tasks could actually be the set of tasks with the furthest deadlines. On the other hand, the remaining highest-value tasks could be the ones with tight deadlines and thus subject to abortion; in fact, the remaining highest-value tasks could have infeasible deadlines. The behavior of the EDF policy, as we have reported earlier, depends heavily on the distribution of the deadlines [HUA89]. The results reported in [BUT95] must therefore be based on a distribution opposite to our argument above. That is, Buttazzo et al. [BUT95] must have had a distribution in which the most valuable tasks happened to have the furthest (loosest) deadlines. Such a skewed distribution could result from many parameters in a simulator, for example, the random number generator and its seed(s).

We wish to conclude this section with the following comment. When a real-time system is overloaded, not all tasks can be completed by their deadlines. Therefore, many researchers state that the objective of a scheduling algorithm should be to schedule at least the most important tasks. However, we have not come across any research that actually indicates what constitutes the most important tasks. That is, many researchers seem to echo the above statement, yet, to the best of our knowledge, no one has taken the initiative to actually define what constitutes the most important tasks within a system. We believe that the lack of a precise definition of a task's importance, known as criticalness, is a gap in the knowledge of the real-time community. One needs to define precisely whether criticalness is a static value or a dynamic value that varies with the system's mode, operating environment and conditions, and the tasks' present and past behavior, in addition to the tasks' requirements and many other parameters. However, until a concise definition of the most important tasks is formulated, scheduling the most important tasks is an incomplete claim.

In the next chapters of this review, various design issues will be addressed along with the proposed solutions for each. We will list many of the important issues, but we limit our discussion and presentation to only a subset of them. In presenting various solutions, we intend to list them in chronological order, which will lead us through the advancement of RTDB systems over the last decade. Such advancement leads in turn to our future research intentions and identifies the general areas that we wish to investigate.


More information

Scheduling Sporadic Tasks with Shared Resources in Hard-Real-Time Systems

Scheduling Sporadic Tasks with Shared Resources in Hard-Real-Time Systems Scheduling Sporadic Tasks with Shared Resources in Hard-Real- Systems Kevin Jeffay * University of North Carolina at Chapel Hill Department of Computer Science Chapel Hill, NC 27599-3175 jeffay@cs.unc.edu

More information

CPU Scheduling. Core Definitions

CPU Scheduling. Core Definitions CPU Scheduling General rule keep the CPU busy; an idle CPU is a wasted CPU Major source of CPU idleness: I/O (or waiting for it) Many programs have a characteristic CPU I/O burst cycle alternating phases

More information

Scheduling Real-time Tasks: Algorithms and Complexity

Scheduling Real-time Tasks: Algorithms and Complexity Scheduling Real-time Tasks: Algorithms and Complexity Sanjoy Baruah The University of North Carolina at Chapel Hill Email: baruah@cs.unc.edu Joël Goossens Université Libre de Bruxelles Email: joel.goossens@ulb.ac.be

More information

An Efficient Non-Preemptive Real-Time Scheduling

An Efficient Non-Preemptive Real-Time Scheduling An Efficient Non-Preemptive Real-Time Scheduling Wenming Li, Krishna Kavi and Robert Akl Department of Computer Science and Engineering The University of North Texas Denton, Texas 7623, USA {wenming, kavi,

More information

The simple case: Cyclic execution

The simple case: Cyclic execution The simple case: Cyclic execution SCHEDULING PERIODIC TASKS Repeat a set of aperiodic tasks at a specific rate (cycle) 1 2 Periodic tasks Periodic tasks (the simplified case) Scheduled to run Arrival time

More information

Chapter 5 Process Scheduling

Chapter 5 Process Scheduling Chapter 5 Process Scheduling CPU Scheduling Objective: Basic Scheduling Concepts CPU Scheduling Algorithms Why Multiprogramming? Maximize CPU/Resources Utilization (Based on Some Criteria) CPU Scheduling

More information

CPU Scheduling. Multitasking operating systems come in two flavours: cooperative multitasking and preemptive multitasking.

CPU Scheduling. Multitasking operating systems come in two flavours: cooperative multitasking and preemptive multitasking. CPU Scheduling The scheduler is the component of the kernel that selects which process to run next. The scheduler (or process scheduler, as it is sometimes called) can be viewed as the code that divides

More information

Operating Systems. III. Scheduling. http://soc.eurecom.fr/os/

Operating Systems. III. Scheduling. http://soc.eurecom.fr/os/ Operating Systems Institut Mines-Telecom III. Scheduling Ludovic Apvrille ludovic.apvrille@telecom-paristech.fr Eurecom, office 470 http://soc.eurecom.fr/os/ Outline Basics of Scheduling Definitions Switching

More information

Today. Intro to real-time scheduling Cyclic executives. Scheduling tables Frames Frame size constraints. Non-independent tasks Pros and cons

Today. Intro to real-time scheduling Cyclic executives. Scheduling tables Frames Frame size constraints. Non-independent tasks Pros and cons Today Intro to real-time scheduling Cyclic executives Scheduling tables Frames Frame size constraints Generating schedules Non-independent tasks Pros and cons Real-Time Systems The correctness of a real-time

More information

An Essay on Real-Time Databases

An Essay on Real-Time Databases An Essay on Real-Time Databases Raul Barbosa Department of Computer Science and Engineering Chalmers University of Technology SE-412 96 Göteborg, Sweden rbarbosa@ce.chalmers.se 1 Introduction Data management

More information

ICS 143 - Principles of Operating Systems

ICS 143 - Principles of Operating Systems ICS 143 - Principles of Operating Systems Lecture 5 - CPU Scheduling Prof. Nalini Venkatasubramanian nalini@ics.uci.edu Note that some slides are adapted from course text slides 2008 Silberschatz. Some

More information

2. is the number of processes that are completed per time unit. A) CPU utilization B) Response time C) Turnaround time D) Throughput

2. is the number of processes that are completed per time unit. A) CPU utilization B) Response time C) Turnaround time D) Throughput Import Settings: Base Settings: Brownstone Default Highest Answer Letter: D Multiple Keywords in Same Paragraph: No Chapter: Chapter 5 Multiple Choice 1. Which of the following is true of cooperative scheduling?

More information

CPU Scheduling Outline

CPU Scheduling Outline CPU Scheduling Outline What is scheduling in the OS? What are common scheduling criteria? How to evaluate scheduling algorithms? What are common scheduling algorithms? How is thread scheduling different

More information

A Periodic Events - For the Non- Scheduling Server

A Periodic Events - For the Non- Scheduling Server 6. Aperiodic events 6.1 Concepts and definitions 6.2 Polling servers 6.3 Sporadic servers 6.4 Analyzing aperiodic tasks 6.5 Modelling aperiodic events GRUPO DE COMPUTADORES Y TIEMPO REAL REAL-TIME SYSTEMS

More information

Multi-core real-time scheduling

Multi-core real-time scheduling Multi-core real-time scheduling Credits: Anne-Marie Déplanche, Irccyn, Nantes (many slides come from her presentation at ETR, Brest, September 2011) 1 Multi-core real-time scheduling! Introduction: problem

More information

LAB 5: Scheduling Algorithms for Embedded Systems

LAB 5: Scheduling Algorithms for Embedded Systems LAB 5: Scheduling Algorithms for Embedded Systems Say you have a robot that is exploring an area. The computer controlling the robot has a number of tasks to do: getting sensor input, driving the wheels,

More information

Process Scheduling CS 241. February 24, 2012. Copyright University of Illinois CS 241 Staff

Process Scheduling CS 241. February 24, 2012. Copyright University of Illinois CS 241 Staff Process Scheduling CS 241 February 24, 2012 Copyright University of Illinois CS 241 Staff 1 Announcements Mid-semester feedback survey (linked off web page) MP4 due Friday (not Tuesday) Midterm Next Tuesday,

More information

Deciding which process to run. (Deciding which thread to run) Deciding how long the chosen process can run

Deciding which process to run. (Deciding which thread to run) Deciding how long the chosen process can run SFWR ENG 3BB4 Software Design 3 Concurrent System Design 2 SFWR ENG 3BB4 Software Design 3 Concurrent System Design 11.8 10 CPU Scheduling Chapter 11 CPU Scheduling Policies Deciding which process to run

More information

Performance Comparison of RTOS

Performance Comparison of RTOS Performance Comparison of RTOS Shahmil Merchant, Kalpen Dedhia Dept Of Computer Science. Columbia University Abstract: Embedded systems are becoming an integral part of commercial products today. Mobile

More information

CPU Scheduling. CPU Scheduling

CPU Scheduling. CPU Scheduling CPU Scheduling Electrical and Computer Engineering Stephen Kim (dskim@iupui.edu) ECE/IUPUI RTOS & APPS 1 CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling

More information

Linux Process Scheduling Policy

Linux Process Scheduling Policy Lecture Overview Introduction to Linux process scheduling Policy versus algorithm Linux overall process scheduling objectives Timesharing Dynamic priority Favor I/O-bound process Linux scheduling algorithm

More information

Lec. 7: Real-Time Scheduling

Lec. 7: Real-Time Scheduling Lec. 7: Real-Time Scheduling Part 1: Fixed Priority Assignment Vijay Raghunathan ECE568/CS590/ECE495/CS490 Spring 2011 Reading List: RM Scheduling 2 [Balarin98] F. Balarin, L. Lavagno, P. Murthy, and A.

More information

Real Time Network Server Monitoring using Smartphone with Dynamic Load Balancing

Real Time Network Server Monitoring using Smartphone with Dynamic Load Balancing www.ijcsi.org 227 Real Time Network Server Monitoring using Smartphone with Dynamic Load Balancing Dhuha Basheer Abdullah 1, Zeena Abdulgafar Thanoon 2, 1 Computer Science Department, Mosul University,

More information

A REPLICATION STRATEGY FOR DISTRIBUTED REAL-TIME OBJECT ORIENTED DATABASES PRAVEEN PEDDI A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE

A REPLICATION STRATEGY FOR DISTRIBUTED REAL-TIME OBJECT ORIENTED DATABASES PRAVEEN PEDDI A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE A REPLICATION STRATEGY FOR DISTRIBUTED REAL-TIME OBJECT ORIENTED DATABASES BY PRAVEEN PEDDI A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN COMPUTER

More information

!! #!! %! #! & ((() +, %,. /000 1 (( / 2 (( 3 45 (

!! #!! %! #! & ((() +, %,. /000 1 (( / 2 (( 3 45 ( !! #!! %! #! & ((() +, %,. /000 1 (( / 2 (( 3 45 ( 6 100 IEEE TRANSACTIONS ON COMPUTERS, VOL. 49, NO. 2, FEBRUARY 2000 Replica Determinism and Flexible Scheduling in Hard Real-Time Dependable Systems Stefan

More information

CPU Scheduling. Basic Concepts. Basic Concepts (2) Basic Concepts Scheduling Criteria Scheduling Algorithms Batch systems Interactive systems

CPU Scheduling. Basic Concepts. Basic Concepts (2) Basic Concepts Scheduling Criteria Scheduling Algorithms Batch systems Interactive systems Basic Concepts Scheduling Criteria Scheduling Algorithms Batch systems Interactive systems Based on original slides by Silberschatz, Galvin and Gagne 1 Basic Concepts CPU I/O Burst Cycle Process execution

More information

Partition Scheduling in APEX Runtime Environment for Embedded Avionics Software

Partition Scheduling in APEX Runtime Environment for Embedded Avionics Software Partition Scheduling in APEX Runtime Environment for Embedded Avionics Software Yang-Hang Lee CISE Department, University of Florida Gainesville, FL 32611 Phone: (352) 392-1536 Fax: (352) 392-1220 Email:

More information

Resource Reservation & Resource Servers. Problems to solve

Resource Reservation & Resource Servers. Problems to solve Resource Reservation & Resource Servers Problems to solve Hard-deadline tasks may be Periodic or Sporadic (with a known minimum arrival time) or Non periodic (how to deal with this?) Soft-deadline tasks

More information

Comp 204: Computer Systems and Their Implementation. Lecture 12: Scheduling Algorithms cont d

Comp 204: Computer Systems and Their Implementation. Lecture 12: Scheduling Algorithms cont d Comp 204: Computer Systems and Their Implementation Lecture 12: Scheduling Algorithms cont d 1 Today Scheduling continued Multilevel queues Examples Thread scheduling 2 Question A starvation-free job-scheduling

More information

Real-Time Component Software. slide credits: H. Kopetz, P. Puschner

Real-Time Component Software. slide credits: H. Kopetz, P. Puschner Real-Time Component Software slide credits: H. Kopetz, P. Puschner Overview OS services Task Structure Task Interaction Input/Output Error Detection 2 Operating System and Middleware Applica3on So5ware

More information

Udai Shankar 2 Deptt. of Computer Sc. & Engineering Madan Mohan Malaviya Engineering College, Gorakhpur, India

Udai Shankar 2 Deptt. of Computer Sc. & Engineering Madan Mohan Malaviya Engineering College, Gorakhpur, India A Protocol for Concurrency Control in Real-Time Replicated Databases System Ashish Srivastava 1 College, Gorakhpur. India Udai Shankar 2 College, Gorakhpur, India Sanjay Kumar Tiwari 3 College, Gorakhpur,

More information

Road Map. Scheduling. Types of Scheduling. Scheduling. CPU Scheduling. Job Scheduling. Dickinson College Computer Science 354 Spring 2010.

Road Map. Scheduling. Types of Scheduling. Scheduling. CPU Scheduling. Job Scheduling. Dickinson College Computer Science 354 Spring 2010. Road Map Scheduling Dickinson College Computer Science 354 Spring 2010 Past: What an OS is, why we have them, what they do. Base hardware and support for operating systems Process Management Threads Present:

More information

Real-Time Scheduling 1 / 39

Real-Time Scheduling 1 / 39 Real-Time Scheduling 1 / 39 Multiple Real-Time Processes A runs every 30 msec; each time it needs 10 msec of CPU time B runs 25 times/sec for 15 msec C runs 20 times/sec for 5 msec For our equation, A

More information

Chapter 10. Backup and Recovery

Chapter 10. Backup and Recovery Chapter 10. Backup and Recovery Table of Contents Objectives... 1 Relationship to Other Units... 2 Introduction... 2 Context... 2 A Typical Recovery Problem... 3 Transaction Loggoing... 4 System Log...

More information

Operating Systems, 6 th ed. Test Bank Chapter 7

Operating Systems, 6 th ed. Test Bank Chapter 7 True / False Questions: Chapter 7 Memory Management 1. T / F In a multiprogramming system, main memory is divided into multiple sections: one for the operating system (resident monitor, kernel) and one

More information

HARD REAL-TIME SCHEDULING: THE DEADLINE-MONOTONIC APPROACH 1. Department of Computer Science, University of York, York, YO1 5DD, England.

HARD REAL-TIME SCHEDULING: THE DEADLINE-MONOTONIC APPROACH 1. Department of Computer Science, University of York, York, YO1 5DD, England. HARD REAL-TIME SCHEDULING: THE DEADLINE-MONOTONIC APPROACH 1 N C Audsley A Burns M F Richardson A J Wellings Department of Computer Science, University of York, York, YO1 5DD, England ABSTRACT The scheduling

More information

Scheduling. Monday, November 22, 2004

Scheduling. Monday, November 22, 2004 Scheduling Page 1 Scheduling Monday, November 22, 2004 11:22 AM The scheduling problem (Chapter 9) Decide which processes are allowed to run when. Optimize throughput, response time, etc. Subject to constraints

More information

A Survey of Fitting Device-Driver Implementations into Real-Time Theoretical Schedulability Analysis

A Survey of Fitting Device-Driver Implementations into Real-Time Theoretical Schedulability Analysis A Survey of Fitting Device-Driver Implementations into Real-Time Theoretical Schedulability Analysis Mark Stanovich Florida State University, USA Contents 1 Introduction 2 2 Scheduling Theory 3 2.1 Workload

More information

Improved Handling of Soft Aperiodic Tasks in Offline Scheduled Real-Time Systems using Total Bandwidth Server

Improved Handling of Soft Aperiodic Tasks in Offline Scheduled Real-Time Systems using Total Bandwidth Server Improved Handling of Soft Aperiodic Tasks in Offline Scheduled Real-Time Systems using Total Bandwidth Server Gerhard Fohler, Tomas Lennvall Mälardalen University Västeras, Sweden gfr, tlv @mdh.se Giorgio

More information

Aperiodic Task Scheduling

Aperiodic Task Scheduling Aperiodic Task Scheduling Gerhard Fohler Mälardalen University, Sweden gerhard.fohler@mdh.se Real-Time Systems Gerhard Fohler 2005 Non Periodic Tasks So far periodic events and tasks what about others?

More information

Hardware Task Scheduling and Placement in Operating Systems for Dynamically Reconfigurable SoC

Hardware Task Scheduling and Placement in Operating Systems for Dynamically Reconfigurable SoC Hardware Task Scheduling and Placement in Operating Systems for Dynamically Reconfigurable SoC Yuan-Hsiu Chen and Pao-Ann Hsiung National Chung Cheng University, Chiayi, Taiwan 621, ROC. pahsiung@cs.ccu.edu.tw

More information

Sustainability in Real-time Scheduling

Sustainability in Real-time Scheduling Sustainability in Real-time Scheduling Alan Burns The University of York burns@cs.york.ac.uk Sanjoy Baruah The University of North Carolina baruah@cs.unc.edu A scheduling policy or a schedulability test

More information

Scheduling Aperiodic and Sporadic Jobs in Priority- Driven Systems

Scheduling Aperiodic and Sporadic Jobs in Priority- Driven Systems Scheduling Aperiodic and Sporadic Jobs in Priority- Driven Systems Ingo Sander ingo@kth.se Liu: Chapter 7 IL2212 Embedded Software 1 Outline l System Model and Assumptions l Scheduling Aperiodic Jobs l

More information

On-line Scheduling of Real-time Services for Cloud Computing

On-line Scheduling of Real-time Services for Cloud Computing On-line Scheduling of Real-time Services for Cloud Computing Shuo Liu Gang Quan Electrical and Computer Engineering Department Florida International University Miami, FL, 33174 {sliu5, gang.quan}@fiu.edu

More information

Performance of Algorithms for Scheduling Real-Time Systems with Overrun and Overload

Performance of Algorithms for Scheduling Real-Time Systems with Overrun and Overload Published in the Proceedings of the Eleventh Euromicro Conference on Real-Time Systems, 9-11 June 1999, held at University of York, England Performance of Algorithms for Scheduling Real-Time Systems with

More information

OPERATING SYSTEMS SCHEDULING

OPERATING SYSTEMS SCHEDULING OPERATING SYSTEMS SCHEDULING Jerry Breecher 5: CPU- 1 CPU What Is In This Chapter? This chapter is about how to get a process attached to a processor. It centers around efficient algorithms that perform

More information

Processor Scheduling. Queues Recall OS maintains various queues

Processor Scheduling. Queues Recall OS maintains various queues Processor Scheduling Chapters 9 and 10 of [OS4e], Chapter 6 of [OSC]: Queues Scheduling Criteria Cooperative versus Preemptive Scheduling Scheduling Algorithms Multi-level Queues Multiprocessor and Real-Time

More information

Aperiodic Task Scheduling for Real-Time Systems. Brinkley Sprunt

Aperiodic Task Scheduling for Real-Time Systems. Brinkley Sprunt Aperiodic Task Scheduling for Real-Time Systems Ph.D. Dissertation Brinkley Sprunt Department of Electrical and Computer Engineering Carnegie Mellon University August 199 Ph.D. Dissertation Submitted in

More information

W4118 Operating Systems. Instructor: Junfeng Yang

W4118 Operating Systems. Instructor: Junfeng Yang W4118 Operating Systems Instructor: Junfeng Yang Outline Introduction to scheduling Scheduling algorithms 1 Direction within course Until now: interrupts, processes, threads, synchronization Mostly mechanisms

More information

Overview of Presentation. (Greek to English dictionary) Different systems have different goals. What should CPU scheduling optimize?

Overview of Presentation. (Greek to English dictionary) Different systems have different goals. What should CPU scheduling optimize? Overview of Presentation (Greek to English dictionary) introduction to : elements, purpose, goals, metrics lambda request arrival rate (e.g. 200/second) non-preemptive first-come-first-served, shortest-job-next

More information

Real-Time Systems Prof. Dr. Rajib Mall Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur

Real-Time Systems Prof. Dr. Rajib Mall Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Real-Time Systems Prof. Dr. Rajib Mall Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Lecture No. # 26 Real - Time POSIX. (Contd.) Ok Good morning, so let us get

More information

Efficient overloading techniques for primary-backup scheduling in real-time systems

Efficient overloading techniques for primary-backup scheduling in real-time systems ARTICLE IN PRESS J. Parallel Distrib. Comput. 64 (24) 629 648 Efficient overloading techniques for primary-backup scheduling in real-time systems R. Al-Omari, a, Arun K. Somani, b and G. Manimaran b, *

More information

CRASH RECOVERY FOR REAL-TIME MAIN MEMORY DATABASE SYSTEMS

CRASH RECOVERY FOR REAL-TIME MAIN MEMORY DATABASE SYSTEMS CRASH RECOVERY FOR REAL-TIME MAIN MEMORY DATABASE SYSTEMS Jing Huang Le Gruenwald School of Computer Science The University of Oklahoma Norman, OK 73019 Email: gruenwal@mailhost.ecn.uoknor.edu Keywords:

More information

PART III. OPS-based wide area networks

PART III. OPS-based wide area networks PART III OPS-based wide area networks Chapter 7 Introduction to the OPS-based wide area network 7.1 State-of-the-art In this thesis, we consider the general switch architecture with full connectivity

More information

Overview Motivating Examples Interleaving Model Semantics of Correctness Testing, Debugging, and Verification

Overview Motivating Examples Interleaving Model Semantics of Correctness Testing, Debugging, and Verification Introduction Overview Motivating Examples Interleaving Model Semantics of Correctness Testing, Debugging, and Verification Advanced Topics in Software Engineering 1 Concurrent Programs Characterized by

More information

On-line scheduling algorithm for real-time multiprocessor systems with ACO

On-line scheduling algorithm for real-time multiprocessor systems with ACO International Journal of Intelligent Information Systems 2015; 4(2-1): 13-17 Published online January 28, 2015 (http://www.sciencepublishinggroup.com/j/ijiis) doi: 10.11648/j.ijiis.s.2015040201.13 ISSN:

More information

Effective Scheduling Algorithm and Scheduler Implementation for use with Time-Triggered Co-operative Architecture

Effective Scheduling Algorithm and Scheduler Implementation for use with Time-Triggered Co-operative Architecture http://dx.doi.org/10.5755/j01.eee.20.6.7282 ELEKTRONIKA IR ELEKTROTECHNIKA, ISSN 1392 1215, VOL. 20, NO. 6, 2014 Effective Scheduling Algorithm and Scheduler Implementation for use with Time-Triggered

More information

What is best for embedded development? Do most embedded projects still need an RTOS?

What is best for embedded development? Do most embedded projects still need an RTOS? RTOS versus GPOS: What is best for embedded development? Do most embedded projects still need an RTOS? It is a good question, given the speed of today s high-performance processors and the availability

More information

A Comparative Study of Scheduling Algorithms for Real Time Task

A Comparative Study of Scheduling Algorithms for Real Time Task , Vol. 1, No. 4, 2010 A Comparative Study of Scheduling Algorithms for Real Time Task M.Kaladevi, M.C.A.,M.Phil., 1 and Dr.S.Sathiyabama, M.Sc.,M.Phil.,Ph.D, 2 1 Assistant Professor, Department of M.C.A,

More information

Real-time scheduling algorithms, task visualization

Real-time scheduling algorithms, task visualization Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 2006 Real-time scheduling algorithms, task visualization Kevin Churnetski Follow this and additional works at:

More information

Chapter 19: Real-Time Systems. Overview of Real-Time Systems. Objectives. System Characteristics. Features of Real-Time Systems

Chapter 19: Real-Time Systems. Overview of Real-Time Systems. Objectives. System Characteristics. Features of Real-Time Systems Chapter 19: Real-Time Systems System Characteristics Features of Real-Time Systems Chapter 19: Real-Time Systems Implementing Real-Time Operating Systems Real-Time CPU Scheduling VxWorks 5.x 19.2 Silberschatz,

More information

Tasks Schedule Analysis in RTAI/Linux-GPL

Tasks Schedule Analysis in RTAI/Linux-GPL Tasks Schedule Analysis in RTAI/Linux-GPL Claudio Aciti and Nelson Acosta INTIA - Depto de Computación y Sistemas - Facultad de Ciencias Exactas Universidad Nacional del Centro de la Provincia de Buenos

More information

Operating Systems Concepts: Chapter 7: Scheduling Strategies

Operating Systems Concepts: Chapter 7: Scheduling Strategies Operating Systems Concepts: Chapter 7: Scheduling Strategies Olav Beckmann Huxley 449 http://www.doc.ic.ac.uk/~ob3 Acknowledgements: There are lots. See end of Chapter 1. Home Page for the course: http://www.doc.ic.ac.uk/~ob3/teaching/operatingsystemsconcepts/

More information

Scheduling. Scheduling. Scheduling levels. Decision to switch the running process can take place under the following circumstances:

Scheduling. Scheduling. Scheduling levels. Decision to switch the running process can take place under the following circumstances: Scheduling Scheduling Scheduling levels Long-term scheduling. Selects which jobs shall be allowed to enter the system. Only used in batch systems. Medium-term scheduling. Performs swapin-swapout operations

More information

Attaining EDF Task Scheduling with O(1) Time Complexity

Attaining EDF Task Scheduling with O(1) Time Complexity Attaining EDF Task Scheduling with O(1) Time Complexity Verber Domen University of Maribor, Faculty of Electrical Engineering and Computer Sciences, Maribor, Slovenia (e-mail: domen.verber@uni-mb.si) Abstract:

More information

On Admission Control Policy for Multi-tasking Live-chat Service Agents Research-in-progress Paper

On Admission Control Policy for Multi-tasking Live-chat Service Agents Research-in-progress Paper On Admission Control Policy for Multi-tasking Live-chat Service Agents Research-in-progress Paper Paulo Goes Dept. of Management Information Systems Eller College of Management, The University of Arizona,

More information

Database Replication with Oracle 11g and MS SQL Server 2008

Database Replication with Oracle 11g and MS SQL Server 2008 Database Replication with Oracle 11g and MS SQL Server 2008 Flavio Bolfing Software and Systems University of Applied Sciences Chur, Switzerland www.hsr.ch/mse Abstract Database replication is used widely

More information

Today s topic: So far, we have talked about. Question. Task models

Today s topic: So far, we have talked about. Question. Task models Overall Stucture of Real Time Systems So far, we have talked about Task...... Task n Programming Languages to implement the Tasks Run-TIme/Operating Systems to run the Tasks RTOS/Run-Time System Hardware

More information

Modular Real-Time Linux

Modular Real-Time Linux Modular Real-Time Linux Shinpei Kato Department of Information and Computer Science, Keio University 3-14-1 Hiyoshi, Kohoku, Yokohama, Japan shinpei@ny.ics.keio.ac.jp Nobuyuki Yamasaki Department of Information

More information

OS OBJECTIVE QUESTIONS

OS OBJECTIVE QUESTIONS OS OBJECTIVE QUESTIONS Which one of the following is Little s formula Where n is the average queue length, W is the time that a process waits 1)n=Lambda*W 2)n=Lambda/W 3)n=Lambda^W 4)n=Lambda*(W-n) Answer:1

More information

Master thesis. Title: Real-Time Scheduling methods for High Performance Signal Processing Applications on Multicore platform

Master thesis. Title: Real-Time Scheduling methods for High Performance Signal Processing Applications on Multicore platform Master thesis School of Information Science, Computer and Electrical Engineering Master report, IDE 1260, August 2012 Subject: Master s Thesis in Embedded and Intelligent Systems Title: Real-Time Scheduling

More information

Operating System Resource Management. Burton Smith Technical Fellow Microsoft Corporation

Operating System Resource Management. Burton Smith Technical Fellow Microsoft Corporation Operating System Resource Management Burton Smith Technical Fellow Microsoft Corporation Background Resource Management (RM) is a primary operating system responsibility It lets competing applications

More information

Hierarchical Real-Time Scheduling and Synchronization

Hierarchical Real-Time Scheduling and Synchronization Mälardalen University Press Licentiate Thesis No.94 Hierarchical Real-Time Scheduling and Synchronization Moris Behnam October 2008 School of Innovation, Design and Engineering Mälardalen University Västerås,

More information