An Asymmetric Model for Real-Time and Load Balancing on Linux SMP
LIFL Report #

Philippe Marquet, [email protected]
Julien Soula, [email protected]
Éric Piel, [email protected]
Jean-Luc Dekeyser, [email protected]

Laboratoire d'informatique fondamentale de Lille
Université des sciences et technologies de Lille, France

April 2004

Abstract

We propose ARTiS, a real-time extension of GNU/Linux dedicated to SMP (Symmetric Multi-Processor) systems. ARTiS exploits the SMP architecture to guarantee that a processor can be preempted whenever the system has to schedule a real-time task. The basic idea of ARTiS is to assign a selected set of processors to real-time operations. A migration mechanism for non-preemptible tasks ensures a bounded latency on these real-time processors. Furthermore, specific load-balancing strategies allow ARTiS to benefit from the full power of the SMP system: the real-time reservation, while guaranteed, is not exclusive and does not imply a waste of resources. Simulations of ARTiS performance have been conducted; the observed latency levels support the proposed model. A first implementation of ARTiS, while incomplete, already shows significant improvements over the standard Linux kernel.

1 Linux SMP and Real-Time

Nowadays there is a need for real-time guarantees in general-purpose operating systems. The democratization of soft real-time applications is the trend: everyone wants to play a video while burning a CD. Operating systems such as Linux take this request into account and consider latency issues in addition to the sole fairness of traditional Unix. Nevertheless, several application domains require hard real-time support from the operating system: the application contains tasks that communicate with dedicated hardware under a time-constrained protocol, for example to ensure real-time acquisition.
Those same real-time applications require large amounts of computational power. For example, in the spectrum radio-surveillance applications used to analyze waveform signatures, the communications and the coverage have an increasing need for power with the advent of UMTS (greater bandwidth and more complex algorithms). This work is partially supported by the ITEA project HYADES.
The usage of SMP (Symmetric Multi-Processor) systems to face this computational power need is a well-known and effective solution. It has already been experimented with in the real-time context [1, 2]. The real-time operating system market is full of proprietary solutions. Despite the definition of the standard POSIX interface for real-time applications [10], each vendor comes with a dedicated API. The lack of a major actor in the real-time community results in a segmented market, and we are persuaded that an Open Source real-time operating system may meet with success. Our expectation is then to find an Open Source, full operating system for real-time on SMP platforms:

- an Open Source system, to gain conformance with a standard;
- a full operating system, to allow the cohabitation of real-time and general-purpose tasks in the system;
- running on SMP platforms, to face the intensive computing aspects of the applications.

Four main categories of operating systems compete for the system we are looking for:

- dedicated real-time operating systems (such as VxWorks);
- GNU/Linux, and especially the new 2.6 Linux kernel with its so-called preemptible kernel;
- existing real-time extensions of the Linux kernel;
- operating systems based on the asymmetric multi-processing approach.

Dedicated real-time operating systems are readily available and extensively tested systems that deliver excellent hard real-time performance. Nevertheless, these systems mostly target embedded applications, and the fact, for example, that a system such as VxWorks does not provide full memory protection [6] makes it poorly suited for large applications (despite the announcement of a new memory protection scheme in the upcoming VxWorks 6.0). Furthermore, these systems claim to support SMP architectures but consider them as multiple mono-processor architectures.
One instance of the operating system runs on each processor; the application tasks must use synchronization or communication primitives based on a message-passing interface. This approach complicates programming and deprives the SMP architecture of one of its most interesting features, scalability.

The standard GNU/Linux system is an Open Source operating system, its availability on SMP platforms is now mature [5], and the system has excellent non-real-time performance. Nevertheless, the architecture of the Linux kernel is by construction unable to guarantee any latency, neither at the interrupt level nor at the user level: the Linux kernel is not preemptible, and some of the work associated with latency is deferred to the end of the ongoing system call. Many attempts to improve Linux kernel latencies have been proposed. The embedded Linux vendor MontaVista introduced a rather simple and systematic patch of the Linux kernel [14] to ensure some preemption points in the kernel and, in doing so, to reduce kernel latency. This patch, maintained by Robert Love, has recently been adopted by the mainstream Linux kernel [15], mainly because it also reduces the latencies targeted by multimedia applications. Another, complementary approach is the so-called low-latency patch [20] of Ingo Molnar and Andrew Morton, which adds some fixed preemption points into the kernel. While it reduces kernel latency, maintaining this patch against the constant evolution of the kernel is a heavy job, and the evolution of the worst-case latency across kernel improvements remains an affair of kernel experts. Bernhard Kuhn recently proposed a real-time interrupt patch [11] that introduces a notion of interrupt priority, allowing a reduction of the worst-case latency of the Linux kernel.
Still, the extension of this mechanism to SMP systems is not obvious: a high-priority interrupt execution may be delayed because it shares a lock with a low-priority interrupt execution on another CPU. A well-known solution that claims to add real-time capabilities to the Linux kernel is the so-called co-kernel approach. These Linux extensions consist of a small real-time kernel that
provides the real-time services and that runs the standard Linux kernel as a low-priority task when no real-time task is eligible. Interrupts are rerouted to the Linux kernel by the real-time kernel; this virtualization of the Linux kernel interrupts allows the co-kernel to preempt the Linux kernel when needed. RTLinux [9, 21] and RTAI [7] are two famous systems based on this principle. Although recent versions of RTLinux also target SMP systems [22], RTLinux comes with its Open RTLinux Patent License or with a commercial license and relies on an FSM Labs patent that may prevent its usage and adoption despite its current success. The co-kernel approach suffers from providing a dualistic platform to the developer: real-time processes do not benefit from the services of the Linux kernel, and Linux processes do not benefit from real-time enhancements. This is a major drawback, even if RTLinux supports communication between the real-time processes and the Linux processes through real-time FIFOs [8], or if RTAI provides a unique API to the developer through a kernel module called LXRT, which exports real-time services to Linux processes.

Another approach that exploits the SMP architecture relies on the shielded processors, or asymmetric multiprocessing, principle. On a multiprocessor machine the processors are specialized as real-time or not: real-time processors execute real-time tasks while non-real-time processors execute non-real-time tasks. Concurrent Computer Corporation's RedHawk Linux variant [4, 3] and SGI's REACT/pro, a real-time add-on for IRIX [18], follow this principle. However, since only real-time tasks are allowed to run on shielded CPUs, if those tasks do not consume all the available power, some CPU resources are wasted. In previous work we evaluated the effectiveness of this approach [13].
Our proposition, ARTiS, enhances this basic concept of asymmetric real-time processing by allowing resource sharing between the real-time and non-real-time tasks.

2 ARTiS: Asymmetric Real-Time Scheduler

Our proposition is a contribution to the definition of a real-time Linux extension that targets SMPs. Furthermore, the programming model we promote is based on user-space programming of the real-time tasks: the programmer uses the usual POSIX and/or Linux APIs to define his applications. These tasks are real-time in the sense that they are identified with a high priority and are not perturbed by any non-real-time activity. For these tasks we target a maximum response time below 300µs.

To take advantage of an SMP architecture, an operating system needs to take into account the shared-memory facility, the migration and load-balancing between processors, and the communication patterns between tasks. The complexity of such an operating system makes it look more like a general-purpose operating system than a proprietary real-time operating system (RTOS). An RTOS on SMP machines must implement all these mechanisms and consider how they interfere with the hard real-time constraints. This may explain why RTOSs are almost always dedicated to mono-processors. On the other hand, the Linux kernel is able to efficiently manage SMP platforms, but everybody agrees that the Linux kernel has not been designed as an RTOS. Technically, only soft real-time tasks are supported, via two scheduling policies: FIFO and round-robin. The ARTiS solution keeps both interests by establishing, on the SMP platform, an Asymmetric Real-Time Scheduler in Linux. We want to keep the full Linux facilities for each process and the SMP Linux properties, but we want to improve the real-time behavior too. The core of the ARTiS solution is based on a strong distinction between real-time and non-real-time processors, and also on migrating tasks that attempt to disable preemption on a real-time processor.
To provide this system we propose:

The partition of the processors into two sets, an NRT CPU set (Non-Real-Time) and an RT CPU set (Real-Time). Each one has a particular scheduling policy. The purpose is to ensure the best interrupt latency for particular processes running in the RT CPU set.
Two classes of RT processes. They are all standard RT Linux processes; they differ only in their mapping:

Each RT CPU has exactly one bound RT Linux task called RT0 (a real-time task of highest priority). Each of these tasks has the guarantee that its RT CPU will stay entirely available to it. Only these user tasks are allowed to become non-preemptible on their corresponding RT CPU. This property ensures a latency as low as possible for all the RT0 tasks. The RT0 tasks are the hard real-time tasks of ARTiS.

Each RT CPU can also run other RT Linux tasks, but only in a preemptible state. These tasks are called RT1 (real-time tasks of priority 1 and below). They can use CPU resources efficiently if RT0 does not consume all the CPU time. To keep a low latency for RT0, the RT1 processes are automatically migrated to an NRT CPU by the ARTiS scheduler when they are about to become non-preemptible (when they call preempt_disable() or local_irq_disable()). The RT1 tasks are the soft real-time tasks of ARTiS. They have no firm guarantees, but their requirements are taken into account by a best-effort policy. They are also the main support for the intensive processing parts of the targeted applications.

The other, non-real-time tasks are named Linux tasks in the ARTiS terminology. They are not related to any real-time requirement. They can coexist with real-time tasks and are eligible as long as the real-time tasks do not require the CPU. As with the RT1 tasks, the Linux tasks automatically migrate away from an RT CPU if they try to enter a non-preemptible code section on such a CPU.

The NRT CPUs mainly run Linux tasks. They also run RT1 tasks when these are in a non-preemptible state. To ensure the load-balancing of the system, all these tasks can migrate to an RT CPU, but only in a preemptible state. When an RT1 task runs on an NRT CPU it keeps its high priority above the Linux tasks.

A particular migration mechanism. This migration aims at ensuring a low latency for the RT0 tasks.
All the RT1 and Linux tasks running on an RT CPU are automatically migrated toward an NRT CPU when they try to disable preemption. One of the main changes required from the original Linux load-balancing mechanism is the removal of inter-CPU locks. To effectively migrate the tasks, an NRT CPU and an RT CPU have to communicate via queues. We implement an asymmetric lock-free FIFO, with one reader and one writer, to avoid any active wait by the ARTiS scheduler [19].

An efficient load-balancing policy. It allows benefiting from the full power of the SMP machine. Usually, a load-balancing mechanism aims to move running tasks across the CPUs in order to ensure that no CPU is idle while tasks are waiting to be scheduled on other ones. Our case is more complicated because of the specificities of the ARTiS tasks. The RT0 tasks will never migrate, by definition. The RT1 tasks should migrate to RT CPUs quicker than Linux tasks: the RT CPUs offer latency guarantees that the NRT CPUs do not. To minimize the latency on RT CPUs and to provide the best performance for the global system, particular asymmetric load-balancing algorithms have been defined [17].

Asymmetric communication mechanisms. On SMP machines, tasks exchange data by read/write operations on the shared memory. To ensure coherence, critical sections are needed. These critical sections are protected from simultaneous concurrent access by lock/unlock mechanisms. This communication scheme is not suited to our particular case: an exchange of data between an RT0 task and an RT1 task would involve the migration of the RT1 task before the latter takes the lock, to avoid entering a non-preemptible state on an RT CPU. Therefore, an asymmetric communication pattern should use a lock-free FIFO in a one-reader/one-writer context.

ARTiS supports three different levels of real-time processing: RT0, RT1, and Linux.
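The one-reader/one-writer FIFO mentioned above can be sketched as a lock-free ring buffer: with a single producer and a single consumer, each index is written by exactly one side, so neither CPU ever waits on a lock held by the other. The names and the fixed capacity below are illustrative, not the actual ARTiS implementation (which follows [19]):

```c
#include <stdatomic.h>
#include <stddef.h>

/* Single-producer/single-consumer lock-free FIFO (illustrative sketch).
 * `head` is written only by the reader, `tail` only by the writer. */
#define FIFO_SIZE 16 /* power of two; one slot is kept empty */

struct spsc_fifo {
    void *slot[FIFO_SIZE];
    atomic_uint head; /* next slot to read, advanced by the consumer */
    atomic_uint tail; /* next slot to write, advanced by the producer */
};

/* Producer side (would run on the RT CPU): 0 on success, -1 if full. */
int fifo_push(struct spsc_fifo *f, void *task)
{
    unsigned tail = atomic_load_explicit(&f->tail, memory_order_relaxed);
    unsigned next = (tail + 1) % FIFO_SIZE;
    if (next == atomic_load_explicit(&f->head, memory_order_acquire))
        return -1; /* full: fails immediately, never blocks */
    f->slot[tail] = task;
    /* release: the slot contents become visible before the new tail */
    atomic_store_explicit(&f->tail, next, memory_order_release);
    return 0;
}

/* Consumer side (would run on the NRT CPU): NULL if empty. */
void *fifo_pop(struct spsc_fifo *f)
{
    unsigned head = atomic_load_explicit(&f->head, memory_order_relaxed);
    if (head == atomic_load_explicit(&f->tail, memory_order_acquire))
        return NULL; /* empty */
    void *task = f->slot[head];
    atomic_store_explicit(&f->head, (head + 1) % FIFO_SIZE,
                          memory_order_release);
    return task;
}
```

Because each index has a single writer, a push can at worst fail when the queue is full; the RT CPU never spins on the NRT CPU, which is exactly the property needed to keep the migration path free of unbounded latency.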
RT0 tasks are implemented so as to minimize the jitter due to non-preemptible execution on the same CPU, but these tasks are still user-space Linux tasks. RT1 tasks are soft real-time tasks, but they are able to take advantage of the SMP architecture, in particular for intensive computing. They are also able to trigger asymmetric communications that avoid inappropriate migrations
due to lock/unlock calls. A migration mechanism ensures a good load balance between all the processors. Finally, Linux tasks can run without intruding on the RT CPUs, and they can use the full resources of the SMP machine.

[Figure 1: Example of a task architecture in an ARTiS application.]

3 Application Deployment over ARTiS

A real-time application on an SMP machine exhibits its full meaning when real-time constraints are combined with intensive computing. ARTiS is dedicated to this kind of application. Real-time constraints are satisfied by RT0 tasks. Communications between RT0 and RT1 provide intensive computing with data flow. Linux tasks ensure additional processing. The load-balancing ensures a dynamic mapping of the different tasks on the CPUs.

The implementation of an application on ARTiS requires identifying a specific level of real-time for each task of the algorithm and mapping these tasks on the CPUs of the multiprocessor. To illustrate this we use a real-time manufacturing quality-management application, for example defect identification on a continuous production line, running on a four-CPU SMP machine: three CPUs for the RT set and one for the NRT set. Several tasks may be identified; their communications and mapping are illustrated in Figure 1:

Some video cameras and/or sensors receive data periodically. Up to three RT0 tasks can manage the data acquisition with a latency compatible with real-time. Each of these tasks is assigned to an RT CPU.

Directly connected to those tasks, intensive data processing with regular data structures has to be done for image processing. A static number of RT1 tasks are dedicated to this data-parallel processing (à la OpenMP). They should communicate with RT0 and with other RT1 tasks without inappropriate migration.
They are also mostly bound to an RT CPU, but will migrate to the NRT CPU if they encounter non-preemptible code. They make the most of the SMP facilities.

Then, defect identification has to be achieved with irregular data structures concerning subsets of defective parts. A dynamic number of RT2 tasks are created. They have to communicate with RT1 tasks and with each other. Because the number of tasks is dynamic, the load-balancing policy of ARTiS is very necessary: these tasks will run on the CPUs dynamically chosen by the system to uniformly load the different CPUs.

Finally, some defect fitting can be done with a local database or an external database (accessed via MPI) to produce statistics. This is no longer RT processing, and Linux tasks can take care of it. They are mainly mapped on the NRT CPUs, but they can also use RT CPUs
when idle.

[Figure 2: Mapping of the tasks on the RT and NRT processors.]

Figure 2 shows a possible mapping of those tasks on the two sets of processors, RT and NRT.

4 Performance Evaluation

Before implementing the ARTiS kernel itself, we conducted some experiments in order to evaluate the potential benefits of the approach in terms of interrupt latency. We distinguished two types of latency: one associated with the kernel and the other with the user tasks.

Measurement method

The experiment consists in measuring the elapsed time between the hardware generation of an interrupt and the execution of the code concerning this interrupt. The experimentation protocol was written with the wish to stay as close as possible to the common mechanisms employed by real-time tasks. The measurement task sets up the hardware so that it generates the interrupt at a precisely known time, then gets unscheduled and waits for the interrupt notification. Once the notification is sent, the task is woken up, the current time is saved, and the next measurement starts. This scheme is typical of real-time applications: waiting for a hardware event to happen, processing data according to the new parameters, sending new information, and returning to waiting mode. For one interrupt there are five associated times corresponding to different locations in the executed code (Figure 3): t0, the interrupt programming; t'0, the interrupt emission, chosen at the time the interrupt is launched; t1, the entrance into the interrupt handler specific to this interrupt; t2, the end of the interrupt handler; t3, the entrance into the user-space RT task. We conducted the experiments on a 4-way Itanium II 1.3GHz machine.
It ran several successive instrumented Linux kernel versions. Globally, we noticed improvements across the versions, in particular when kernel preemption was activated; hence we present only the results from the latest tested version. All the measurements are based on the itc timer (a processor register counting cycles), and the interrupt was generated with cycle-accurate precision by the PMU (a debugging unit available in each processor [16]). Even with a high load on the computer, bad cases leading to long latencies are very unusual; thus a large number of measures is necessary. In our case, each test is composed of 300 million measures, making each test about 8 hours long. With such a length, the results are reproducible.

[Figure 3: Chronogram of the tasks involved in the measurement code.]

Interrupt latency types

From the four measurement locations, two interesting values can be calculated. Their interest comes from the ability to associate them with common programming methods, and also from the significant differences across the tested configurations. These two kinds of latency can be described as follows:

The kernel latency, t1 - t'0, is the elapsed time between the interrupt generation and the entrance into the interrupt handler function (pfm_interrupt_handler() in our case). This is the latency a driver would have if it were written as a kernel module following the usual design method.

The user latency, t3 - t'0, is the elapsed time between the interrupt generation and the execution of the associated code in the user-space real-time task. This is the latency a real-time application would have if it were written entirely in user-space. In POSIX systems there are two ways for the application to be notified of the interrupt: either by handling a signal or by returning from a blocking system call. The signal handling gave a minimum user latency of 4µs instead of 2µs, but the maximum values were very similar. The blocking-call method is more efficient, so all the exposed results are based on a blocking system call (a read()).

The real-time tasks designed to run in user-space are programmed using the usual and standard POSIX interface; this is one of the main advantages that ARTiS provides. Therefore, within the ARTiS context, the user latency is the most important one to study and analyze. One may notice that the time t2 was not cited. Actually, t2 - t1 represents the jitter of the interrupt handler execution time.
However, it is fairly stable across the configurations and is always very small compared to the other two intervals (approximately 1µs).

The ideal ARTiS configuration

In order to estimate the performance of the ARTiS system before its full completion, we had to simulate it. So, based on a vanilla Linux kernel, we set up a configuration which is asymmetric in the same way ARTiS is. It consists in manually binding each task to the CPU on which ARTiS would execute it: the measurement task, equivalent to an RT0 process, is run on a specific CPU (an RT CPU); the tasks doing CPU load exclusively in user-space, equivalent to RT1 or NRT tasks that never disable preemption, are run on each CPU; all the other tasks, equivalent to NRT tasks that disable preemption, are run on another specific CPU (an NRT CPU). Additionally, the interrupts which are not used by the measurement task are bound to an NRT CPU. The repartition was pre-selected and non-modifiable during the tests.
This environment is purely static and is equivalent to a snapshot of the ARTiS system with those specific tasks running. Except for a slight modification of the kernel to prevent kernel threads from running on an RT CPU, all the configuration has been done from user-space. From the measurement point of view, this configuration can be considered an ideal ARTiS, because there is no latency overhead due to the migrations or the load-balancing.

Measurement conditions

The measures have been conducted under different conditions. We identified three unrelated parameters which affect the interrupt latencies:

The load. The machine can be either idle (without any load) or highly loaded (all the programs described below are executed concurrently).

The kernel preemption. When activated, this new feature of the 2.6 Linux kernel allows rescheduling tasks even while kernel code is being executed. This configuration of the Linux kernel corresponds to the so-called preemptible Linux kernel.

The CPU partitioning. It can be either symmetric, the standard configuration, or asymmetric, as defined in the ideal ARTiS configuration.

The eight configurations obtained by combining these three parameters were measured during short runs (10 minutes). From the results, four combinations were selected for their particular interest. They can be built by incrementally switching the parameters: a vanilla kernel without and with load, a loaded kernel with preemption activated, and a loaded ideal ARTiS configuration. We restrained the set because idle configurations would not show any particularly long latency, and an asymmetric configuration without kernel preemption would not make sense. In the experiments, the system load consisted in busying the processors with user computation and triggering many different interrupts in order to maximally exercise the inter-locking and preemption mechanisms.
To achieve this goal, four types of programs, corresponding to four loading methods, were used:

Computing load: a task that executes an endless loop without any system call is pinned on each processor, simulating a computational task.

Input/output load: the iodisk program reads and writes continuously on the disk.

Network load: the ionet program floods the network interface by doing ICMP echo/reply.

Locking load: the ioctl program calls the ioctl() function, which takes the big kernel lock.

Observed latencies

Tables 1 and 2 summarize the measures for the different tested configurations. Three values are associated with each kind of latency (kernel and user). Maximum is the highest latency noticed along the 8 hours. The two other columns display the maximum latency of the 99.999% (respectively 99.999999%) best measures; for this experiment, this is equivalent to not counting the 3000 (respectively 3) worst-case latencies.

Although the study of an idle configuration does not bring much information by itself, it gives some comparison points when confronted with the results of the loaded systems. The kernel latencies are nearly unaffected by the load. However, the user latencies are several orders of magnitude bigger. This is the typical problem with Linux, simply because it was not designed with real-time constraints in mind. The kernel preemption does not change the latencies at the kernel level. This was expected, as the modifications focus only on scheduling user tasks faster; nothing is changed to react faster on the kernel side. However, considering user-space latencies, a significant improvement can be noticed in the number of observed high latencies: 99.999% of the latencies are under 457µs instead of 2.829ms. Unfortunately, the maximum values of these user-space latencies are very similar, on the order of 40ms. This enhancement permits soft real-time with better results than the standard kernel, but in no way does it allow hard real-time, for which even one latency over the threshold is unacceptable.
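The "best measures" columns discussed above are order statistics: on a sorted sample, keeping the best fraction and reporting the worst value among them. A minimal sketch of that selection (the helper name is illustrative, not part of the actual measurement tooling) could be:

```c
#include <stdlib.h>

/* qsort comparator for 64-bit latency values. */
static int cmp_u64(const void *a, const void *b)
{
    unsigned long long x = *(const unsigned long long *)a;
    unsigned long long y = *(const unsigned long long *)b;
    return (x > y) - (x < y);
}

/* Maximum latency among the `quantile` best samples, e.g. quantile
 * 0.99999 on 300 million samples discards the 3000 worst ones.
 * Sorts `lat` in place. */
unsigned long long latency_quantile(unsigned long long *lat,
                                    size_t n, double quantile)
{
    qsort(lat, n, sizeof *lat, cmp_u64);
    size_t kept = (size_t)(quantile * n);
    if (kept == 0)
        kept = 1;
    return lat[kept - 1]; /* worst latency among the kept samples */
}
```

With quantile set to 1.0 this degenerates to the plain maximum, which is the third column of the tables below.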
Table 1: Kernel latencies of the different configurations.

    Configuration                           99.999%    99.999999%    Maximum
    standard Linux, idle                    78µs       94µs          94µs
    standard Linux, loaded                  77µs       98µs          101µs
    Linux with kernel preemption, loaded    76µs       98µs          101µs
    ideal ARTiS, loaded                     3µs        8µs           9µs

Table 2: User latencies of the different configurations.

    Configuration                           99.999%    99.999999%    Maximum
    standard Linux, idle                    82µs       174µs         220µs
    standard Linux, loaded                  2.829ms    41ms          42ms
    Linux with kernel preemption, loaded    457µs      29ms          47ms
    ideal ARTiS, loaded                     8µs        27µs          28µs

Eventually, in the simulated ARTiS environment, both kinds of latency are very significantly lowered. It can be surprising that the kernel latencies are even better than with the idle configuration. This is due to the fact that all the interrupts are redirected to another processor, so much less code that disables interrupts is executed. Concerning the user latencies, the improvement is even better (dropping from a maximum of around 40ms to about 30µs). With the 300µs limit we have fixed, the system can be considered a hard real-time one, ensuring real-time applications a very low interrupt response time. The standard Linux kernel, with more than 3000 response times longer than 2ms, cannot be considered for real-time. The gain from kernel preemption may permit soft real-time. However, only the simulation of ARTiS has a small variance between latencies and can ensure latencies always under 300µs. It is the only configuration on which a hard real-time system can be based.

5 ARTiS Current Implementation

A basic ARTiS API has been defined. It allows deploying applications on the current implementation of the ARTiS model, defined as a modification of the 2.6 Linux kernel. A user defines an ARTiS application with a configuration of the CPUs and an identification of the real-time tasks and their processor affinity, via a basic /proc interface and some functions (sched_setscheduler()...).
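For example, identifying a user-space task as real-time and binding it to a given CPU relies only on standard POSIX/Linux calls; the CPU number and priority below are illustrative, and the ARTiS-specific /proc configuration that defines the RT CPU set is not shown:

```c
#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling task to a single CPU (an RT CPU, in ARTiS terms).
 * Returns 0 on success. */
int bind_to_cpu(int cpu)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(cpu, &mask);
    return sched_setaffinity(0, sizeof mask, &mask);
}

/* Give the calling task a FIFO real-time priority, as an RT0 or RT1
 * task would have.  Needs root privileges; returns 0 on success. */
int set_fifo_priority(int priority)
{
    struct sched_param param = { .sched_priority = priority };
    return sched_setscheduler(0, SCHED_FIFO, &param);
}

/* An RT0 task would combine both, for instance:
 *     bind_to_cpu(1);          // an RT CPU in a hypothetical layout
 *     set_fifo_priority(99);   // highest SCHED_FIFO priority
 * ARTiS then guarantees the latency on that CPU; the calls themselves
 * are plain Linux API, which is the point of the user-space model. */
```

Nothing here is ARTiS-specific: the same program runs unchanged on a stock kernel, only without the latency guarantee.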
The current implementation mainly consists in the migration mechanism that ensures only preemptible code is executed on an RT CPU. This migration relies on a task FIFO implemented with lock-free access [19]: one writer on the RT CPU and one reader on the NRT CPU. Although this first implementation is not yet complete, especially with regard to the load-balancing algorithms and the inter-process communication mechanisms, the first performance measurements show good results. The point is that the ARTiS implementation adds only a limited amount of non-preemptible code, and this code is independent from the other CPUs: for instance, it does not share a lock with another CPU, thus not introducing any unbounded latency. We have measured the maximum latency penalty of the ARTiS implementation over the ideal ARTiS configuration to be on the order of 10µs. Consequently, this qualifies the ARTiS system as a hard real-time operating system. Although not all the ARTiS mechanisms are implemented yet, the system is already usable. It effectively guarantees the possible preemption on an RT CPU, except when this CPU runs the
Linux load-balancing. The main evolution of this implementation concerns the load-balancing, but we are also investigating the behavior of the kernel threads and their influence on the latencies. Obviously, some of them may migrate or be bound to an NRT CPU (such as kjournald, which deals with journaling file systems). Others may have to be replaced, as is the case for the per-CPU load-balancing thread.

6 Conclusion

We have identified applications that may benefit from an Open Source real-time system that is able to exploit the full power of multiprocessors and that lets real-time tasks cohabit with other, general-purpose tasks. We have proposed ARTiS, a system based on a partition of the multiprocessor CPUs between RT processors, where tasks are protected from jitter on the expected latencies, and NRT processors, where all the code that may lead to jitter is executed. This partition does not exclude load-balancing of the tasks over the whole machine; it only implies that some tasks are automatically migrated when they are about to become non-preemptible. The ARTiS model has been evaluated on a 4-way IA-64, and a maximum user latency as low as 30µs can be guaranteed (against latencies in the 40ms range for the standard 2.6 Linux kernel, an improvement factor of 1000). ARTiS is also currently implemented as a modification of the Linux kernel. Despite not yet being complete, the implementation is already usable. The ARTiS patches against the Linux 2.6 kernel are available for Intel i386 and IA-64 architectures from the ARTiS web page [12].

References

[1] Gregory E. Allen and Brian L. Evans. Real-time sonar beamforming on workstations using process networks and POSIX threads. IEEE Transactions on Signal Processing, March.

[2] Gregory Eugene Allen. Real-time sonar beamforming on a symmetric multiprocessing UNIX workstation using process networks and POSIX pthreads. M.Sc. thesis, The University of Texas at Austin, Austin, TX, August.

[3] Stephen Brosky.
Symmetric multiprocessing and real-time in PowerMAX OS. White paper, Concurrent Computer Corporation, Fort Lauderdale, FL.

[4] Steve Brosky and Steve Rotolo. Shielded processors: Guaranteeing sub-millisecond response in standard Linux. In Workshop on Parallel and Distributed Real-Time Systems, WPDRTS'03, Nice, France, April.

[5] Ray Bryant, Bill Hartner, Qi He, and Ganesh Venkitachalam. SMP scalability comparisons of Linux kernels. In 4th Annual Linux Showcase and Conference, Atlanta, GA, October.

[6] Paul Chen. Implementing basic memory protection in VxWorks: A best practices guide. White paper, Wind River, Alameda, CA.

[7] Pierre Cloutier, Paolo Mantegazza, Steve Papacharalambous, Ian Soanes, Stuart Hughes, and Karim Yaghmour. DIAPM-RTAI position paper. In Second Real Time Linux Workshop, Orlando, FL, November.

[8] Cort Dougan and Matt Sherer. RTLinux POSIX API for IO on real-time FIFOs and shared memory. White paper, Finite State Machine Labs Inc.

[9] Finite State Machine Labs Inc. RealTime Linux (RTLinux).
11 [10] Bill Gallmeister. POSIX.4 Programming for the Real World. O Reilly & Associates [11] Bernhard Kuhn. Interrupt priorisation in the Linux kernel January t-online.de/home/bernhard_kuhn/rtirq/ /rtirq.html. [12] Laboratoire d informatique fondamentale de Lille Université des sciences et technologies de Lille. ARTiS home page. [13] Momtchil Momtchev and Philippe Marquet. An asymmetric real-time scheduling for Linux. In Tenth International Workshop on Parallel and Distributed Real-Time Systems Fort Lauderdale FL April [14] Kevin Morgan. Linux for real-time systems: Strategies and solutions. White paper MontaVista Software Inc [15] Kevin Morgan. Preemptible Linux: A reality check. White paper MontaVista Software Inc [16] David Mosberger and Stéphane Eranian. IA-64 Linux Kernel : Design and Implementation. Prentice-Hall [17] Éric Piel Philippe Marquet Julien Soula and Jean-Luc Dekeyser. Load-balancing for a real-time system based on asymmetric multi-processing. Research Report Laboratoire d informatique fondamentale de Lille Université des sciences et technologies de Lille France April [18] Sillicon Graphics Inc. REACT: Real-time in IRIX. Technical report Sillicon Graphics Inc. Mountain View CA [19] John D. Valois. Implementing lock-free queues. In Proceedings of the Seventh International Conference on Parallel and Distributed Computing Systems Las Vegas NV October [20] Clark Williams. Linux scheduler latency. LinuxDevices.com March linuxdevices.com/articles/at html. [21] Victor Yodaiken. The RTLinux manifesto. In Proc. of the 5th Linux Expo Raleigh NC March [22] Victor Yodaiken. RTLinux beyond version 3. In Third Real-Time Linux Workshop Milano Italy November