Multiprocessor and Real-Time Scheduling


Multiprocessor and Real-Time Scheduling Institute for Computing and Information Sciences Radboud University Nijmegen, The Netherlands julien@cs.ru.nl June 15, 2008

Part I Multiprocessor Scheduling

Multiprocessor Systems Loosely coupled multiprocessor, or cluster: autonomous systems, each processor has its own main memory and I/O channels. Functionally specialized processors: e.g. an I/O processor; slaves used by a master (e.g. a general-purpose CPU). Tightly coupled multiprocessing: processors share a common main memory and are under the control of a single operating system. Chapter 10.1 deals with the last category of systems.

Synchronization Granularity Fine: parallelism inherent in a single instruction stream (sync. interval < 20 instructions). Medium: parallel processing or multitasking within a single application (sync. interval 20-200 instructions). Coarse: multiprocessing of concurrent processes in a multiprogramming environment (sync. interval 200-2000 instructions). Very coarse: distributed processing across network nodes to form a single computing environment (sync. interval 2000-1M instructions). Independent: multiple unrelated processes (see previous lecture). Granularity is an important parameter when designing selection functions. We mainly consider (coarse to) medium granularity.

Basic Problem Given a number of threads (or processes) and a number of CPUs, assign threads to CPUs. Same issues as for uniprocessor scheduling: response time, fairness, starvation, overhead,... New issues: ready queue implementation, load balancing, processor affinity.

Single Shared Ready Queue A global queue; each CPU picks one process when it becomes ready. Pros: the queue can be reorganized (e.g. by priorities,... see previous lecture), and the load is evenly distributed. Cons: synchronization (mutual exclusion on queue accesses) and overhead (caching, context switches,...).
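As a concrete illustration of the shared-queue approach and its synchronization cost, here is a minimal sketch (not from the lecture; the class and method names are illustrative): every CPU must acquire the same lock before enqueueing or picking work.

```python
# Minimal sketch of a single shared ready queue (illustrative, not lecture code).
# Every CPU/worker takes the same lock before touching the queue, which is the
# synchronization cost mentioned above.
from collections import deque
from threading import Lock

class GlobalReadyQueue:
    def __init__(self):
        self._queue = deque()   # ready threads; FIFO here, could be priority-ordered
        self._lock = Lock()     # mutual exclusion shared by all CPUs

    def enqueue(self, task):
        with self._lock:
            self._queue.append(task)

    def pick_next(self):
        # Called by any CPU when it becomes idle.
        with self._lock:
            return self._queue.popleft() if self._queue else None
```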

Per-CPU Ready Queue One queue per CPU. Pros: simple, no synchronization needed, strong affinity. Cons: where to put new threads? Load balancing.

Load Balancing Try to keep processors as busy as possible. Global approaches: push model, where a kernel daemon checks queue lengths periodically and moves threads to balance the load; pull model, where a CPU notices its queue is empty and steals threads from other queues; or do both. Approaches covered next: load sharing, gang scheduling, dedicated processor assignment, dynamic scheduling.
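A hedged sketch of the pull model described above (the class and its methods are illustrative, not from the slides): an idle CPU picks the most loaded per-CPU queue and steals work from its tail.

```python
# Illustrative pull-model (work-stealing) sketch over per-CPU queues.
from collections import deque
from threading import Lock

class PerCpuQueues:
    def __init__(self, num_cpus):
        self.queues = [deque() for _ in range(num_cpus)]
        self.locks = [Lock() for _ in range(num_cpus)]

    def push(self, cpu, task):
        with self.locks[cpu]:
            self.queues[cpu].append(task)

    def steal_for(self, idle_cpu):
        # Pull model: the idle CPU looks for the most loaded queue (the length
        # check is an unlocked heuristic) and steals from its tail, leaving the
        # cache-warm work at the head for the owner CPU.
        victim = max(range(len(self.queues)), key=lambda c: len(self.queues[c]))
        if victim == idle_cpu or not self.queues[victim]:
            return None
        with self.locks[victim]:
            return self.queues[victim].pop() if self.queues[victim] else None
```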

Processor Affinity The state of an executed thread lives in the cache of the CPU it ran on. Repeated execution on the same CPU may reuse the cache; execution on a different CPU requires loading that state into the new CPU's cache. So try to keep thread-CPU pairs constant (one thread bound to one processor). Animation sequence: the thread runs on CPU0 and its previous state is stored in CPU0's cache; re-running it on CPU0 causes no (or fewer) cache misses; moving it to CPU1 means the cache must be loaded again.
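On Linux, affinity can also be set explicitly. The snippet below is a minimal, Linux-only illustration using Python's os.sched_setaffinity to pin the current process to CPU 0; it is not something the slides prescribe.

```python
# Linux-only sketch: pin the calling process to CPU 0 so repeated runs reuse
# that CPU's cache, as described above. os.sched_setaffinity is unavailable
# on non-Linux platforms.
import os

pid = 0  # 0 means "the calling process"
print("allowed CPUs before:", os.sched_getaffinity(pid))
os.sched_setaffinity(pid, {0})          # hard affinity: run only on CPU 0
print("allowed CPUs after: ", os.sched_getaffinity(pid))
```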

Job Scheduling Job = a set of processes (or threads) that work together (to solve some problem or provide some service). Performance depends on the scheduling of the job's components. Two major strategies: space sharing and time sharing.

Why does it matter? Threads in a job are not independent! They synchronize on shared variables and have cause/effect relationships, e.g. the producer/consumer problem: the consumer may be waiting for data from a producer that is not currently running. They also synchronize phases of execution (barriers), so the entire job proceeds at the pace of its slowest thread (see the barrier sketch below).
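A minimal sketch of the barrier point above (thread count and sleep times are illustrative): all worker threads of a job synchronize at a threading.Barrier, so the whole job advances at the pace of its slowest thread.

```python
# All four workers wait at the barrier; the fastest ones sit idle until the
# slowest arrives, so the job as a whole moves at the slowest thread's pace.
import threading
import time

barrier = threading.Barrier(4)

def worker(i, work_time):
    time.sleep(work_time)    # simulate this thread's share of the current phase
    barrier.wait()           # fast threads block here for the slowest one
    print(f"thread {i} enters next phase")

threads = [threading.Thread(target=worker, args=(i, 0.1 * (i + 1))) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```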

Space Sharing Define groups of processors (fixed, variable, or adaptive) and assign one job to one group of processors; ideally one CPU per thread in the job. Pros: low context-switch overhead, strong affinity, all runnable threads execute at the same time. Cons: one partition may have pending threads/jobs while another is idle; hard to deal with dynamically changing job sizes.

Time Sharing Divide each processor's time among several jobs; each CPU may execute threads from different jobs. Key: keep the scheduler aware of jobs. Pros: allows gang scheduling, easier to deal with dynamically changing job sizes. Cons: filling the available CPU slots with runnable jobs is equivalent to the bin-packing problem, so heuristics are used (with bad worst cases); a first-fit sketch appears after the gang-scheduling slides below.

Gang-Scheduling (1) CPUs perform their context switches together. CPUs execute threads from different jobs (time sharing), but each thread of a job is bound to one processor (space sharing), giving strong affinity. Green is bound to CPU2 and CPU3, pink to CPU0 and CPU2. Animation sequence: first execute blue for t_b seconds, which is enough to complete that job; then execute magenta for t_m seconds and put magenta back in the queue; then execute the red job; then the green job, with the pink job blocked by green; finally execute the pink job.

Gang-scheduling (2) CPUs perform their context switches together and execute only the threads of one job at a time; there is no fixed thread/processor assignment, so affinity is weak but CPU usage is strong. Animation sequence: execute blue; execute magenta; in one slice there are not enough CPUs for green; in the next there are enough CPUs for green and pink together.
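The slot-filling problem behind these pictures is the bin-packing-like problem mentioned under Time Sharing. The following is an illustrative first-fit heuristic (a sketch under assumed job sizes, not an OS implementation): every job occupies as many CPUs as it has threads, and all of its threads land in the same time slice.

```python
# First-fit decreasing heuristic for filling gang-scheduling time slices.
def build_schedule(jobs, num_cpus):
    """jobs: dict name -> number of threads. Returns a list of time slices,
    each slice being a dict name -> CPUs used by that job in the slice."""
    slices = []
    for name, threads in sorted(jobs.items(), key=lambda kv: -kv[1]):
        for sl in slices:
            if sum(sl.values()) + threads <= num_cpus:
                sl[name] = threads          # fits into an existing slice
                break
        else:
            slices.append({name: threads})  # open a new time slice
    return slices

# Example with 4 CPUs: blue and magenta need 4 threads each, green and pink 2 each.
print(build_schedule({"blue": 4, "magenta": 4, "green": 2, "pink": 2}, 4))
# -> [{'blue': 4}, {'magenta': 4}, {'green': 2, 'pink': 2}]
```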

Dynamic Scheduling The number of threads in a job can be altered dynamically by the application, and the OS adjusts the load to improve utilization: assign idle processors; a new arrival may be assigned to a processor that is used by a job currently holding more than one processor; otherwise hold the request until a processor becomes available; assign a freed processor to a job in the list that currently has no processor (i.e., to waiting new arrivals).

Part II Real-Time Scheduling

Real-Time Systems (1) Correct execution depends not only on the computation results but also on the time at which the results are available. Events occur in real time; tasks react to and control events that take place in the outside world. This is a dynamic process: tasks must keep up with these events.

Real-Time Systems (2) Examples: control of laboratory experiments, process control in industrial plants, robotics, air traffic control, telecommunications, military command and control systems.

Real-Time Tasks Tasks have deadlines (to start or to finish). Hard vs. soft deadlines: hard real-time tasks must meet their deadlines (space shuttle rendezvous, nuclear power plants,...); soft real-time tasks may miss a deadline without dramatic consequences, and the task may even still be executed after its deadline. Periodic vs. aperiodic: an aperiodic task has a fixed deadline that must (or may) be met; a periodic task runs once per period T, or exactly T time units apart.

Characteristics of Real-Time Operating Systems (1) Determinism: operations are performed at fixed, predetermined times or within predetermined time intervals. Concerned with the maximum delay before interrupt acknowledgment and the capacity to handle all requests within the required time.

Characteristics of Real-Time Operating Systems (2) Responsiveness: the delay, after acknowledgment, to service the interrupt. It includes the time to begin executing the interrupt handler, the time to perform the handler itself, and the effect of interrupt nesting. Response time to external events = determinism + responsiveness.

Characteristics of Real-Time Operating Systems (3) User control: user-specified priorities, user-specified paging (which processes must always reside in main memory), user-specified disk scheduling algorithms, user-specified process rights.

Characteristics of Real-Time Operating Systems (4) Reliability: degradation of performance may have catastrophic consequences (e.g. nuclear meltdowns). Fail-soft operation: fail in such a way as to preserve capability and data. Stability: the deadlines of the most critical tasks are always met.

Features of RTOS (1) Fast process or thread switch. Small size (minimal functionality). Quick response to interrupts. Multitasking with interprocess communication tools such as semaphores, signals, and events.

Features of RTOS (2) Use of special sequential files that can accumulate data at a fast rate. Preemptive scheduling based on priority. Minimization of the intervals during which interrupts are disabled. Ability to delay tasks for a fixed amount of time. Special alarms and timeouts.

Scheduling (1) Round-robin preemptive scheduling: on a request from a real-time process, the RT process is simply added to the run queue to await its next time slice; the resulting scheduling time is unacceptable for real-time applications. Priority-driven, non-preemptive scheduler: the RT process is moved to the head of the run queue but only runs once the current process P1 blocks or completes; this is an issue if P1 has low priority and is slow.

Scheduling (2) Priority-driven, preemptive at preemption points: on a request from a real-time process, the RT process waits for the next preemption point, which may come well before the end of P1. Immediate preemptive: the RT process preempts the currently running process and executes immediately.

Real-Time Scheduling Classes of real-time scheduling algorithms: Static table-driven: static analysis of feasible schedules; the resulting table determines at run time when each task must start. Static priority-driven preemptive: static analysis is used to assign priorities to tasks, which are then handled by a traditional priority-driven preemptive scheduler. Dynamic planning-based: feasibility is determined at run time. Dynamic best effort: no feasibility analysis is done.

Deadline Scheduling The important metric is meeting deadlines (not too early, not too late) rather than raw speed. Information used: ready time, starting time, completion deadline, processing time, resource requirements, priority, subtask structure.

Example Collecting data from sensors A and B. Sensor A requires 10 ms of processing every 20 ms; sensor B requires 25 ms every 50 ms. A scheduling decision is made every 10 ms, based on completion deadlines. Fixed priority, A has priority: A runs for 10 ms, then B runs but is repeatedly interrupted by A, and the deadline of B is missed.

Same example, fixed priority with B having priority: B runs first, and the deadline of A is missed.

Same example, Earliest Deadline First (EDF): A runs first because its deadline comes before B's; B then runs and is interrupted whenever a new instance of A has an earlier deadline; B later completes because it now has the earliest deadline, and the last B and A instances complete as well. All deadlines are met.
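The following simulation is an illustrative sketch of exactly this example (1 ms ticks, function and variable names are not from the lecture): with fixed priority favouring A, B misses a deadline within the 100 ms hyperperiod, while EDF meets every deadline even though total utilization is 1.0.

```python
# Sensor A: C = 10 ms every 20 ms; sensor B: C = 25 ms every 50 ms.
# Compare fixed priority (A high) against EDF; jobs that miss their deadline
# are counted and dropped.
def simulate(policy, horizon=100):
    tasks = {"A": {"C": 10, "T": 20}, "B": {"C": 25, "T": 50}}
    remaining = {}   # (name, release time) -> remaining work in ms
    deadline = {}    # (name, release time) -> absolute deadline
    misses = 0
    for t in range(horizon):
        for name, p in tasks.items():                 # release new jobs
            if t % p["T"] == 0:
                remaining[(name, t)] = p["C"]
                deadline[(name, t)] = t + p["T"]
        for job in [j for j in remaining if deadline[j] <= t]:
            misses += 1                               # deadline passed, drop job
            del remaining[job]
        if not remaining:
            continue
        if policy == "fixed_A":                       # A always beats B
            job = min(remaining, key=lambda j: (j[0] != "A", j[1]))
        else:                                         # EDF: earliest deadline first
            job = min(remaining, key=lambda j: deadline[j])
        remaining[job] -= 1                           # run the chosen job for 1 ms
        if remaining[job] == 0:
            del remaining[job]
    return misses

print("fixed priority (A high):", simulate("fixed_A"), "missed deadline(s)")  # 1
print("EDF:                    ", simulate("EDF"), "missed deadline(s)")      # 0
```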

Rate-Monotonic Scheduling (RMS) Proposed by Liu and Layland in 1973. Use task frequency to assign priority: the highest priority goes to the shortest period, so priority is a monotonic function of the period. Priorities are static. Previously we had T_A = 20 ms, C_A = 10 ms and T_B = 50 ms, C_B = 25 ms, so RMS would give A the higher priority. Issue: C_A/T_A + C_B/T_B = 10/20 + 25/50 = 1, which exceeds the RMS utilization bound for two tasks (about 0.828).
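A tiny sketch of the RMS rule (periods taken from the example above; the variable names are illustrative): sort tasks by period and assign static priorities accordingly.

```python
# RMS priority assignment: the shortest period gets the highest priority.
tasks = {"A": 20, "B": 50}                      # name -> period T in ms
by_period = sorted(tasks, key=tasks.get)        # shortest period first
priorities = {name: rank for rank, name in enumerate(by_period)}  # 0 = highest
print(priorities)                               # {'A': 0, 'B': 1}
```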

Liu and Layland Result One cannot use more than the full processor time: C_1/T_1 + C_2/T_2 + ... + C_n/T_n <= 1. For RMS, the following condition is sufficient for schedulability: C_1/T_1 + C_2/T_2 + ... + C_n/T_n <= n(2^(1/n) - 1). Example (C = processing time, T = period, U = C/T): P1: C = 20, T = 100, U = 0.2; P2: C = 40, T = 150, U = 0.267; P3: C = 100, T = 350, U = 0.286. Bound = 3(2^(1/3) - 1) = 0.780; sum of U_i = 0.753 <= 0.780, so the task set is schedulable under RMS.
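The check above is easy to reproduce; the snippet below is a small sketch (not lecture code) that compares the total utilization of the three example tasks against the Liu and Layland bound.

```python
# Total utilization vs. the Liu & Layland bound n(2^(1/n) - 1).
tasks = [(20, 100), (40, 150), (100, 350)]    # (C_i, T_i) for P1, P2, P3

n = len(tasks)
utilization = sum(C / T for C, T in tasks)    # sum of C_i/T_i, about 0.75
bound = n * (2 ** (1 / n) - 1)                # 3 * (2^(1/3) - 1), about 0.78

print(f"U = {utilization:.3f}, bound = {bound:.3f}")
if utilization <= bound:
    print("Schedulable under RMS (sufficient condition holds).")
else:
    print("Bound exceeded: RMS not guaranteed, exact analysis needed.")
```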

Summary Interrupt-based system design is challenging, and priority assignment is a black art: there is a great body of literature and many negative results. We presented two positive results for important special cases. Rate-Monotonic Scheduling (RMS), Liu & Layland, 1973. Case: periodic tasks, static priority. Rule: priority based on frequency. Not always applicable (an automobile impact sensor would get the LOWEST priority). Earliest Deadline First (EDF), Knuth, Mok et al., '70s. Case: a deadline is specified at each request. Rule: schedule the earliest deadline first. Optimal for one processor; no extension to an optimal N-processor scheme.