Operating System Aspects
Chapter 2: Basics
Chapter 3: Multimedia Systems - Communication Aspects and Services
  - Multimedia Applications and Communication
  - Multimedia Transfer and Control Protocols
  - Quality of Service and Resource Management
  - Synchronization
  - Multimedia Operating Systems
Chapter 4: Multimedia Systems - Storage Aspects
Chapter 5: Multimedia Usage

3.5: Multimedia Operating Systems
- Resource Management
- Process Management
- Scheduling Strategies
- Prototype System

For multimedia applications, not only the transfer of data has to be considered; the processing of the data on the sender and receiver hosts also has to be fast.

Operating System Aspects for Multimedia Processing
Most conventional operating systems offer little or no support for in-time processing of continuous media. This concerns all functions of an operating system, such as process, memory, file, and device management.
1. Resource Management
   How can all operating system functions be coordinated in order to achieve an end-to-end Quality of Service (delay, capacity, loss rate, jitter, ...)?
   - An abstract continuous resource model
   - A common resource management procedure
2. Process Management
   How can processes be scheduled such that each terminates according to its deadline?
   - Rate Monotonic Scheduling
   - Earliest Deadline First

Resource Management Tasks
1. Admission control
   If a new data stream wants to start: is there enough remaining capacity to handle the additional stream?
2. QoS calculation
   Which characteristics (e.g. in terms of throughput and delay) are available for the new stream?
3. Resource reservation
   Reserves the resources which are required to meet the deadlines.
4. QoS enforcement
   Quality of service can (possibly) be obtained by appropriate scheduling, e.g. by reordering tasks (serving a task with an urgent deadline earlier than a task with less strict bounds).

Real-Time Systems
What does "real-time" mean?
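The admission-control and reservation steps above can be sketched in a few lines. The single-capacity model and all names (`capacity`, the per-stream `rate`) are invented for illustration; real resource managers track several resources at once.

```python
def admit(active_streams, capacity, new_stream):
    """Admission control sketch: accept a new stream only if the
    already reserved rates plus the new request fit into the capacity.
    On success the required rate is reserved (resource reservation)."""
    reserved = sum(s["rate"] for s in active_streams)
    if reserved + new_stream["rate"] <= capacity:
        active_streams.append(new_stream)   # reserve the resources
        return True                         # stream admitted
    return False                            # not enough remaining capacity
```

A stream that would exceed the remaining capacity is rejected rather than degrading the guarantees of the already admitted streams.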
A real-time task is a process which delivers its result within a given time or according to a given deadline. A deadline lies, for example, in the order of milliseconds for interactive voice and video data, or in the order of days for text documents.

What does the term "deadline" mean?
A deadline represents the latest acceptable time for finishing the processing of a task.
- Deadlines are called hard if missing them is mission-critical or threatens human beings.
- Deadlines are called soft if they cannot be determined exactly or if a violation is less critical.

Fields of application
Control systems for manufacturing processes, military systems, telecommunication systems, aircraft, automobiles, nuclear power plants, and interactive multimedia systems.
Classification of Real-Time Systems
Real-time systems are divided into soft and hard real-time systems; hard real-time systems are further classified:
- Soft real-time systems
- Hard real-time systems
  - High availability (e.g. telephone switching): down-time is minimal
  - High integrity (e.g. on-line banking): consistency of the data must survive any system failure and any malicious attempt to alter the data
  - Fail safe (e.g. railway signaling): the probability of detecting any failure is close to 1; the system can be stopped in case of a violation
  - Fail operational (e.g. flight control): minimal service even in case of failure; the system cannot be stopped

Processing Requirements
- Predictable, fast response to time-critical events
- Accurate timing information
- High degree of schedulability, i.e. resource capacity is not wasted (nearly optimal scheduling; finding optimal schedules is often an NP-complete task)
- Stability under transient overload, i.e. use of buffering to cope with bursty systems

Aspects Specific to Multimedia Systems
- In-time processing, transmission, and presentation of audio and video data
- Requirements are described as QoS parameters: throughput, local delay, global (i.e. end-to-end) delay, jitter, and reliability, specified by average values, worst-case values, peak rates, distribution functions, and/or moments of the distribution functions
- Resource management and process management for dealing with deadlines

Resource Management
Classification of Resources
(Figure: "killer applications" - network file access, high-quality audio, interactive video - plotted against the hardware resources available in 1980, 1990, and 2000; depending on the year, the resources required by an application are insufficient, scarce, or abundant.)
- Active vs. passive resources (depending on their autonomous processing capabilities)
  - Active resources: CPU, network interface card, etc.
  - Passive resources: file system, main memory, etc.
- Shared vs. exclusive resource usage
  - Active resources are usually allocated exclusively, whereas passive ones can be shared by multiple tasks
- Single vs. multiple resource occurrences
  - A normal workstation for a human user usually contains only a single CPU, whereas many servers contain two or more CPUs
- Even sophisticated compression techniques cannot compensate for resource bottlenecks
- In current (interactive) multimedia systems, capacity is necessary for audio and video transmission as well as processing power
Resource Management Procedure
(Figure: a new task's request passes through the resource manager before it is handed to the dispatcher, which assigns the resources such as CPU and I/O.)
1. Request by a new task
2. Schedulability test
3. QoS calculation
4. Reservation of the resources (CPU, I/O, ...)
5. Calculate schedule
6. Add task to queue
7. Schedule task
8. Assign resources (dispatcher)
Steps 1 to 5: preparation of the task processing; steps 6 to 8: task processing.

Reservation Strategies
Optimistic strategy
- Principle: detect and resolve conflicts
- Basis for the schedulability test: average load
- Resource utilization: potentially high; overbooking possible; no guarantee
- Airline example: Northwest Airlines: risky, lots of overbooking; conflicts are resolved by finding customers who leave the aircraft (compensated in cash or, better, with a voucher)
Pessimistic strategy
- Principle: avoid conflicts
- Basis for the schedulability test: maximum (peak-rate) load
- Resource utilization: no overbooking; guarantee
- Airline example: Lufthansa: a very cautious airline (no overbooking despite no-shows); low actual load, high prices

Abstract Continuous Media Modeling: Workload
Data streams consist of periodically arriving Logical Data Units (called messages) and are described by the Linear Bounded Arrival Process (LBAP) model.
A data stream is a triple (M, R, B), where
- M is the maximal message size,
- R is the maximal message rate (i.e. the number of messages per time unit),
- B is the maximal burstiness or allowed workahead.
The model is called a linear bounded arrival process because it assumes that the number of message arrivals N in a given time interval of length t is bounded by
  N(t) <= R * t + B
(at most R message arrivals per time unit, plus a burst of at most B messages).

Abstract Continuous Media Modeling: Workahead
(Figure: the workahead w(t) of a sample arrival sequence a_1, ..., a_4 and the corresponding logical arrival times l(m_1), ..., l(m_4).)
The workahead w(t) of an LBAP at time t describes how many messages have arrived that are not yet processed. It is defined by
  w(t) = max{0, N([t_0, t]) - R * (t - t_0)}
The logical arrival time l(m_i) of message m_i is the time at which the message is effectively being scheduled. The scheduling is done FIFO (First In, First Out).
The logical arrival time is then defined by
  l(m_i) = a_i + w(a_i) / R   (= actual arrival time + delay due to workahead)
  l(m_i+1) = max{a_i+1, l(m_i) + 1/R}
where a_i is the actual arrival time of message m_i.
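The recurrence above can be evaluated directly. This is a minimal sketch assuming the first message finds an empty stream (workahead 0, so l(m_0) = a_0):

```python
def logical_arrival_times(arrivals, R):
    """Compute LBAP logical arrival times from actual arrival times
    using l(m_i+1) = max(a_i+1, l(m_i) + 1/R).
    Assumes the first message finds an empty stream (workahead 0)."""
    l = [arrivals[0]]                    # l(m_0) = a_0
    for a in arrivals[1:]:
        l.append(max(a, l[-1] + 1.0 / R))
    return l
```

A burst of early arrivals is smoothed out: each message is logically spaced at least 1/R after its predecessor, which is exactly the workahead delay of the formula above.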
Abstract Continuous Media Modeling: Resources
The logical delay d(m) of a message m between two interfaces I_1 and I_2 is defined by
  d(m) = l_2(m) - l_1(m)
The buffer requirement of a resource for a given data stream is defined by
  buf = B + R * (D - U)
with
- B = number of messages which arrive unexpectedly due to burstiness,
- D = maximum logical delay between input and output interfaces,
- U = minimum (unbuffered) actual delay between the same interfaces,
- R * (D - U) = number of messages which may build up due to the variation of processing times.

Process Management
Process management deals with the assignment of the CPU to processes/tasks. A process may be in one of five basic states:
- initial: it is created, but not in the schedule; the process is idle
- ready: it is waiting for CPU assignment
- running: it is running on the CPU
- waiting: it is waiting for an external event
- finished
A scheduler chooses the next process to become running according to a given schedule. The schedule determines the order of CPU assignment to processes.
Goals of traditional scheduling: optimal throughput, optimal resource utilization, fair queuing.
Goals of real-time scheduling: execute the maximum number of processes in time, i.e. according to their deadlines; minimize deadline violations.

Classification of Real-Time Scheduling Strategies
Scheduling strategies can be distinguished by...
- static vs. dynamic schedule calculation (static = calculation of the schedule in advance; dynamic = re-calculation whenever a new task arrives)
- central vs. distributed schedule calculation
- preemptive vs. non-preemptive task processing (preemptive = a task may be interrupted by any task with higher priority)
They schedule...
- tasks with periodic or aperiodic processing requirements
- independent tasks or tasks with precedence constraints
They are applied to...
- uniprocessor systems
- multiprocessor systems (neglecting communication delay)
- multicomputer systems (taking communication delay into account)

Schedulability Tests and Optimal Schedulers
The test to determine whether a schedule exists for a given task set is called a schedulability test. There are three kinds of tests: sufficient, exact, and necessary ones.
- Sufficient test: if the test is positive, the task set is schedulable. A negative result is possible even if the task set is schedulable (a cautious test).
- Necessary test: if the test is negative, the task set is not schedulable. A positive result does not guarantee the schedulability of the task set (an optimistic test).
- Exact test: it returns a positive result exactly if the task set is schedulable. Most exact schedulability tests belong to the class of NP-complete problems.
A scheduler is called optimal if it always finds a schedule for task sets satisfying an exact schedulability test.
Model for Real-Time Tasks
A task is characterized by its timing constraints and its resource requirements. Most tasks in multimedia systems are periodic and have no precedence constraints.
Model for a task's timing constraints: (s_i, e_i, d_i, p_i) with
- s_i: starting point, i.e. ready time of the first period
- e_i: processing time per period p_i
- d_i: deadline within period p_i (relative to the period's ready time)
- p_i: period
If a task set consisting of periodic tasks (T_1, ..., T_n) with T_i = (s_i, e_i, d_i, p_i) is schedulable, then the processor utilization is given by
  U = sum over i = 1..n of e_i / p_i
(where e_i / p_i is the relative processor utilization by task T_i).

Preemptive vs. Non-Preemptive Scheduling
There are task sets that have valid preemptive schedules but no non-preemptive ones. If the cost for preemption is neglected, preemptive scheduling is always better than or equal to non-preemptive scheduling.
(Figure: a high-rated task T_1 and a low-rated task T_2 with deadlines d_a to d_f; the non-preemptive schedule violates a deadline, whereas the preemptive schedule meets all deadlines.)

Rate Monotonic Algorithm
The Rate Monotonic Algorithm (RM) is a static, preemptive algorithm for periodic tasks.
Assumptions
- All time-critical tasks have periodic computing requirements
- Tasks are mutually independent (i.e. no precedence constraints)
- A task's deadline equals its period (d_i = p_i)
- A task's maximum computing time is constant and known a priori
- Context switches are considered timeless, i.e. preemption is assumed to come without cost (at least without time cost)
Principles
- Shortest period -> highest priority (i.e. priorities decrease with increasing period)
- Priorities are recalculated only if a new task is added to the task set or a task is deleted from it (the schedule is calculated only once for a given task set)
RM is optimal among static scheduling algorithms: if a task set is schedulable by any static algorithm, then there also exists a feasible RM schedule.
Principle of operation: high-rated tasks preempt lower-rated tasks.
The performance of RM depends on the arrival pattern; in the worst case (the "critical instant"), every task with higher priority arrives at the same time as a lower-rated task (i.e. maximum disadvantage for the lower-rated tasks).

Rate Monotonic Algorithm - Example
(Figure: a high-rated task T_1 and a low-rated task T_2 with deadlines d_a, d_b = d_1, d_c, d_d = d_2; in the RM schedule, T_1 preempts T_2, and T_2 is resumed after T_1 finishes.)
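Under the stated assumptions (unit-length time slots, deadline = period, synchronous release at t = 0), RM can be simulated slot by slot. This is an illustrative sketch, not an implementation from the course:

```python
def rm_schedule(tasks, horizon):
    """Simulate rate-monotonic scheduling in unit time slots.
    tasks: list of (e, p); deadline = period; all released at t = 0.
    Returns the executed task index per slot (None = idle slot),
    or raises RuntimeError on a deadline miss."""
    order = sorted(range(len(tasks)), key=lambda i: tasks[i][1])  # shortest period first
    remaining = [0] * len(tasks)          # unfinished work of the current job
    timeline = []
    for t in range(horizon):
        for i, (e, p) in enumerate(tasks):
            if t % p == 0:                # a new period starts
                if remaining[i] > 0:      # previous job not finished in time
                    raise RuntimeError("task %d missed its deadline at t=%d" % (i, t))
                remaining[i] = e
        run = next((i for i in order if remaining[i] > 0), None)
        if run is not None:
            remaining[run] -= 1           # highest-priority ready task runs
        timeline.append(run)
    return timeline
```

With the example task set p_1 = 3, p_2 = 4, p_3 = 5 and e_i = 1, the hyperperiod of 60 slots contains 47 busy and 13 idle slots.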
Earliest Deadline First (EDF)
Earliest Deadline First is a dynamic, preemptive algorithm for periodic tasks.
- Principle of operation: earliest deadline -> highest priority
- Priorities are re-calculated each time a task becomes ready (even for an unchanged task set)
- The calculation has a worst-case complexity of O(n^2)

Comparison of EDF and RM
RM schedules require more context switches, i.e. more preemptions, than EDF.
(Figure: the same high-rated and low-rated tasks with deadlines d_a, d_b = d_1, d_c, d_d = d_2 under both algorithms; the RM schedule preempts the low-rated task, the EDF schedule does not.)
The higher number of preemptions under RM has to be weighed against the additional cost of EDF due to the recalculation of schedules.
EDF is better than RM in the following sense: if RM can schedule a task set, then so can EDF, but not vice versa.
(Figure: processor utilization and deadline violations for a task set over the interval 0 to 80; EDF meets all deadlines, whereas rate monotonic misses deadlines d_A and d_C.)

Achievable Processor Utilization with RM
Consider the minimum utilization over all task sets (T_1, ..., T_n) which are schedulable and which fully utilize the processor. A task set fully utilizes a processor if the task set can be scheduled, but becomes infeasible as soon as the processing time of any single task is increased by some epsilon > 0.
Example: given a task set (T_1, T_2, T_3) with period p_i and processing time e_i for each task:
  p_1 = 3, e_1 = 1;  p_2 = 4, e_2 = 1;  p_3 = 5, e_3 = 1
It can be shown that in this case e_3 cannot be increased: within T_3's first period [0, 5), T_1 and T_2 are each released twice, so
  e_3,max = p_3 - 2*e_1 - 2*e_2 = 5 - 2 - 2 = 1
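The same slot-wise simulation idea works for EDF. Note that the task set (e, p) = (1, 3), (1, 4), (2, 5) exceeds e_3,max = 1 and is therefore not RM-schedulable, yet EDF handles it (U = 59/60 <= 1). A sketch under the same assumptions as before:

```python
def edf_schedule(tasks, horizon):
    """Simulate earliest-deadline-first scheduling in unit time slots.
    tasks: list of (e, p); deadline = end of period; all released at t = 0.
    Returns the executed task index per slot (None = idle slot)."""
    remaining = [0] * len(tasks)
    deadline = [0] * len(tasks)
    timeline = []
    for t in range(horizon):
        for i, (e, p) in enumerate(tasks):
            if t % p == 0:                # a new period starts
                if remaining[i] > 0:
                    raise RuntimeError("task %d missed its deadline at t=%d" % (i, t))
                remaining[i], deadline[i] = e, t + p
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        run = min(ready, key=lambda i: deadline[i]) if ready else None
        if run is not None:
            remaining[run] -= 1           # earliest-deadline ready task runs
        timeline.append(run)
    return timeline
```

Over the hyperperiod of 60 slots, 59 slots are busy and only one slot remains idle, matching the utilization 1/3 + 1/4 + 2/5 = 59/60.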
Achievable Processor Utilization with RM: Theorem
A set of n independent and periodic tasks (T_1, ..., T_n) can be scheduled by RM if
  e_1/p_1 + e_2/p_2 + ... + e_n/p_n <= n * (2^(1/n) - 1)
For n -> infinity, this expression converges to ln 2, approximately 0.693.
As a consequence: ln 2 is a lower bound for the achievable processor utilization if RM is applied. But the bound is 1 (lower and upper bound) if EDF is applied (and deadline = end of period).

Achievable Processor Utilization with EDF
For EDF a much better (in fact perfect) utilization is possible:
  e_1/p_1 + e_2/p_2 + ... + e_n/p_n <= 1
Mixed scheme: suppose we have n periodic tasks T_1, ..., T_n with priorities given according to RM (T_1 = shortest period, T_n = longest period), i.e. highest priority for task T_1.
Compromise:
- Schedule the highest-priority tasks T_1, ..., T_k with RM (1 <= k <= n)
- Schedule T_k+1, T_k+2, ..., T_n with EDF whenever the processor is not occupied by T_1, ..., T_k

Achievable Processor Utilization with RM and EDF
(Figure: schedules for task 1 (p_1 = 3, e_1 = 1), task 2 (p_2 = 4, e_2 = 1), and task 3 (p_3 = 5) over the hyperperiod of 60 time slots.)
- RM with e_3 = 1: occupation 1/3 + 1/4 + 1/5 = 47/60, i.e. 13 out of 60 slots remain empty.
- Mixed strategy (task 1 with RM, tasks 2 and 3 with EDF), task 3 now with e_3 = 2: occupation 1/3 + 1/4 + 2/5 = 59/60, i.e. only 1 out of 60 slots remains empty.

Deadline Monotonic Algorithm
Shortest deadline first is a static algorithm which gives higher priority to a task with a shorter deadline, i.e. T_1 > T_2 if d_1 < d_2.
- If deadline = period, it coincides with the RM algorithm.
- This algorithm is similar, but not identical, to EDF.
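The RM bound is a sufficient (cautious) test, while for EDF with deadline = end of period the utilization condition is exact. A small sketch makes the difference visible; the example set (1,3), (1,4), (1,5) is RM-schedulable although it fails the sufficient test:

```python
def rm_sufficient_test(tasks):
    """Sufficient RM test: U <= n * (2^(1/n) - 1).
    A negative answer does not prove infeasibility (cautious test)."""
    n = len(tasks)
    U = sum(e / p for e, p in tasks)
    return U <= n * (2 ** (1 / n) - 1)

def edf_exact_test(tasks):
    """Exact test for EDF with deadline = end of period: U <= 1."""
    return sum(e / p for e, p in tasks) <= 1
```

For n = 3 the bound is 3 * (2^(1/3) - 1), approximately 0.780, so the utilization 0.783 of the example set fails the sufficient test even though an RM schedule exists.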
Shortest (Remaining) Processing Time
In this strategy, highest priority is given to the task with the
- shortest processing time (if non-preemptive)
- shortest remaining processing time (if preemptive)
Properties:
- This strategy serves the maximum number of customers.
- In an overload situation, and if all tasks have the same deadline, this strategy minimizes the number of deadline violations.

Preemptive vs. Non-Preemptive
Preemptive scheduling is more complicated than non-preemptive, but...
- it increases the feasibility of scheduling (in some cases a preemptive schedule exists but no non-preemptive one)
- it reduces the amount of priority inversion (i.e. situations where lower-priority jobs are executed while higher-priority jobs are waiting)
A task set with processing times e_i and request periods p_i is schedulable if:
  sum of e_i/p_i <= ln 2, approximately 0.693   (if task priorities are assigned fixed)
  sum of e_i/p_i <= 1                           (if priorities may be dynamically adjusted, e.g. by the EDF strategy)
In general, the resulting schedule is a preemptive one.

Schedulability for non-preemptive strategies is more complicated: task T_k can be scheduled (worst-case consideration) if its deadline d_k satisfies
  d_k >= e_k (*) + max over i < k of e_i (**) + sum over j = k+1..n of (floor(d_k/p_j) + 1) * e_j (***)
Here T_n is the highest-priority task and T_1 is the lowest-priority task.
(*) = own execution time
(**) = waiting time due to a job found in service at arrival time (worst case: the maximum execution time among jobs of the lower priority classes)
(***) = execution time of all higher-priority jobs which are present at arrival time or which arrive during the waiting time
(Figure: a job of priority k arrives; the requirement is d_k >= x, where the time span x covers the job found in service, the higher-priority jobs served in between, and the job's own service time.)

Example for Achievable Processor Utilization
Given a task set (T_1, T_2, T_3) with period p_i and processing time e_i for each task:
  p_1 = 3, e_1 = 1;  p_2 = 4, e_2 = 1;  p_3 = 5, e_3,max = 2
where T_1 and T_2 are scheduled with RM and T_3 is scheduled with EDF.
- Utilization (mixed scheme): 1/3 + 1/4 + 2/5 = 0.983, approximately 98%
- Utilization (RM): 1/3 + 1/4 + 1/5 = 0.783, approximately 78%
- Utilization (EDF): 1/3 + 1/4 + 2.083/5 = 1 = 100%, when increasing e_3 to e_3* = 2.083
Conjecture: the mixed scheme is closer to the better side (namely EDF) than to the poor side (namely RM).
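The worst-case non-preemptive condition can be checked directly. The formula is reconstructed from the slide (indexing as stated there: larger index = higher priority), so treat this sketch as illustrative rather than authoritative:

```python
def np_deadline_ok(tasks, k):
    """Worst-case non-preemptive schedulability check for task k.
    tasks: list of (e, p, d); index len-1 = highest priority, 0 = lowest.
    Condition (reconstructed): d_k >= own execution time
      + the largest lower-priority job found in service (blocking)
      + all higher-priority jobs present or arriving while waiting."""
    e_k, p_k, d_k = tasks[k]
    blocking = max((e for e, p, d in tasks[:k]), default=0)
    interference = sum((int(d_k // p) + 1) * e for e, p, d in tasks[k + 1:])
    return d_k >= e_k + blocking + interference
```

For example, with a low-priority task (e=2, p=d=10) and a high-priority task (e=1, p=d=5), both pass; enlarging the execution times to 4 and 3 makes the high-priority task fail, because it may find the long low-priority job already in service.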
Achievable Processor Utilization (here RM)
(Figure: the RM schedule of tasks T_1, T_2, T_3 over several periods.)
Utilization:
  U_actual = e_1/p_1 + e_2/p_2 + e_3/p_3 = 1/3 + 1/4 + 1/5 = 0.783
The actual utilization is slightly better than the theoretical bound:
  U_theorem = 3 * (2^(1/3) - 1) = 0.78

Least Laxity First (LLF)
In this strategy, the task with the shortest remaining laxity is scheduled first.
Definition: the laxity L_k(t) of a task T in period k at time t is the remaining time from t to the deadline d of T in period k that is not necessary for processing T:
  L_k(t) = (s + (k-1)*p + d) - (t + e_rem(t))
         = (deadline in period k) - (actual time + remaining processing time)
(Figure: within period k, the laxity shrinks while the task waits and stays constant while it executes.)
- LLF has no advantage over EDF for uniprocessor systems.
- If tasks have similar laxity values, context switches can occur frequently (calculation overhead compared to EDF).
- LLF may be suitable in multiprocessor systems, i.e. when several resources are scheduled simultaneously.

Least Laxity First - Example for Two Tasks
Example with a minimum CPU time granularity of 2 (if no granularity is assumed, two tasks with equal laxity would preempt each other continuously, i.e. processor sharing would be the result).
(Figure: two tasks T_1 and T_2 scheduled by LLF over the interval 0 to 40; the laxities are recomputed at each scheduling decision, and a waiting task interrupts the running one as soon as its laxity becomes smaller.)
Laxities do not change during the execution of a task, but become smaller while a task is not served. If the laxity of a waiting task becomes smaller than that of the running task, the running task is interrupted.
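The laxity definition translates directly into code; a minimal sketch with the symbols of the formula above:

```python
def laxity(s, p, d, k, t, e_rem):
    """Laxity of a periodic task in period k at time t:
    the time until the period's deadline that is not needed
    for the remaining processing time e_rem."""
    deadline_in_period_k = s + (k - 1) * p + d
    return deadline_in_period_k - (t + e_rem)
```

An LLF scheduler would evaluate this for every ready task at each decision point and pick the task with the smallest value.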
Scheduling Aperiodic (but Independent) Tasks
Multimedia applications comprise both periodic and aperiodic tasks: periodic tasks are used for the transport and processing of continuous media data, whereas control and management tasks are aperiodic. Often it is reasonable to give periodic tasks priority over aperiodic tasks.
Problem: how to schedule a task set which is composed of periodic as well as aperiodic tasks?
Idea: one special periodic task (called the server) polls for aperiodic tasks to be processed.
New problem: aperiodic tasks can only be processed when the server task is scheduled, i.e. they miss their deadline if it is earlier than the next time the server is scheduled.
Solution: bandwidth-preserving algorithms such as
- Priority Exchange
- Deferrable Server
- Sporadic Server
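The "new problem" above can be quantified with a small helper; the simplifying assumption (invented for illustration) is that the server fully serves a request within a single invocation:

```python
import math

def polling_server_finish(server_period, server_start, request_time, service_time):
    """Finishing time of an aperiodic request under a pure polling server:
    the request waits for the next server invocation at or after its
    arrival (assumed to be fully served within that invocation)."""
    k = max(0, math.ceil((request_time - server_start) / server_period))
    next_invocation = server_start + k * server_period
    return next_invocation + service_time
```

With a server period of 10, a request arriving at t = 1 with 2 units of service finishes only at t = 12, so any deadline before the next server invocation (e.g. t = 5) is missed; this is exactly what the bandwidth-preserving algorithms below avoid.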
Bandwidth-Preserving Algorithms: Priority Exchange
One periodic task T_s serves all aperiodic tasks. The server exchanges its priority with a lower-priority periodic task T_i if no aperiodic task is ready. The server receives its initial priority back for the next period.
- Case 1: T_s serves one or more aperiodic tasks and runs in its regular slot in each period.
- Case 2: T_s has no task to serve; it changes priority with T_i and is reactivated at the time T_i would have been scheduled.

Bandwidth-Preserving Algorithms: Deferrable Server
One periodic task (called the deferrable server) serves all aperiodic tasks. It defers its processing time if no aperiodic task is ready, but retains its priority. As soon as an aperiodic task request occurs, the server either (immediately) preempts the running task (if that task has a lower priority), or resumes processing after the current task terminates.

Bandwidth-Preserving Algorithms: Sporadic Server
The sporadic server tries to combine the advantages of the Priority Exchange and the Deferrable Server algorithms. It exchanges its priority when no aperiodic task is ready. Any spare CPU capacity, i.e. time not used by periodic tasks, is transformed into a ticket that is given to the sporadic server, which then replenishes its initial priority. This way, the sporadic server is allowed to use any idle time of the CPU.

Scheduling Tasks with Blocking
The blocking of a task is caused by mutual exclusion when it tries to access a critical section currently occupied by another task.
Priority Inversion Effect: assume a high-priority task T_h wants to enter a critical section currently occupied by a low-priority task T_l. T_h is blocked until T_l leaves the critical section. Until then, not only T_l but also all medium-priority tasks T_m, with priority higher than T_l and lower than T_h, will be processed prior to T_h.
This phenomenon is called priority inversion.
(Figure: T_l enters the critical section with wait(s); T_h arrives and blocks on the section; meanwhile T_m preempts T_l; T_h remains blocked until T_l is finally able to execute signal(s).)
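The scenario in the figure can be reproduced with a toy slot-based simulation; the task names, slot counts, and single-lock model are invented for illustration. With the priority-inheritance rule switched on, T_m can no longer delay T_h:

```python
def simulate(inheritance):
    """Slot-by-slot simulation of the priority-inversion scenario.
    Tl (priority 1) holds the lock for its whole 3-slot execution;
    Th (priority 3) arrives at t=1 and needs the lock; Tm (priority 2)
    arrives at t=1 and runs 3 slots without the lock.
    Returns the completion time of Th."""
    rem = {"Tl": 3, "Tm": 3, "Th": 1}
    arrive = {"Tl": 0, "Tm": 1, "Th": 1}
    prio = {"Tl": 1, "Tm": 2, "Th": 3}
    lock_held = True                      # Tl is already inside its critical section
    t = 0
    while rem["Th"] > 0:
        ready = [x for x in rem if rem[x] > 0 and arrive[x] <= t]
        # Th needs the critical section and blocks while Tl holds it
        runnable = [x for x in ready if not (x == "Th" and lock_held)]
        eff = dict(prio)
        if inheritance and lock_held and "Th" in ready:
            eff["Tl"] = prio["Th"]        # Tl inherits Th's priority
        run = max(runnable, key=lambda x: eff[x])
        rem[run] -= 1
        if run == "Tl" and rem["Tl"] == 0:
            lock_held = False             # Tl leaves the critical section
        t += 1
    return t                              # completion time of Th
```

Without inheritance, Tm preempts Tl and Th completes only at t = 7; with inheritance, Tl finishes its critical section first and Th completes at t = 4.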
Scheduling Tasks with Critical Sections
Several algorithms have been developed to avoid priority inversion, e.g. Priority Inheritance or the Ceiling Protocol.
Priority Inheritance: a low-priority task T_l inherits the priority of a high-priority task T_h if it causes the blocking of T_h. T_l regains its initial priority when leaving the critical section. Thus, ready jobs with priorities between low and high are blocked instead of preempting T_l.
- Disadvantage: lower utilization of the server; possible deadline violations
- Advantage: the sequence of schedules is preserved

More problems with standard priority inheritance:
- Tasks can be blocked in each critical section
- Deadlocks can occur
- Transitive blocking is possible, i.e. T_3 is blocked by T_2, which in turn is blocked by T_1

Attempted solution (Ceiling Protocol):
Each task has two priorities:
- a static one assigned by the scheduler, e.g. according to the RM strategy
- a dynamic one, i.e. the maximum of the static priority and the highest priority inherited
Each semaphore has a ceiling: the maximum priority value of all tasks that actually use it. A semaphore can only be locked by a task with a higher dynamic priority than the ceiling of any currently locked semaphore. (This is comparable to the hierarchy-of-resources approach to deadlock avoidance.)
- Benefit: a task is blocked at most once; no deadlocks and no transitive blocking
- Price: a restrictive locking policy (a brute-force method) with longer blocking delays for other tasks

Prototype Systems - Real-Time Mach
Real-Time Mach (RT Mach) is an experimental distributed operating system for real-time applications, based on ARTS (Advanced Real-Time System); both were developed at Carnegie Mellon University in Pittsburgh, PA, USA.

Task Scheduling in RT Mach
RT Mach supports multi-processor operation:
- it clusters n >= 1 processors into a processor set
- it assigns a separate run queue and scheduling strategy to each processor set
- it enables an application to select the current scheduling strategy (at run-time)
(Figure: the RT Mach kernel associates each processor set with a scheduler, a strategy, and a run queue.)
RT Mach offers three classes of tasks (called RT Threads):
- periodic tasks with hard deadlines
- aperiodic (sporadic) tasks with hard deadlines
- tasks with soft deadlines
Tasks are scheduled by one of the following strategies (dynamically changeable):
- rate monotonic with deferrable server (RM/DS) or sporadic server (RM/SS)
- fixed priority (FP)
- round robin (RR)
Thread dispatching management controls idle threads and the aperiodic server (DS or SS). Processor set management performs context switching, thread preemption, and processor assignment.
(Figure: a generic scheduler, instantiated with RM, RM/DS, FP, or RR, layered above thread dispatching management and processor set management.)
The QoS Ticket Model
The QoS Ticket Model combines resource reservation and adaptation. It has been implemented on RT Mach 3.0, supporting so-called Q-Threads.
Procedure:
1. QoS request by a multimedia session
2. The QoS manager calculates the resource allocation
3. Resource reservation at the RT Mach kernel
4. A QoS ticket is issued to the session
5. Consumption information is fed back to the QoS manager
- QoS tickets allow users to specify tolerance ranges for period and computation time
- A ticket is issued for each session, comprising several threads
- QoS parameters are adapted dynamically based on the current resource consumption

Comparison of RT-Threads and Q-Threads
- Invocation time: for both, a user-defined entry point is called periodically
- Thread attributes: periodic RT-Thread: fixed invocation period; Q-Thread: ranges for period and computation time
- Invocation period: periodic RT-Thread: fixed (can be re-specified); Q-Thread: dynamic, within the range, based on the QoS control policy
- Guarantee of execution: periodic RT-Thread: none (possible with CPU reservation); Q-Thread: guaranteed, within the available computation time
- Purpose: periodic RT-Thread: real-time processing; Q-Thread: continuous-media processing

Conclusions
Multimedia applications need many protocols to work:
- control protocols like H.323 or SIP for session management and coordination
- transfer protocols like RTP/RTCP for data transfer
Transferring multimedia data streams is not trivial:
- Quality of Service and resource management: QoS negotiation, admission control, traffic shaping, scheduling, error control
- synchronization aspects
- operating system aspects for processing multimedia data