Chapter 5: Process Scheduling (Yean Fu Wen, Nov. 24)
Outline: Basic Concepts, Scheduling Criteria, Scheduling Algorithms


Chapter 5: Process Scheduling
Yean Fu Wen (yeanfu@mail.ncyu.edu.tw)
Nov. 24, 2009

Outline
- Basic Concepts
- Scheduling Criteria
- Scheduling Algorithms: FCFS, SJF, Priority, RR, MQ, MFQ
- Thread Scheduling
- Multiple-Processor Scheduling
- Operating System Examples
- Algorithm Evaluation

Basic Concepts
- The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization and make the computer more productive.
- CPU scheduling is the basis of multiprogrammed OSs; almost all computer resources are scheduled before use.
- CPU-I/O Burst Cycle: process execution consists of a cycle of CPU execution (CPU burst) and I/O wait (I/O burst).

Basic Concepts (cont.)
- CPU burst distribution: the measured durations of CPU bursts vary greatly from process to process and from computer to computer.
- The distribution is generally characterized as exponential or hyperexponential.
- This distribution can be important in the selection of a CPU-scheduling algorithm.

CPU Scheduler (Short-Term Scheduler)
- The CPU scheduler selects from the processes in memory that are ready to execute and allocates the CPU to one of them.
- The ready queue can be a FIFO queue, a priority queue, a tree, etc.; the records in the queue are PCBs.
- CPU-scheduling decisions may take place when a process:
  1. switches from the running to the waiting state (e.g., I/O request)
  2. switches from the running to the ready state (e.g., interrupt)
  3. switches from the waiting to the ready state
  4. terminates
- Scheduling only under circumstances 1 and 4 is nonpreemptive (cooperative):
  the process keeps the CPU until it terminates or switches to the waiting state;
  it does not require special hardware (e.g., a timer);
  e.g., Windows 3.x and Mac OS versions prior to Mac OS X.
- All other scheduling is preemptive; unfortunately, preemption introduces some problems, e.g., in data sharing, kernel design, and interrupt handlers.

Dispatcher
- The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; it involves:
  switching context;
  switching to user mode;
  jumping to the proper location in the user program to restart that program.
- Dispatch latency: the time it takes for the dispatcher to stop one process and start another running; it should be as short as possible.

Scheduling Criteria
Criteria for comparing CPU-scheduling algorithms:
- CPU utilization: keep the CPU as busy as possible; in a real system, roughly 40% to 90%.
- Throughput: the number of processes that are completed per time unit.
- Turnaround time: the amount of time to execute a particular process, i.e., the interval from the new state to the terminated state.
- Waiting time: the amount of time a process has been waiting in the ready queue; the scheduling algorithm usually affects only the waiting time.
- Response time: the amount of time from the submission of a request until the first response is produced, not until the whole response is output (that would be the turnaround time); this criterion is more meaningful in an interactive system.

Scheduling Criteria (cont.)
Optimization criteria:
- Max CPU utilization
- Max throughput
- Min turnaround time
- Min waiting time
- Min response time
For interactive systems, it is more important to minimize the variance in the response time than to minimize the average response time.

First-Come, First-Served (FCFS) Scheduling

  Process  Burst Time
  P1       24
  P2       3
  P3       3

- Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

  | P1 | P2 | P3 |
  0    24   27   30

- P1 waits 0 ms, P2 waits 24 ms, P3 waits 27 ms; the average waiting time is (0 + 24 + 27) / 3 = 17 milliseconds.

FCFS Scheduling (cont.)
- Suppose instead that the processes arrive in the order P2, P3, P1:

  | P2 | P3 | P1 |
  0    3    6    30

- P2 waits 0 ms, P3 waits 3 ms, P1 waits 6 ms; the average waiting time is (0 + 3 + 6) / 3 = 3 milliseconds, much better than the previous case.
- Convoy effect: short processes stuck behind a long process (as in the first case) lower CPU and device utilization.
- FCFS is nonpreemptive (the CPU is released only at termination or an I/O wait) and is implemented with a FIFO queue.

Shortest-Job-First (SJF) Scheduling
- Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time; it is really a shortest-next-CPU-burst algorithm.
- If two processes have the same shortest burst, FCFS scheduling is used to break the tie.
- Two schemes:
  nonpreemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst;
  preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF) scheduling.
- SJF is optimal: it gives the minimum average waiting time for a given set of processes.

SJF Scheduling (cont.)
Example of nonpreemptive SJF (all processes arrive at time 0 in the order P1, P2, P3, P4):

  Process  Burst Time
  P1       6
  P2       8
  P3       7
  P4       3

  | P4 | P1 | P3 | P2 |
  0    3    9    16   24

- Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms.
- NOTE: using FCFS scheduling, the average waiting time would be 10.25 ms:

  | P1 | P2 | P3 | P4 |
  0    6    14   21   24

SJF Scheduling (cont.)
Example of preemptive SJF (SRTF):

  Process  Arrival Time  Burst Time
  P1       0.0           8
  P2       1.0           4
  P3       2.0           9
  P4       3.0           5

  | P1 | P2 | P4 | P1 | P3 |
  0    1    5    10   17   26

- Average waiting time = ((10-1) + 0 + (17-2) + (5-3)) / 4 = 6.5 ms.
- NOTE: using nonpreemptive SJF, the average waiting time = (0 + (8-1) + (17-2) + (12-3)) / 4 = 7.75 ms:

  | P1 | P2 | P4 | P3 |
  0    8    12   17   26

Determining the length of the next CPU burst
- We can only estimate the length; this can be done with the lengths of previous CPU bursts, using exponential averaging:
  t_n       = actual length of the nth CPU burst (the most recent information)
  tau_n     = the predicted value for the nth CPU burst (the past history)
  tau_{n+1} = the predicted value for the next CPU burst
  alpha, 0 <= alpha <= 1 (the relative weight of recent vs. past history)
  tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
- tau_0 can be a constant or an overall system average.
- Example: alpha = 1/2, tau_0 = 10 (the default prediction), measured t_0 = 6;
  then tau_1 = (1/2)(6) + (1/2)(10) = 8 is the prediction for the next burst.

SJF Scheduling (cont.)
- The SJF scheduling algorithm is optimal: it gives the minimum average waiting time for a given set of processes.
- Why? Moving a short process before a long one decreases the waiting time of the short process more than it increases the waiting time of the long process.
- But how can we obtain the length of the next CPU burst? It is impossible to know it exactly; however, we can try our best to predict it. We expect the next CPU burst to be similar in length to the previous ones, and pick the process with the shortest predicted burst:
  tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, with alpha in [0,1] controlling the relative weight of recent and past history in the prediction; tau_{n+1} is the predicted value for the next CPU burst, t_n the length of the nth CPU burst, and tau_n the last prediction.

Priority Scheduling
- A priority number (integer) is associated with each process; usually, low numbers represent high priority.
- The number can be defined internally (based on measurable quantities within the system) or externally (based on criteria outside the OS, e.g., importance or user type).
- SJF is a special case of priority scheduling in which the priority is the inverse of the predicted next CPU burst time.
- The CPU is allocated to the process with the highest priority; the scheme can be preemptive or nonpreemptive.
- Problem: starvation. Low-priority processes may never execute (rumor has it that when MIT's IBM 7094 was shut down in 1973, a low-priority process submitted in 1967 had still not run).
- Solution: aging. As time progresses, increase the priority of the waiting process.

Round-Robin (RR) Scheduling
- RR scheduling is designed especially for time-sharing systems.
- It is similar to FCFS, but each process gets only a small unit of CPU time, called the time quantum (time slice), usually 10 to 100 milliseconds.
- After this time has elapsed, the process is preempted and added to the end of the ready queue; the scheduler then picks the next process from the (FIFO) ready queue.
- If there are n processes in the ready queue and the time quantum is q, no process waits more than (n - 1)q time units.
- Example of RR with time quantum = 4:

  Process  Arrival Time  Burst Time
  P1       0.0           24
  P2       0.0           3
  P3       0.0           3

  | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
  0    4    7    10   14   18   22   26   30

- Average waiting time = (6 + 4 + 7) / 3 = 5.67 ms.

Round-Robin Scheduling (cont.)
- Turnaround time varies with the time quantum:
  if q is very large, RR degenerates to FCFS;
  if q is small, it must still be large with respect to the context-switch time, otherwise the overhead is too high.
- With special hardware, an extremely small time quantum is possible; RR then becomes processor sharing, in which each process appears to have its own (slower, virtual) CPU.
- RR typically has a higher average turnaround time than SJF, but better response time.
- Guideline: 80% of the CPU bursts should be shorter than the time quantum.

Multilevel Queue
- The ready queue is partitioned into separate queues, e.g.:
  foreground (for interactive processes);
  background (for batch processes).
- Each queue has its own scheduling algorithm, e.g.:
  foreground: RR;
  background: FCFS.
- Scheduling must also be done among the queues:
  fixed-priority preemptive scheduling (i.e., serve all from foreground, then from background), with the possibility of starvation;
  time slicing: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to foreground in RR and 20% to background in FCFS.

Multilevel Feedback Queue
- A process can move between the various queues:
  demotion: a process that uses too much CPU time is moved to a lower-priority queue;
  upgrading: a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
- Aging can be implemented this way.
- MFQ is the most general CPU-scheduling algorithm; a multilevel-feedback-queue scheduler is defined by the following parameters:
  the number of queues;
  the scheduling algorithm for each queue;
  the method used to determine when to upgrade a process;
  the method used to determine when to demote a process;
  the method used to determine which queue a process will enter when it needs service.

Multilevel Feedback Queue (cont.)
Example of a multilevel feedback queue with three queues:
- Q0: RR with time quantum 8 ms
- Q1: RR with time quantum 16 ms
- Q2: FCFS
Scheduling:
- A new job enters queue Q0, which is served with RR. When it gains the CPU, the job receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1.
- At Q1, the job is again served with RR and receives 16 additional milliseconds; if it still does not complete, it is preempted and moved to queue Q2.

Thread Scheduling (cont.) System contention scope (SCS) competition for CPU takes place among all (kernel) threads in the system Global Scheduling the kernel decides which kernel thread to run next systems using 1 to 1 model (e.g., XP, Linux) schedule threads using only SCS Pthread scheduling API pthread_attr_setscope(pthread_attr_t *attr, int scope) pthread_attr_getscope(pthread_attr_t *attr, int *scope) PCS: scope = PTHREAD_SCOPE_PROCESS SCS: scope = PTHREAD_SCOPE_SYSTEM NOTE: Pthread in Linux and Mac OS X systems allows only PTHREAD_SCOPE_SYSTEM 16:58 23 #include <pthread.h> #include <stdio.h> #define NUM THREADS 5 int main(int argc, char *argv[]) { int i; pthread t tid[num THREADS]; pthread attr t attr; /* get the default attributes */ pthread attr init(&attr); /* set the scheduling algorithm to PROCESS or SYSTEM */ pthread attr setscope(&attr, PTHREAD SCOPE SYSTEM); /* set the scheduling policy - FIFO, RT, or OTHER */ pthread attr setschedpolicy(&attr, SCHED OTHER); /* create the threads */ for (i = 0; i < NUM THREADS; i++) pthread create(&tid[i],&attr,runner,null); /* now join on each thread */ for (i = 0; i < NUM THREADS; i++) pthread join(tid[i], NULL); } /* Each thread will begin control in this function */ void *runner(void *param) { printf("i am a thread\n"); pthread exit(0); 16:58 } 24 12

Multiple-Processor Scheduling
- When multiple CPUs are available, CPU scheduling is more complex, but load sharing becomes possible.
- Homogeneous system: the processors are identical in terms of their functionality.
- Approaches to multiprocessor scheduling:
  Asymmetric multiprocessing: a single processor (the master server) handles all scheduling decisions and I/O processing; the other processors execute only user code.
  Symmetric multiprocessing (SMP): each processor is self-scheduling; the scheduler must ensure that two processors do not choose the same process from the ready queue. All modern OSs support SMP.

Multiple-Processor Scheduling (cont.)
- Processor affinity: most SMP systems try to keep a process running on the same processor.
  E.g., if a process migrates to a new processor, the contents of cache memory on the old processor must be invalidated, and the data must repopulate the cache on the new processor.
  Two possible forms:
  soft affinity: the scheduler attempts, but does not guarantee, to keep a process on the same processor;
  hard affinity: the OS (e.g., Linux) provides system calls to keep a process always on the same processor.
- Load balancing: keeping the workload evenly distributed across all processors in an SMP system.
  It is needed on systems where each processor has its own private ready queue; most modern OSs supporting SMP belong to this type.
  Load balancing can counteract the benefits of processor affinity.

Multiple-Processor Scheduling (cont.)
- Load balancing (cont.): two general approaches:
  push migration: a specific task periodically checks each processor's load and moves processes from overloaded to less busy processors;
  pull migration: an idle processor actively pulls a waiting (ready) task from a busy processor.
- Symmetric multithreading (SMT): providing multiple logical processors to run threads concurrently.
  Each logical processor has its own architecture state (i.e., general-purpose registers, machine-state registers) and handles its own interrupts.
  On the same physical CPU, all logical processors share the ALU, cache, and bus.
  Called hyper-threading (HT) technology on Intel CPUs.
  SMT is provided in hardware, not software, but performance may be improved if the OS/scheduler is aware of SMT.

Multicore Processors
- Recent trend: place multiple processor cores on the same physical chip; faster and consumes less power.
- Multiple threads per core is also growing: it takes advantage of a memory stall to make progress on another thread while the memory retrieval happens (a multithreaded multicore system).

Operating System Examples
- Solaris scheduling
- Windows XP scheduling
- Linux scheduling (Linux does not distinguish between processes and threads; it schedules tasks)

Example: Solaris Scheduling
- Four predefined classes of scheduling: real time, system, time sharing, interactive.
- Each class has its own set of priorities and its own scheduling algorithm.
- The scheduler converts the class-specific priorities into global priorities and selects the highest-priority thread to run.
- The dispatch table for interactive and time-sharing threads includes 60 priority levels:
  a bigger number means a higher priority;
  a higher priority has a smaller time slice (in ms);
  the numbers in the "time quantum expired" and "return from sleep" columns are the new priorities used when the thread is redispatched.

Example: Solaris Scheduling (cont.)
- Time sharing is the default class of a process; it uses a multilevel feedback queue.

Example: Windows XP Scheduling
- The Windows XP scheduler, called the dispatcher, uses a priority-based, preemptive scheduling algorithm with a 32-level priority scheme:
  16~31: real-time class; the thread priority is fixed;
  1~15: variable-priority classes; the priority of a thread can be changed within its class.
- The variable range contains 5 priority classes, and each class has 7 relative priorities; the NORMAL relative priority is the base priority for each class.
- When a thread's time quantum runs out, its priority is lowered.
- When a variable-priority thread returns from a waiting queue, the dispatcher boosts its priority depending on what the thread was waiting for, tending to give good response times to interactive threads.

Example: Linux Scheduling
- The Linux scheduler (after version 2.5) is preemptive and priority-based:
  a lower (nice) value indicates a higher priority;
  a higher-priority task gets a longer time quantum;
  the scheduler runs in constant time (O(1)).
- SMP support: each processor has its own runqueue and schedules itself independently, providing processor affinity and load balancing.
- Fairness and support for interactive tasks.
- Runqueue data structure:
  active array: contains tasks with time remaining in their time slices;
  expired array: contains expired tasks.
  Before a (dynamic-priority) task is moved from the active to the expired array, its new priority and corresponding time slice are recalculated.
  When the active array is empty, the two arrays are exchanged.

Example: Linux Scheduling (cont.)
- Real-time task scheduling: POSIX.1b-compliant; a static priority is assigned, and it is unchanged when the task moves from the active to the expired array.
- Dynamic-priority task scheduling: the priority number is based on the nice value.
  If the task is more interactive (has a longer sleep time waiting for I/O): new value = old value - 5 (higher priority).
  If the task is CPU-bound (has a shorter sleep time): new value = old value + 5 (lower priority).

Algorithm Evaluation
- How do we select a CPU-scheduling algorithm for a particular system?
  First, define the criteria, e.g., maximizing CPU utilization under the constraint that the maximum response time is 1 second.
  Then evaluate the algorithms under consideration.
- Possible evaluation methods (also suitable for network and system performance evaluation):
  Deterministic modeling
  Queueing models
  Simulations
  Implementation

Evaluation: Deterministic Modeling
- Deterministic modeling, one type of analytic evaluation, takes a particular predetermined workload and defines the performance of each algorithm for that workload.
- It is simple and fast, but its answers apply only to those cases; over a set of examples, it may indicate trends.
- Example workload:

  Process  Burst Time
  P1       10
  P2       29
  P3       3
  P4       7
  P5       12

  FCFS: average waiting time = (0 + 10 + 39 + 42 + 49) / 5 = 28 ms.
  Nonpreemptive SJF: average waiting time = (10 + 32 + 0 + 3 + 20) / 5 = 13 ms.
  RR (time quantum = 10 ms): average waiting time = (0 + 32 + 20 + 23 + 40) / 5 = 23 ms.

Evaluation: Queueing Models
- The problem with deterministic modeling: the processes that are run vary from day to day, so there is no static information.
- A queueing model is based on mathematical formulas describing the system behavior:
  the distribution of CPU and I/O bursts for the processes in the system;
  the arrival-time distribution of new processes;
  from these, the average throughput, utilization, waiting time, etc. can be computed.
- Example, Little's formula: n = lambda * W, where
  n = the average queue length,
  lambda = the average arrival rate of new processes,
  W = the average waiting time in the queue.
- The limitations of queueing analysis:
  the classes of algorithms and distributions that can be handled are limited;
  the mathematics may be too complicated to work with;
  it is necessary to make many assumptions, which may not be accurate;
  queueing models are often only approximations of real systems.

Evaluation: Simulations
- Simulations give a more accurate evaluation but can be expensive:
  more computer time (a more detailed simulation gives a more accurate result);
  trace tapes require large amounts of storage space.
- Running a simulation involves a model of the computer system, including a clock variable and software data structures.
- The input data can be distribution-driven (mathematically or empirically derived) or trace-driven (trace tapes), producing statistical output.

Evaluation: Implementation
The difficulties with this approach:
- High cost: coding the algorithm and modifying the OS to support it.
- Handling the reaction of users to a constantly changing system.
- The environment will change once the new scheduler is in place: programmers may adjust the behavior of their processes to gain more benefit from it.