Chapter 5: CPU Scheduling. Operating System Concepts, 8th Edition


Chapter 5: CPU Scheduling, Silberschatz, Galvin and Gagne 2009

Chapter 5: CPU Scheduling
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multiple-Processor Scheduling
Linux Example

Objectives
To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
To describe various CPU-scheduling algorithms
To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system

Basic Concepts
The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization.
CPU-I/O Burst Cycle: process execution consists of a cycle of CPU execution and I/O wait. Execution begins with a CPU burst, which is followed by an I/O burst, then another CPU burst, then another I/O burst, and so on. The final CPU burst ends the process.
CPU burst distribution: typically a large number of short CPU bursts and a small number of long CPU bursts. An I/O-bound program has many short CPU bursts; a CPU-bound program has a few long CPU bursts.

Histogram of CPU-burst Times (figure)

Alternating Sequence of CPU and I/O Bursts (figure)

CPU Scheduler
When the CPU becomes idle, the operating system must select one of the processes in memory that are ready to execute and allocate the CPU to it. The selection is carried out by the short-term scheduler (CPU scheduler).
CPU-scheduling decisions may take place when a process:
1. Switches from the running state to the waiting state (as the result of an I/O request, or a wait for the termination of a child process)
2. Switches from the running state to the ready state (for example, when an interrupt occurs)
3. Switches from the waiting state to the ready state (for example, at completion of I/O)
4. Terminates
Scheduling under circumstances 1 and 4 only is nonpreemptive (cooperative). All other scheduling is preemptive.

Preemptive Scheduling
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
Windows 95 and all subsequent versions of Windows have used preemptive scheduling.

Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch.
Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.

Scheduling Criteria
CPU utilization: keep the CPU as busy as possible.
Throughput: number of processes that complete their execution per time unit (e.g., 10 processes/second).
Turnaround time: the amount of time to execute a particular process, i.e. the interval from the time of submission of the process to the time of completion, including time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time: the total amount of time a process has spent waiting in the ready queue.
Response time: the amount of time from when a request was submitted until the first response is produced, not the time to produce the full output (relevant for time-sharing environments).
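To make the relationship between these criteria concrete, here is a minimal C sketch (not from the slides; the process data are made up for illustration) computing turnaround and waiting time from arrival, burst, and completion times:

#include <stdio.h>

/* Illustrative (hypothetical) example data: one process per row. */
struct proc { const char *name; int arrival, burst, completion; };

int main(void) {
    struct proc p[] = { {"P1", 0, 24, 24}, {"P2", 2, 3, 27} };
    for (int i = 0; i < 2; i++) {
        int turnaround = p[i].completion - p[i].arrival; /* submission to completion */
        int waiting    = turnaround - p[i].burst;        /* time spent in the ready queue */
        printf("%s: turnaround=%d waiting=%d\n", p[i].name, turnaround, waiting);
    }
    return 0;
}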

Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time

First-Come, First-Served (FCFS) Scheduling
Jobs are scheduled in order of arrival: when a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue (the running process is then removed from the queue).
Disadvantages:
Nonpreemptive: once the CPU is allocated to a process, the process keeps the CPU until it releases it, either by terminating or by requesting I/O.
The average waiting time is often quite long.

Example:
Process  Burst Time
P1       24
P2       3
P3       3
Suppose that the processes arrive at time 0 in the order P1, P2, P3. The Gantt chart for the schedule is:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17

FCFS Scheduling (Cont.)
Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3, much better than the previous case.
Convoy effect: short processes queue up behind a long process, lowering CPU and device utilization.
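To connect the arithmetic above to something executable, here is a minimal C sketch (not from the slides) that reproduces both orderings, assuming all three processes arrive at time 0:

#include <stdio.h>

/* FCFS with all arrivals at time 0: a process's waiting time is simply the
   sum of the bursts of the processes ahead of it in the queue. */
static double fcfs_avg_wait(const int burst[], int n) {
    int t = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += t;   /* process i waits until everyone before it finishes */
        t += burst[i];
    }
    return (double)total_wait / n;
}

int main(void) {
    int order1[] = {24, 3, 3};   /* arrival order P1, P2, P3 */
    int order2[] = {3, 3, 24};   /* arrival order P2, P3, P1 */
    printf("avg wait (P1,P2,P3) = %.2f\n", fcfs_avg_wait(order1, 3));  /* 17.00 */
    printf("avg wait (P2,P3,P1) = %.2f\n", fcfs_avg_wait(order2, 3));  /*  3.00 */
    return 0;
}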

Shortest-Job-First (SJF) Scheduling
This algorithm associates with each process the length of its next CPU burst and uses these lengths to schedule the process with the shortest next burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
Two schemes:
Nonpreemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).

Examples of SJF
Example 1 (the calculations below treat all four processes as available at time 0):
Process  Burst Time
P1       6
P2       8
P3       7
P4       3
SJF scheduling chart:
| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
Average waiting time = (3 + 16 + 9 + 0)/4 = 7
Compare with FCFS in the order P1, P2, P3, P4: average waiting time = (0 + 6 + 14 + 21)/4 = 10.25

Shortest-Job-First (SJF) Scheduling
Example 2 (nonpreemptive SJF):
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4
Gantt chart: | P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |
Waiting times: P1 = 0, P2 = 6, P3 = 3, P4 = 7
Average waiting time = (0 + 6 + 3 + 7)/4 = 4

Shortest-Job-First (SJF) Scheduling
Example 3 (preemptive SJF, i.e. SRTF), same processes as Example 2:
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4
Gantt chart: | P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |
Waiting times: P1 = 9, P2 = 1, P3 = 0, P4 = 2
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
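The following small C simulation is an illustrative sketch (not from the slides): it reproduces the SRTF waiting times of Example 3 by always running the arrived process with the least remaining time, one time unit at a time.

#include <stdio.h>

#define N 4

/* Preemptive SJF (SRTF): at every time unit, run the arrived process with
   the smallest remaining burst.  Data match Example 3 above. */
int main(void) {
    int arrival[N] = {0, 2, 4, 5};
    int burst[N]   = {7, 4, 1, 4};
    int remaining[N], completion[N];
    int done = 0, t = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    while (done < N) {
        int pick = -1;
        for (int i = 0; i < N; i++)
            if (arrival[i] <= t && remaining[i] > 0 &&
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }       /* CPU idle: nothing has arrived yet */
        remaining[pick]--;                     /* run the chosen process for 1 unit */
        t++;
        if (remaining[pick] == 0) { completion[pick] = t; done++; }
    }

    int total = 0;
    for (int i = 0; i < N; i++) {
        int wait = completion[i] - arrival[i] - burst[i];
        total += wait;
        printf("P%d waiting time = %d\n", i + 1, wait);
    }
    printf("average waiting time = %.2f\n", (double)total / N);   /* 3.00 */
    return 0;
}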

Shortest-Job-First (SJF) Scheduling
SJF is optimal: it gives the minimum average waiting time for a given set of processes.
The difficulty is knowing the length of the next CPU request.

Prediction of the Length of the Next CPU Burst (figure)
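The figure referenced above illustrates the usual way of estimating the next burst, exponential averaging: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the length of the most recent measured burst and tau(n) is the previous prediction. A minimal C sketch of that recurrence (the alpha value, initial guess, and burst samples below are only illustrative):

#include <stdio.h>

/* Exponential averaging: predict the next CPU burst from the last measured
   burst t_n and the previous prediction tau_n:
   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n */
int main(void) {
    double alpha = 0.5;            /* common choice: weight history and last burst equally */
    double tau   = 10.0;           /* initial guess tau_0 (illustrative) */
    double measured[] = {6, 4, 6, 4, 13, 13, 13};   /* illustrative burst samples */

    for (int i = 0; i < 7; i++) {
        printf("predicted %.2f, actual %.0f\n", tau, measured[i]);
        tau = alpha * measured[i] + (1.0 - alpha) * tau;
    }
    return 0;
}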

Priority Scheduling
A priority number (an integer) is associated with each process.
The CPU is allocated to the process with the highest priority (smallest integer = highest priority in UNIX, but lowest priority in Java). Equal-priority processes are scheduled in FCFS order.
Preemptive: preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
Nonpreemptive: simply put the new process at the head of the ready queue.
SJF is a priority scheduling algorithm in which the priority is the predicted next CPU burst time.
Problem: starvation; low-priority processes may never execute.
Solution: aging; as time progresses, increase the priority of waiting processes (for example, by 1 every 15 minutes).

Priority Scheduling
Example (all processes arrive at time 0):
Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2
The Gantt chart for the schedule is:
| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2

Priority Scheduling
Example:
Process  Arrival Time  Burst Time  Priority
P1       0             10          3
P2       0             1           1
P3       0             2           4
P4       0             1           5
P5       3             5           2
Gantt chart, nonpreemptive priority scheduling:
| P2 (0-1) | P1 (1-11) | P5 (11-16) | P3 (16-18) | P4 (18-19) |
Gantt chart, preemptive priority scheduling:
| P2 (0-1) | P1 (1-3) | P5 (3-8) | P1 (8-16) | P3 (16-18) | P4 (18-19) |
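The aging rule mentioned on the Priority Scheduling slide is easy to see in code. The C sketch below is illustrative only (the priorities, arrival pattern, and aging rate are made up, not taken from the slides): a low-priority process competes against a steady stream of higher-priority arrivals and is eventually chosen because its effective priority improves while it waits.

#include <stdio.h>

/* Aging sketch: a priority-7 process competes with fresh priority-3 jobs.
   Without aging it would starve; with aging (effective priority improves by
   1 for every 15 units spent waiting) it is eventually chosen. */
int main(void) {
    int old_priority = 7;    /* the long-waiting, low-priority process */
    int new_priority = 3;    /* priority of each freshly arriving job  */

    for (int waited = 0; waited <= 75; waited += 5) {
        int effective = old_priority - waited / 15;        /* aged priority */
        const char *winner = (effective < new_priority)    /* smaller = higher */
                             ? "old process" : "new arrival";
        printf("waited %2d: effective priority %d -> %s runs\n",
               waited, effective, winner);
    }
    return 0;
}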

Round Robin (RR)
Designed especially for time-sharing systems. It is similar to FCFS, but preemption is added to enable the system to switch between processes.
Each process gets a small unit of CPU time (a time quantum or time slice), usually 10-100 milliseconds.
The ready queue is FIFO (new processes are added to the tail of the queue).
The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.

Round Robin (RR)
One of two things will then happen:
The process may have a CPU burst of less than one time quantum, in which case the process itself releases the CPU voluntarily.
The CPU burst of the currently running process may be longer than one time quantum, in which case the timer goes off and causes an interrupt to the OS; a context switch is executed and the process is put at the tail of the ready queue. The CPU scheduler then selects the next process in the ready queue.
RR typically gives higher average turnaround time than SJF, but better response time.

Round Robin (RR)
Example 1: time quantum = 4
Process  Burst Time
P1       24
P2       3
P3       3
The Gantt chart is:
| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |
Waiting times: P1 = 6 (= 10 - 4), P2 = 4, P3 = 7
Average waiting time = (6 + 4 + 7)/3 = 5.66

Round Robin (RR)
Example 2: time quantum = 20
Process  Burst Time  Waiting Time
P1       53          57 + 24 = 81
P2       17          20
P3       68          37 + 40 + 17 = 94
P4       24          57 + 40 = 97
The Gantt chart is:
| P1 (0-20) | P2 (20-37) | P3 (37-57) | P4 (57-77) | P1 (77-97) | P3 (97-117) | P4 (117-121) | P1 (121-134) | P3 (134-154) | P3 (154-162) |
Average waiting time = (81 + 20 + 94 + 97)/4 = 73
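A compact C sketch (illustrative, not from the slides) that reproduces the waiting times of Example 2; because all processes arrive at time 0 and no new ones arrive, cycling through the processes in index order gives the same schedule as a FIFO ready queue:

#include <stdio.h>

#define N 4

/* Round robin with all arrivals at time 0.  Data match Example 2, quantum = 20. */
int main(void) {
    int burst[N] = {53, 17, 68, 24};
    int quantum  = 20;
    int remaining[N], wait[N];
    int t = 0, left = N, total = 0;

    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    while (left > 0) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            t += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {       /* finished: waiting = completion - burst */
                wait[i] = t - burst[i];
                left--;
            }
        }
    }
    for (int i = 0; i < N; i++) {
        total += wait[i];
        printf("P%d waiting time = %d\n", i + 1, wait[i]);
    }
    printf("average waiting time = %.2f\n", (double)total / N);   /* 73.00 */
    return 0;
}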

Round Robin (RR)
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n - 1) * q time units until its next time quantum. (Example: with 5 processes and a time quantum of 20 milliseconds, each process gets up to 20 milliseconds every 100 milliseconds.)
The performance of RR depends heavily on the size of the time quantum:
If the time quantum is very large, RR behaves like FCFS.
If the time quantum is very small, context-switch overhead dominates; the quantum must be large (but not too large) with respect to the context-switch time, otherwise the overhead is too high.

Time Quantum and Context Switch Time (figure)

Turnaround Time Varies with the Time Quantum
The average turnaround time of a set of processes does not necessarily improve as the time-quantum size increases.
The average turnaround time can be improved if most processes finish their next CPU burst within a single time quantum.

Multilevel Queue
Processes are classified into different groups; each group has different response-time requirements and therefore different scheduling needs.
A multilevel queue scheduling algorithm partitions the ready queue into separate queues, for example:
foreground (interactive)
background (batch)
Each queue has its own scheduling algorithm, e.g. RR for the foreground queue and FCFS for the background queue.
Scheduling must also be done between the queues:
Fixed-priority preemptive scheduling (i.e., serve everything in the foreground queue, then the background queue); possibility of starvation.
Time slicing: each queue gets a certain amount of CPU time which it can schedule among its own processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.

Multilevel Queue Scheduling (figure)

Multilevel Feedback Queue
Implements multiple ready queues; different queues may be scheduled using different algorithms.
Just like multilevel queue scheduling, but queue assignments are not static: the multilevel feedback queue scheduling algorithm allows a process to move between the various queues. Aging can be implemented this way.
A multilevel feedback queue scheduler is defined by the following parameters:
the number of queues
the scheduling algorithm for each queue
the method used to determine when to upgrade or demote a process
It is the most general CPU-scheduling algorithm, and also the most complex.

Example of Multilevel Feedback Queue
Three queues:
Q0: RR with a time quantum of 8 milliseconds
Q1: RR with a time quantum of 16 milliseconds
Q2: FCFS
Scheduling: a new job enters queue Q0, which is served in FCFS order. When it gains the CPU, the job receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1. At Q1 the job is again served in FCFS order and receives 16 additional milliseconds; if it still does not complete, it is preempted and moved to queue Q2. At Q2 the job is served FCFS, but only when queues Q0 and Q1 are empty.
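The C sketch below illustrates only the demotion path of the three-queue policy above (burst lengths are made up, each job is shown in isolation, and the sketch does not model the interleaving of jobs or any promotion/aging a real scheduler would add):

#include <stdio.h>

#define NPROC 3

/* Demotion sketch: one 8 ms quantum in Q0, one 16 ms quantum in Q1,
   then FCFS to completion in Q2. */
int main(void) {
    int burst[NPROC] = {5, 20, 40};   /* illustrative CPU bursts, in ms */
    int quanta[2]    = {8, 16};       /* time quanta for Q0 and Q1 */

    for (int p = 0; p < NPROC; p++) {
        int remaining = burst[p];
        int queue = 0;
        while (remaining > 0) {
            if (queue < 2) {
                int slice = remaining < quanta[queue] ? remaining : quanta[queue];
                printf("P%d: runs %2d ms in Q%d\n", p + 1, slice, queue);
                remaining -= slice;
                if (remaining > 0) queue++;    /* used the whole quantum: demote */
            } else {
                printf("P%d: runs %2d ms in Q2 (FCFS, to completion)\n",
                       p + 1, remaining);
                remaining = 0;
            }
        }
    }
    return 0;
}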

Multilevel Feedback Queues (figure)

Thread Scheduling
There is a distinction between user-level and kernel-level threads. On operating systems that support them, it is kernel-level threads, not processes, that are scheduled by the OS.
User-level threads are managed by a thread library, and the kernel is unaware of them. To run on a CPU, a user-level thread must be mapped to an associated kernel-level thread, possibly through a lightweight process (LWP).
Contention scope: one distinction between user-level and kernel-level threads lies in how they are scheduled.

Thread Scheduling
In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP. This is known as process-contention scope (PCS), since the scheduling competition takes place among threads belonging to the same process. PCS is done according to priority, and the library scheduler typically preempts the running thread in favor of a higher-priority one. PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
When the kernel schedules a kernel thread onto an available CPU, it uses system-contention scope (SCS): the competition takes place among all threads in the system. Systems using the one-to-one model schedule threads using only SCS. PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.
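The Pthreads API exposes the contention scope through thread attributes. A short C example (error handling kept minimal; note that some systems, Linux among them, accept only PTHREAD_SCOPE_SYSTEM, so a request for PTHREAD_SCOPE_PROCESS may fail):

#include <pthread.h>
#include <stdio.h>

static void *runner(void *arg) {
    (void)arg;
    printf("hello from the new thread\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_attr_t attr;
    int scope;

    pthread_attr_init(&attr);
    if (pthread_attr_getscope(&attr, &scope) == 0)
        printf("default scope: %s\n",
               scope == PTHREAD_SCOPE_SYSTEM ? "SYSTEM (SCS)" : "PROCESS (PCS)");

    /* Request system contention scope: the thread competes with all
       threads in the system.  PTHREAD_SCOPE_PROCESS would request PCS. */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    pthread_create(&tid, &attr, runner, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}

(Compile with -lpthread.)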

Multiple-Processor Scheduling
CPU scheduling becomes more complex when multiple CPUs are available. Different rules apply for homogeneous processors (identical in functionality) and heterogeneous processors.
Asymmetric multiprocessing: all scheduling decisions, I/O processing, and other system activities are handled by a single processor, the master server; the other processors execute only user code. This is simple because only one processor accesses the system data structures, reducing the need for data sharing.
Symmetric multiprocessing (SMP): each processor is self-scheduling. Either all processes are in a common ready queue, or each processor has its own private queue of ready processes. Multiple processors may try to access and update a common data structure, so the scheduler must be programmed carefully; it must ensure that two processors do not choose the same process.

Linux Scheduling
The Linux scheduler is a preemptive, priority-based algorithm with two separate priority ranges (time-sharing and real-time):
a real-time range from 0 to 99, which receives longer time quanta
a nice-value range from 100 to 140, which receives shorter time quanta

Linux Scheduling
The kernel maintains a list of all runnable tasks in a runqueue data structure. Each runqueue contains two priority arrays:
Active: contains all tasks with time remaining in their time slices
Expired: contains all tasks whose time slices have expired

List of Tasks Indexed According to Priorities
The scheduler chooses the task with the highest priority from the active array for execution on the CPU. When the active array is empty, the two arrays are exchanged: the expired array becomes the active array, and vice versa.
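The following C sketch is a heavily simplified illustration of that active/expired swap; it is not kernel code (the real O(1) runqueue keeps per-priority linked lists of tasks plus a bitmap for constant-time lookup, and the task counts here are made up):

#include <stdio.h>
#include <string.h>

#define NPRIO 140   /* priorities 0..139, covering both ranges described above */

/* Two arrays of per-priority task counts; expired tasks move to the other
   array, and when the active array empties, the pointers are swapped. */
struct prio_array { int count[NPRIO]; int total; };
struct runqueue   { struct prio_array arrays[2]; struct prio_array *active, *expired; };

static int pick_highest(const struct prio_array *a) {
    for (int p = 0; p < NPRIO; p++)          /* lowest number = highest priority */
        if (a->count[p] > 0) return p;
    return -1;
}

int main(void) {
    struct runqueue rq;
    memset(&rq, 0, sizeof rq);
    rq.active  = &rq.arrays[0];
    rq.expired = &rq.arrays[1];

    /* three illustrative runnable tasks at priorities 5, 5 and 120 */
    rq.active->count[5] = 2;  rq.active->count[120] = 1;  rq.active->total = 3;

    for (int step = 0; step < 6; step++) {
        if (rq.active->total == 0) {          /* active empty: swap the arrays */
            struct prio_array *tmp = rq.active;
            rq.active = rq.expired;  rq.expired = tmp;
            printf("-- arrays swapped --\n");
        }
        int p = pick_highest(rq.active);
        if (p < 0) break;
        printf("run a task at priority %d; its time slice expires\n", p);
        rq.active->count[p]--;  rq.active->total--;     /* slice used up */
        rq.expired->count[p]++; rq.expired->total++;    /* move to expired */
    }
    return 0;
}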

Algorithm Evaluation
Deterministic modeling: takes a particular predetermined workload and defines the performance of each algorithm for that workload (more examples on p. 214 of the text). It is simple and fast, but it requires exact numbers as input, and its answers apply only to those data.
Queueing models: the rate at which new processes arrive, the ratio of CPU bursts to I/O times, and the distributions of CPU-burst and I/O-burst times can be measured and then approximated or estimated; the result is a mathematical formula describing the system. From this it is possible to compute the average throughput, utilization, waiting time, and so on. Queueing analysis can, however, be difficult to work with.
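As a tiny illustration of deterministic modeling (a sketch assuming the fixed workload from the earlier FCFS example, all arriving at time 0, not an example from the slides), the following C program evaluates two algorithms on exactly that workload:

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

/* Average waiting time when processes run back-to-back in the given order. */
static double avg_wait(const int burst[], int n) {
    int t = 0, total = 0;
    for (int i = 0; i < n; i++) { total += t; t += burst[i]; }
    return (double)total / n;
}

int main(void) {
    int fcfs[] = {24, 3, 3};              /* run in submission order */
    int sjf[]  = {24, 3, 3};
    qsort(sjf, 3, sizeof(int), cmp_int);  /* SJF: run in order of burst length */

    printf("FCFS average waiting time = %.2f\n", avg_wait(fcfs, 3));  /* 17.00 */
    printf("SJF  average waiting time = %.2f\n", avg_wait(sjf, 3));   /*  3.00 */
    return 0;
}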

Algorithm Evaluation
Simulations: run computer simulations of the different proposed algorithms. The data to drive the simulation can be randomly generated; a better alternative, when possible, is to drive it with trace tapes recorded from a real system. Simulations can be expensive.
Implementation: the only completely accurate way to evaluate a scheduling algorithm is to code it, put it in the operating system, and see how it works. This has a high cost (coding effort and user reaction to a changing scheduler).

Simulations (figure)

Conclusion
We have looked at a number of different scheduling algorithms; which one works best is application dependent.
A general-purpose OS will typically use a preemptive, priority-based, round-robin scheme.
A real-time OS will use priority scheduling without preemption.

End of Chapter 5, Silberschatz, Galvin and Gagne 2009