Scheduling Algorithms




List the pros and cons of each of the four scheduler types below.

First In, First Out (FIFO)

Pros:
Simplicity. FIFO is very easy to implement.
Less overhead. FIFO lets the currently running task complete its CPU burst, so there is no need to preemptively take the processor away from the task and then context-switch.

Cons:
Convoy effect. If a large task grabs the processor, every task that arrives afterward must wait for that large task to complete. Short tasks can be stuck waiting behind one long task, so task throughput can be very low.

Round Robin (RR)

Pros:
Fair to small tasks. Each task gets a fixed time slice (or quantum) of the CPU before it is forced to give up the processor. Unlike FIFO, even if many small tasks enter the system after a large task, every task gets to run and the small tasks can make progress; this scheduler is therefore much fairer to short jobs.

Cons:
Context-switch overhead. CPU cycles are consumed whenever a task exceeds its allotted time slice: the currently running task must be taken off the processor and the next chosen task put on, so context switching reduces overall CPU utilization.
No gain for tasks with equal run times. If we have ten tasks that each take ten seconds to run, no task completes until 100 seconds have passed. Under FIFO, the first task would complete in 10 seconds, the second in 20 seconds, and so on. As a result, task throughput is potentially lower.

Shortest Job First (SJF)

Pros:
Maximizes task throughput. By running the tasks that take the least time first, we can complete more tasks in a given amount of time.

Cons:
Unfair to larger tasks. Larger tasks do not get an opportunity to run if smaller tasks keep entering the system.
Computers are not psychic. It is not feasible to predict a task's CPU burst time accurately in advance.
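A common workaround for the prediction problem (a standard textbook technique, not part of this handout) is to estimate the next CPU burst as an exponential average of past bursts: tau_next = alpha * last_burst + (1 - alpha) * tau_prev. A minimal sketch:

```python
# Sketch of exponential-average CPU-burst prediction.
# The initial guess and alpha are arbitrary illustrative values.

def predict_bursts(bursts, initial_guess=10.0, alpha=0.5):
    """Return the scheduler's prediction before each observed burst."""
    predictions = [initial_guess]
    tau = initial_guess
    for t in bursts:
        # Blend the most recent burst with the running estimate.
        tau = alpha * t + (1 - alpha) * tau
        predictions.append(tau)
    return predictions

# A task whose bursts shrink: the prediction tracks the trend.
print(predict_bursts([8, 4, 2], initial_guess=10.0))
# [10.0, 9.0, 6.5, 4.25]
```

With alpha = 0.5 the estimate weights recent history heavily but never trusts a single burst completely, which smooths out one-off spikes.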

Shortest Remaining Time First (SRTF)

Pros:
Similar to SJF, except that if a smaller task arrives while another task is running, we preempt the currently running task in order to complete the smaller task first. This scheduling algorithm therefore has an even higher task-throughput rate than SJF. It is considered the optimal scheduling policy and provides an upper bound on performance.

Cons:
Similar to SJF, except that there is now also context-switching overhead incurred for preempting a currently running task in favor of a shorter one.

True or False: Operating systems that want a fast response time shouldn't use SRTF.

False. Operating systems that want a fast response time *should* use Shortest Remaining Time First. The key thing to remember is that tasks which are I/O bound (and therefore not very CPU intensive) tend to have short CPU burst times. For example, when typing into Notepad (or vim, or whatever other editor you use), CPU use is minimal because most of the task's time is spent blocking on keystrokes. We want such I/O-related tasks to complete their CPU bursts as quickly as possible and go back to waiting for further input from the user, and SRTF accomplishes exactly this. CPU-intensive tasks (such as a video encoder) get lower priority relative to these I/O-bound tasks, but that is acceptable: the user won't care if video encoding starts half a second late, but he will care if, while playing Counter-Strike, a mouse click takes half a second to register on the screen.

True or False: Lottery scheduling can be used to implement any other scheduling algorithm.

False. The sheer randomness of the lottery scheduler makes it impossible to implement any other scheduling algorithm exactly.
For example, we could not emulate the priority starvation of a priority scheduler even if we tried, because a low-priority thread always retains a small probability of actually running.

Explain what a multi-level feedback scheduler is and why it approximates SRTF.

A multi-level feedback scheduler sorts processes into different tiers. Processes that exceed their allotted CPU time slice are automatically moved down to a lower tier, while processes that make I/O requests or block are moved up to higher tiers. The scheduler favors processes in the higher tiers before turning to those in the lower tiers. Thus, processes that depend heavily on I/O (and therefore dictate the response time of the system) reside in the higher tiers and take priority over strictly computational processes, which reside in the lower tiers. This does not mean that CPU-intensive processes never run: when the I/O-heavy processes block, they leave the RUNNABLE state, and at that point the CPU-intensive processes, being the only RUNNABLE processes in the system, make progress. Some systems also choose to allocate some CPU time to the lower tiers even when the higher tiers are non-empty (of course, the share allocated to the lower tiers is much smaller than that allocated to the higher tiers).

True or False: If a user knows the length of a CPU time slice and can determine precisely how long his process has been running, then he can cheat a multi-level feedback scheduler.

True. Just before his time slice expires, the user can execute a dummy I/O request that makes the process appear to be I/O bound. Doing so keeps the task in the upper tiers of the multi-level feedback scheduler, ensuring it gets more CPU time than a CPU-intensive process should. Note that today's Linux scheduler is more sophisticated than what we have discussed in class and guards against such abuse by keeping track of how long a process sleeps versus how long it is active. A process that blocks on a lot of I/O *and* uses a lot of CPU time is treated as neutral; a process that blocks on a lot of I/O and uses very little CPU time gets boosted up; and a process that spends little time on I/O and uses a lot of CPU time gets pushed down.
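The tier mechanics described above can be sketched as a toy queue structure. The class and method names below are illustrative, not a real kernel API: quantum expiry demotes a task one tier, finishing an I/O block promotes it, and the dispatcher always serves the highest non-empty tier first.

```python
from collections import deque

class MLFQ:
    """Toy multi-level feedback queue: tier 0 is highest priority."""

    def __init__(self, levels=3):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, task, level=0):
        # Clamp so demotion/promotion never falls off either end.
        level = max(0, min(level, len(self.queues) - 1))
        self.queues[level].append((task, level))

    def pick_next(self):
        """Serve the highest-priority (lowest-index) non-empty tier."""
        for q in self.queues:
            if q:
                return q.popleft()
        return None

    def on_quantum_expired(self, task, level):
        self.enqueue(task, level + 1)   # CPU hog: demote one tier

    def on_io_block_finished(self, task, level):
        self.enqueue(task, level - 1)   # interactive: promote one tier

sched = MLFQ()
sched.enqueue("editor")
sched.enqueue("encoder")
task, level = sched.pick_next()         # "editor" runs first (FIFO within a tier)
sched.on_quantum_expired(task, level)   # it used its whole slice: demoted to tier 1
```

After the demotion, "encoder" (still in tier 0) is picked before "editor" (now in tier 1), which is exactly the SRTF-approximating behavior the handout describes: tasks with short bursts stay on top.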

Consult Fall 2006 Midterm 1 (Problem 4e) for the solutions to the remaining problems. Note that I inadvertently neglected to include a row for Time Slot 0 on the questions handout, which confused some students.

Here is a table of processes and their associated arrival and running times:

PID   Arrival Time   CPU Running Time
1     0              2
2     1              6
3     4              1
4     7              4
5     8              3

Show the scheduling order for these processes under the specified policies. The processor has a time slice of 1. Assume no context-switch overhead and that new processes are added to the head of the queue, except for FCFS, where they are added to the tail.

Time Slot   FIFO   SRTF   RR
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15

Specify the average wait time and total run time for each process:

Scheduler   Process 1   Process 2   Process 3   Process 4   Process 5
FIFO Wait
FIFO TRT
SRTF Wait
SRTF TRT
RR Wait
RR TRT

Also specify the average wait time and total run time for each type of scheduler.

What would be the optimal average wait time, and which of the above three schedulers would come closest to optimal? Explain.

SRTF is the optimal scheduler in this situation, since it allows the most tasks to be completed in a fixed amount of time.
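The optimality claim can be sanity-checked with a small slot-by-slot simulation. The sketch below uses a hypothetical workload (not the process table from the handout, so it does not give away the midterm answers) and compares average wait time under FIFO and SRTF, both with a time slice of 1 and no context-switch cost.

```python
def simulate(procs, policy):
    """procs: list of (pid, arrival, burst). Returns {pid: finish_time}.
    policy "fifo": among ready tasks, run the earliest arrival (since its
    arrival never changes, it runs to completion, i.e. non-preemptive);
    policy "srtf": run the ready task with the least remaining time."""
    remaining = {pid: burst for pid, _, burst in procs}
    arrival = {pid: a for pid, a, _ in procs}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if ready:
            if policy == "fifo":
                p = min(ready, key=lambda p: arrival[p])
            else:  # srtf
                p = min(ready, key=lambda p: remaining[p])
            remaining[p] -= 1
            if remaining[p] == 0:
                finish[p] = clock + 1
                del remaining[p]
        clock += 1
    return finish

def avg_wait(procs, finish):
    # wait time = finish - arrival - burst
    return sum(finish[p] - a - b for p, a, b in procs) / len(procs)

# Hypothetical workload: one long task, then two short ones arrive.
procs = [(1, 0, 7), (2, 1, 2), (3, 2, 1)]
fifo = simulate(procs, "fifo")   # {1: 7, 2: 9, 3: 10}
srtf = simulate(procs, "srtf")   # {2: 3, 3: 4, 1: 10}
print(avg_wait(procs, fifo), avg_wait(procs, srtf))
```

Under FIFO the two short tasks sit behind the long one (the convoy effect), while SRTF preempts the long task and finishes them immediately, which is why its average wait time is lower.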