CS 571 Operating Systems. CPU Scheduling. Angelos Stavrou, George Mason University

CPU Scheduling: Basic Concepts; Scheduling Criteria; Scheduling Algorithms (First-Come-First-Served; Shortest-Job-First and Shortest-Remaining-Time-First; Priority Scheduling; Round Robin; Multi-level Queue; Multi-level Feedback Queue).

Basic Concepts During its lifetime, a process goes through a sequence of CPU and I/O bursts. In a multi-programmed computer system, multiple processes compete for the CPU at any given time in order to complete their current CPU bursts. The CPU scheduler (a.k.a. short-term scheduler) selects one of the processes in the ready queue for execution. The CPU scheduling algorithm can have a tremendous effect on system performance, particularly in interactive and real-time systems.

When to Schedule? Under the simple process state transition model we discussed in Lecture 2, the CPU scheduler can be invoked at five different points: 1. When a process switches from the new state to the ready state. 2. When a process switches from the running state to the waiting state. 3. When a process switches from the running state to the ready state. 4. When a process switches from the waiting state to the ready state. 5. When a process terminates.

Non-preemptive vs. Preemptive Scheduling Under non-preemptive scheduling, each running process keeps the CPU until it completes or it switches to the waiting (blocked) state. Under preemptive scheduling, a running process may also be forced to release the CPU even though it is neither completed nor blocked: in time-sharing systems, this happens when the running process reaches the end of its time quantum (slice), and in general it can happen whenever there is a change in the ready queue.

Non-preemptive vs. preemptive kernels Non-preemptive kernels do not allow preemption of a process running in kernel mode, which is a serious drawback for real-time applications. Preemptive kernels allow preemption even in kernel mode; they either insert safe preemption points in long-duration system calls or use synchronization mechanisms (e.g. mutex locks) to protect the kernel data structures against race conditions.

Dispatcher The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. We'll discuss scheduling algorithms in two contexts: a preemptive scheduler can interrupt a running job, whereas a non-preemptive scheduler waits for the running job to block.

Scheduling Criteria Several criteria can be used to compare the performance of scheduling algorithms: CPU utilization (keep the CPU as busy as possible); throughput (the number of processes that complete their execution per time unit); turnaround time (the amount of time to execute a particular process); waiting time (the amount of time a process has been waiting in the ready queue); response time (the amount of time from when a request was submitted until the first response is produced, not the complete output); and meeting deadlines (real-time systems).

Optimization Criteria Maximize CPU utilization; maximize throughput; minimize the (average) turnaround time; minimize the (average) waiting time; minimize the (average) response time. In the examples, we will assume that average waiting time is the performance measure and that each process has only one CPU burst (in milliseconds).

First-Come, First-Served (FCFS) Scheduling Burst times: P1 = 24, P2 = 3, P3 = 3. Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule: P1 runs from 0 to 24, P2 from 24 to 27, P3 from 27 to 30. Waiting time for P1 = 0, P2 = 24, P3 = 27; average waiting time: (0 + 24 + 27)/3 = 17.

FCFS Scheduling (Cont.) Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule: P2 runs from 0 to 3, P3 from 3 to 6, P1 from 6 to 30. Waiting time for P1 = 6, P2 = 0, P3 = 3; average waiting time: (6 + 0 + 3)/3 = 3. Problems with FCFS: the convoy effect (short processes stuck behind long processes), and it is non-preemptive, so it is not suitable for time-sharing systems.
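As a sanity check on these numbers, here is a minimal FCFS waiting-time calculator; it is only a sketch and assumes, as the slides do, that all three processes are in the ready queue at time 0, so each process's waiting time is simply the time at which it starts.

```c
/* Minimal FCFS waiting-time calculator (a sketch; burst values are taken
 * from the slide, and all processes are assumed ready at time 0). */
#include <stdio.h>

int main(void) {
    /* Arrival order P2, P3, P1 with bursts 3, 3, 24. */
    const char *name[] = { "P2", "P3", "P1" };
    int burst[]        = { 3, 3, 24 };
    int n = 3, clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("%s waits %d\n", name[i], clock);  /* waiting time = start time */
        total_wait += clock;
        clock += burst[i];                        /* runs to completion, no preemption */
    }
    printf("average waiting time = %.1f\n", (double)total_wait / n);
    return 0;
}
```

Reordering the arrays to P1, P2, P3 reproduces the average of 17 from the previous slide.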

Shortest-Job-First (SJF) Scheduling Associate with each process the length of its next CPU burst; the CPU is assigned to the process with the smallest (next) CPU burst. There are two schemes: non-preemptive, and preemptive, the latter also known as Shortest-Remaining-Time-First (SRTF). Non-preemptive SJF is optimal if all the processes are ready simultaneously: it gives the minimum average waiting time for a given set of processes. SRTF is optimal if the processes may arrive at different times.

Example for Non-Preemptive SJF Processes (arrival time, burst time): P1 (0.0, 7), P2 (2.0, 4), P3 (4.0, 1), P4 (5.0, 4). SJF (non-preemptive) schedule: P1 runs from 0 to 7, P3 from 7 to 8, P2 from 8 to 12, P4 from 12 to 16. Average waiting time = (0 + 6 + 3 + 7)/4 = 4.

Example for Preemptive SJF (SRTF) Processes (arrival time, burst time): P1 (0.0, 7), P2 (2.0, 4), P3 (4.0, 1), P4 (5.0, 4). SJF (preemptive) schedule: P1 runs from 0 to 2, P2 from 2 to 4, P3 from 4 to 5, P2 from 5 to 7, P4 from 7 to 11, P1 from 11 to 16. Average waiting time = (9 + 1 + 0 + 2)/4 = 3.
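The schedule above can be reproduced with a simple tick-by-tick simulation. The sketch below hard-codes the arrival and burst times from this example and, at every time unit, runs the ready process with the least remaining work; it is an illustration of the SRTF rule, not production scheduler code.

```c
/* Tick-by-tick SRTF (preemptive SJF) sketch for the four-process example;
 * arrival and burst values come from the slide. */
#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N]   = { 0, 2, 4, 5 };
    int burst[N]     = { 7, 4, 1, 4 };
    int remaining[N] = { 7, 4, 1, 4 };
    int finish[N]    = { 0 };
    int done = 0, t = 0;

    while (done < N) {
        int pick = -1;
        for (int i = 0; i < N; i++)                  /* ready process with the */
            if (arrival[i] <= t && remaining[i] > 0  /* least remaining work   */
                && (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick >= 0 && --remaining[pick] == 0) {
            finish[pick] = t + 1;
            done++;
        }
        t++;                                         /* advance time by one unit */
    }

    double total_wait = 0;
    for (int i = 0; i < N; i++)
        total_wait += finish[i] - arrival[i] - burst[i];  /* waiting = turnaround - burst */
    printf("average waiting time = %.2f\n", total_wait / N);   /* prints 3.00 */
    return 0;
}
```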

Estimating the Length of the Next CPU Burst Problem with SJF: it is very difficult to know exactly the length of the next CPU burst. Idea: based on observations of the recent past, we can try to predict it. Techniques such as exponential averaging combine past observations and previous predictions using different weights.
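For concreteness, exponential averaging predicts the next burst as tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the most recently observed burst length and tau_n the previous prediction. The sketch below uses alpha = 0.5, an initial guess of 10, and an illustrative burst history; these values are assumptions for the example, not taken from the slides.

```c
/* Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n,
 * where t_n is the last observed burst and tau_n the previous prediction.
 * alpha, the initial guess tau_0 and the burst history are illustrative. */
#include <stdio.h>

int main(void) {
    double alpha = 0.5, tau = 10.0;                /* tau_0: initial guess */
    double observed[] = { 6, 4, 6, 4, 13, 13, 13 };
    int n = (int)(sizeof observed / sizeof observed[0]);

    for (int i = 0; i < n; i++) {
        printf("predicted %5.2f, observed %2.0f\n", tau, observed[i]);
        tau = alpha * observed[i] + (1.0 - alpha) * tau;   /* update the estimate */
    }
    printf("next prediction: %.2f\n", tau);
    return 0;
}
```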

Priority-Based Scheduling A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). Priority scheduling can be preemptive or non-preemptive. SJF is an example of a priority-based scheduling scheme, where the priority is the (predicted) length of the next CPU burst.

Priority-Based Scheduling (Cont.) Priority assignment can be based on internal factors (timing constraints, memory requirements, the ratio of average I/O burst to average CPU burst) or external factors (importance of the process, financial considerations, hierarchy among users). Problem: indefinite blocking (or starvation): low-priority processes may never execute. One solution: aging, i.e. as time progresses, increase the priority of processes that have been waiting in the system for a long time.
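A minimal sketch of the aging idea follows, assuming a numeric priority where a smaller number means higher priority; the field names, the 100-tick threshold and the step of one level are made-up illustrative values.

```c
/* Toy aging pass: periodically raise the priority of processes that have
 * waited a long time (here a smaller number means higher priority). The
 * field names, the 100-tick threshold and the step of 1 are assumptions. */
#include <stdio.h>

struct pcb { const char *name; int priority; int wait_ticks; };

static void age(struct pcb *ready, int n, int threshold) {
    for (int i = 0; i < n; i++) {
        ready[i].wait_ticks++;
        if (ready[i].wait_ticks >= threshold && ready[i].priority > 0) {
            ready[i].priority--;          /* long waiter: bump it up one level */
            ready[i].wait_ticks = 0;
        }
    }
}

int main(void) {
    struct pcb q[] = { { "A", 10, 0 }, { "B", 120, 0 } };
    for (int t = 0; t < 300; t++)
        age(q, 2, 100);                   /* run the aging pass every tick */
    printf("%s is now at priority %d\n", q[1].name, q[1].priority);  /* 117 */
    return 0;
}
```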

Starvation Starvation occurs when a job cannot make progress because some other job holds the resource it requires; we've seen this with locks, monitors, semaphores, etc., and the same thing can happen with the CPU! Starvation can be a side effect of synchronization: a constant supply of readers can block out writers indefinitely, whereas well-written critical sections should ensure bounded waiting. Starvation is usually a side effect of the scheduling algorithm: a high-priority process always prevents a low-priority process from running on the CPU, or one thread always beats another when acquiring a lock.

Extension to Multiple CPU & I/O Bursts When a process arrives, it tries to execute its first CPU burst: it joins the ready queue, and its priority is determined by the underlying scheduling algorithm, considering only that specific (i.e. first) burst. When it completes its first CPU burst, it tries to perform its first I/O operation (burst): it joins the device queue and, when the device becomes available, uses it for a period given by the length of the first I/O burst. Then it re-joins the ready queue and tries to execute its second CPU burst; its new priority may now change (as determined by its second CPU burst), and so on.

Round Robin (RR) Each process gets a small unit of CPU time (a time quantum). After this time has elapsed, the process is preempted and added to the end of the ready queue. Newly arriving processes (and processes that complete their I/O bursts) are also added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then no process waits more than (n-1)q time units. Performance: if q is large, RR degenerates into FCFS; if q is small, RR approaches processor sharing (the system appears to the users as though each of the n processes has its own processor running at 1/n of the speed of the real processor).

Example for Round-Robin Burst times: P1 = 53, P2 = 17, P3 = 68, P4 = 24; time quantum = 20. The Gantt chart: P1 runs 0-20, P2 20-37, P3 37-57, P4 57-77, P1 77-97, P3 97-117, P4 117-121, P1 121-134, P3 134-154, P3 154-162. Typically, RR gives higher average turnaround time than SJF, but better response time.
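The Gantt chart above can be reproduced by the round-robin sketch below, which assumes, as the example does, that all four processes are ready at time 0 and that a preempted process rejoins the ready queue behind the others.

```c
/* Round-robin sketch reproducing the Gantt chart above (quantum = 20, all
 * processes ready at time 0). Cycling through a fixed array is equivalent
 * to a FIFO ready queue here, since a preempted process goes to the back. */
#include <stdio.h>

#define N 4

int main(void) {
    const char *name[N] = { "P1", "P2", "P3", "P4" };
    int remaining[N]    = { 53, 17, 68, 24 };
    int quantum = 20, t = 0, left = N;

    while (left > 0) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;               /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("%3d-%3d  %s\n", t, t + slice, name[i]);
            t += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) left--;
        }
    }
    return 0;
}
```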

Choosing a Time Quantum The effect of quantum size on context-switching overhead must be carefully considered: the time quantum must be large with respect to the context-switch time. Turnaround time also depends on the size of the time quantum.

Turnaround Time and the Time Quantum

Multilevel Queue The ready queue is partitioned into separate queues: for example, a queue for foreground (interactive) processes and another for background (batch) processes.

Multilevel Queue Scheduling Each queue may have its own scheduling algorithm: Round Robin, FCFS, SJF, etc. In addition, (meta-)scheduling must be done between the queues. One option is fixed-priority scheduling (i.e. serve first the queue with the highest priority). Problems? Another option is time-slicing: each queue gets a certain amount of CPU time which it can schedule amongst its processes; for example, 50% of the CPU time goes to the highest-priority queue, 20% to the second queue, and so on. We also need to specify which queue a process is put in when it arrives in the system and/or when it starts a new CPU burst.
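As a toy illustration of the fixed-priority option (and of why it raises a starvation concern), the sketch below always dispatches from the highest-priority non-empty queue; the queue contents are made up for the example.

```c
/* Toy fixed-priority meta-scheduler: always dispatch from the highest-priority
 * non-empty queue. Note that queue 2 only runs once queues 0 and 1 are empty,
 * which is exactly the starvation concern. Queue contents are made up. */
#include <stdio.h>

#define NQUEUES 3

int main(void) {
    int jobs[NQUEUES] = { 2, 1, 3 };   /* pending jobs; index 0 = highest priority */

    for (;;) {
        int q = 0;
        while (q < NQUEUES && jobs[q] == 0) q++;   /* highest non-empty queue */
        if (q == NQUEUES) break;                   /* nothing left to run     */
        printf("dispatch one job from queue %d\n", q);
        jobs[q]--;
    }
    return 0;
}
```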

Multilevel Feedback Queue In a multilevel queue scheduling algorithm, processes are permanently assigned to a queue. Idea: allow processes to move among the various queues. Examples: if a process in a queue dedicated to interactive processes consumes too much CPU time, it is moved to a (lower-priority) queue; a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.

Example of Multilevel Feedback Queue Three queues: Q0 (RR with time quantum 8 milliseconds), Q1 (RR with time quantum 16 milliseconds), Q2 (FCFS); Qi has higher priority than Qi+1. Scheduling: a new job enters queue Q0. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1, where it receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
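A minimal sketch of this three-queue policy follows, tracking only how a single job is demoted as it exhausts each quantum; the job names and CPU demands are illustrative assumptions.

```c
/* Sketch of the three-queue feedback policy: a job starts in Q0 (8 ms quantum),
 * is demoted to Q1 (16 ms) if it does not finish, and finally to the FCFS
 * queue Q2. The job names and CPU demands are illustrative. */
#include <stdio.h>

static const int quantum[3] = { 8, 16, 0 };   /* 0 = run to completion (FCFS) */

static void run_job(const char *name, int burst) {
    for (int level = 0; level < 3 && burst > 0; level++) {
        int slice = (quantum[level] == 0 || burst < quantum[level])
                        ? burst : quantum[level];
        printf("%-10s runs %2d ms in Q%d\n", name, slice, level);
        burst -= slice;
        if (burst > 0)
            printf("%-10s demoted to Q%d\n", name, level + 1);
    }
}

int main(void) {
    run_job("editor", 5);       /* finishes within its Q0 quantum       */
    run_job("compile", 20);     /* 8 ms in Q0, remaining 12 ms in Q1    */
    run_job("simulation", 60);  /* 8 ms in Q0, 16 ms in Q1, 36 ms in Q2 */
    return 0;
}
```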

Multilevel Feedback Queue

Multilevel Feedback Queue A multilevel feedback queue scheduler is defined by the following parameters: the number of queues; the scheduling algorithm for each queue; the method used to determine when to upgrade a process; the method used to determine when to demote a process; and the method used to determine which queue a process enters when it needs service. The scheduler can be configured to match the requirements of a specific system.

Unix Scheduler The canonical Unix scheduler uses an MLFQ with 3-4 classes spanning ~170 priority levels (in Solaris: timesharing gets the first 60 priorities, system the next 40, real-time the next 60, and interrupt the next 10). Priority scheduling is used across queues and RR within a queue: the process with the highest priority always runs, and processes with the same priority are scheduled RR. Processes dynamically change priority: priority increases over time if a process blocks before the end of its quantum, and decreases over time if a process uses its entire quantum.
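A toy illustration of this dynamic-priority rule follows; it is not the actual Unix scheduler code. A process that blocks before its quantum expires is rewarded with a boost, while one that consumes its whole quantum is penalized. The 0-59 range and the +5/-10 adjustment steps are assumptions made for the sketch.

```c
/* Toy illustration (not the actual Unix code) of the dynamic-priority rule:
 * blocking before the quantum expires raises priority, using the whole
 * quantum lowers it. The 0-59 range and the +5/-10 steps are assumptions. */
#include <stdio.h>

#define PRIO_MIN 0    /* lowest priority in this sketch  */
#define PRIO_MAX 59   /* highest priority in this sketch */

static int clamp(int p) {
    if (p < PRIO_MIN) return PRIO_MIN;
    if (p > PRIO_MAX) return PRIO_MAX;
    return p;
}

/* Reward I/O-bound behaviour, penalize CPU-bound behaviour. */
static int adjust(int prio, int used_full_quantum) {
    return clamp(used_full_quantum ? prio - 10 : prio + 5);
}

int main(void) {
    int p = 30;
    p = adjust(p, 1);                     /* used the entire quantum: 30 -> 20 */
    p = adjust(p, 0);                     /* blocked early on I/O:    20 -> 25 */
    printf("priority is now %d\n", p);
    return 0;
}
```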

Thread Scheduling When processes also have threads, we have two levels of (pseudo-)parallelism, and scheduling in such systems differs depending on whether user-level or kernel-level threads are supported. With user-level threads, in addition to the process scheduling performed by the O.S., we must have thread scheduling performed by the run-time thread management system of each process. With kernel-level threads, the operating system can directly perform thread scheduling.

Thread scheduling with user-level threads Possible scheduling of user-level threads with a 50 ms time quantum and threads that yield the CPU after 5 ms.

Thread scheduling with kernel-level threads Possible scheduling of kernel-level threads with a 50 ms time quantum and threads that yield the CPU after 5 ms.

Thread scheduling With kernel-level threads, the O.S. can exercise finer control over the execution sequence of threads belonging to different processes. With user-level threads, each process can use its own customized thread scheduling algorithm. With kernel-level threads, however, a thread switch involves more overhead.

Scheduling in SMP Environments Partitioned versus Global Scheduling Processor Affinity Load Balancing Trade-offs

Solaris 9 Scheduling

Windows XP Scheduling Priorities

Linux Scheduling

Scheduling Summary The scheduler (dispatcher) is the module that gets invoked when a context switch needs to happen. The scheduling algorithm determines which process runs and where processes are placed on queues. Scheduling algorithms have many potential goals: utilization, throughput, waiting time, response time, etc. Various algorithms exist to meet these goals: FCFS/FIFO, SJF, Priority, RR. Algorithms can also be combined, as in multilevel feedback queues and the Unix example.