CPU Scheduling. Core Definitions




CPU Scheduling

General rule: keep the CPU busy; an idle CPU is a wasted CPU.
- Major source of CPU idleness: I/O (or waiting for it).
- Many programs have a characteristic CPU-I/O burst cycle: alternating phases of CPU activity and I/O inactivity.
- CPU-bound programs have fewer, longer CPU bursts; I/O-bound programs have more, shorter CPU bursts.

Core Definitions

- CPU scheduling (a.k.a. short-term scheduling) is the act of selecting the next process for the CPU to service once the current process leaves the CPU idle.
- Many algorithms exist for making this selection.
- Implementation-wise, CPU scheduling manipulates the operating system's various PCB queues.
- The dispatcher is the software that performs the dirty work of passing the CPU to the next selected process; the time this takes is called the dispatch latency.

Preemptive vs. Cooperative Scheduling

There are two levels at which a program can relinquish the CPU.

First level: cooperative scheduling
- The process explicitly goes from running to waiting (e.g., a system call for I/O, or awaiting child termination).
- The process terminates.
- Cooperative scheduling (or "cooperative multitasking") was used in older personal computer operating systems (before Windows 95, before Mac OS X), primarily due to what PC hardware could do.

Second level: preemptive scheduling
- An interrupt causes a process to go from the running to the ready state.
- An interrupt causes a process to go from the waiting to the ready state (e.g., asynchronous completion of I/O).
- The key word is "interrupt": this hardware-level feature is required for preemptive scheduling.
- Preemption prevents a process from running away with the CPU, but brings up new issues of its own: coordinating access to shared data, and whether you can interrupt an interrupt.

Scheduling Criteria

There is no one, true scheduling algorithm, because the goodness of an algorithm can be measured by many, sometimes contradictory, criteria:
- CPU utilization: give the CPU steady work.
- Throughput: work per unit time; one definition of "work" is a completed process.
- Turnaround time: real time to process completion.
- Waiting time: time spent in the ready queue.
- Response time: time from request to response.

Note how the purpose of a system determines the appropriateness of a criterion: throughput is applicable mainly when individual program runs correspond to completions of a task, while response time is crucial to interactive systems (e.g., graphical user interfaces).

Typical goals: maximize CPU utilization and throughput, but minimize the various times. Other choices for optimization:
- Optimize the average measure, or the minimum/maximum (i.e., how bad can the worst-case scenario be?).
- Minimize variance in a measure, resulting in better predictability; meaningful for interactive systems, but not much work has been done in this area.
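To make the time-based criteria concrete, here is a minimal sketch in Python (the process names and burst lengths are hypothetical) that computes waiting and turnaround time for a set of processes run to completion in a given order:

```python
def schedule_metrics(jobs):
    """Compute waiting and turnaround times for jobs run to completion
    in the given order. jobs: list of (name, arrival_time, cpu_burst)."""
    clock = 0
    waiting, turnaround = {}, {}
    for name, arrival, burst in jobs:
        clock = max(clock, arrival)       # CPU idles until the job arrives
        waiting[name] = clock - arrival   # time spent in the ready queue
        clock += burst                    # job occupies the CPU for its burst
        turnaround[name] = clock - arrival
    return waiting, turnaround

# Three hypothetical processes, all arriving at time 0, served in order:
w, t = schedule_metrics([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)])
avg_wait = sum(w.values()) / len(w)   # (0 + 24 + 27) / 3 = 17.0
```

Note how reordering the same workload changes the averages; that sensitivity is exactly why the criteria above can pull in different directions.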

First-Come, First-Served (FCFS) Scheduling

- Simple premise: the sooner a process asks for the CPU, the sooner it gets it; subsequent processes wait in line until the ones before them finish up (or fall into an I/O wait state).
- This is a cooperative algorithm: the CPU can't be taken away from a process.
- Simple implementation: a FIFO queue.
- Analysis: order significantly affects the average waiting time; certain combinations of CPU- and I/O-bound processes decrease utilization.

Shortest-Job-First (SJF) Scheduling

- Requires knowledge of CPU burst durations: give the CPU to the process with the shortest next CPU burst; this provably minimizes the average waiting time.
- But how the heck do we know the length of the next CPU burst? SJF is a better fit for batch systems (long-term scheduling), where users can assign time limits to jobs.
- CPU scheduling can approximate SJF by predicting the next burst length, generally via an exponential average.
- In the preemptive flavor, we switch processes if the next CPU burst is shorter than what's left of the current one.
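The exponential-average prediction mentioned above is tau_next = alpha * t + (1 - alpha) * tau, where t is the just-measured burst and tau the previous estimate. A short sketch in Python (alpha and the initial guess are hypothetical tuning values):

```python
def predict_next_burst(history, alpha=0.5, tau0=10.0):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n,
    where t_n is the measured length of the n-th CPU burst and tau0 is an
    arbitrary initial guess."""
    tau = tau0
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 0.5 and initial guess 10, observed bursts 6, 4, 6 give
# estimates 8.0, 6.0, 6.0; recent bursts dominate, old history decays.
estimate = predict_next_burst([6, 4, 6])
```

Larger alpha weights recent behavior more heavily; alpha = 0 ignores measurements entirely, while alpha = 1 uses only the last burst.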

Priority Scheduling

- Assign a priority p to a process (typically lower p = higher priority, but this isn't set in stone), and give the CPU to the process with the highest priority.
- Note how SJF is a special case of priority scheduling: p is the inverse of the next predicted CPU burst.
- Processes of equal priority are scheduled FCFS.
- Priorities range from internally-calculated metrics (time limits, memory requirements, open files, I/O-to-CPU burst ratio) to external factors.
- Comes in both preemptive and cooperative flavors:
  - The preemptive version interrupts the currently running process when a higher-priority process arrives.
  - The cooperative version puts the higher-priority process at the head of the queue and waits for the currently running process to relinquish the CPU.
- Key issue: indefinite blocking, or starvation, of a process; low-priority processes may wait forever if higher-priority ones keep showing up.
- Address starvation through aging: gradually increase a process's priority as its waiting time increases; this caps the maximum waiting time.
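A minimal sketch of aging, in Python: the selection function below (all names and the aging rate are hypothetical) lowers a process's effective priority value as its wait grows, so a starved low-priority process eventually outranks fresh high-priority arrivals.

```python
def pick_next(ready, aging_rate=1):
    """ready: list of (name, base_priority, wait_time); lower value = higher
    priority. Effective priority improves (decreases) as wait_time grows."""
    def effective(proc):
        name, base, waited = proc
        return base - aging_rate * waited
    return min(ready, key=effective)[0]

# A base-priority-8 process that has waited 10 ticks beats a fresh
# priority-1 arrival, so it cannot starve indefinitely:
winner = pick_next([("old_batch_job", 8, 10), ("new_job", 1, 0)])
```

With no waiting time accumulated, the function degenerates to plain priority scheduling; the aging rate controls how quickly waiting compensates for a poor base priority.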

Round-Robin (RR) Scheduling

- Necessarily preemptive: defines a time quantum (or time slice); processes never hold the CPU for more than one quantum at a time.
- Implementation: maintain the ready queue as a circular FIFO, then traverse each PCB.
- Two possibilities:
  - The current process relinquishes the CPU in less than one quantum.
  - One quantum passes, resulting in a timer interrupt.
- In both cases, the "done" process is moved to the tail of the queue, and the new head becomes current.
- Average waiting time is longer with RR, in exchange for better response time with a sufficiently small quantum.
- Note how RR is effectively FCFS if the quantum is sufficiently large, unless CPU bursts are really long.
- With RR, the cost of a context switch gains significance: we want quantum > context-switch time, preferably by multiple orders of magnitude (i.e., milliseconds vs. microseconds). But again, not too large, or else we're back at FCFS.
- Rule of thumb: choose the time quantum so that 80% of CPU bursts are shorter than that quantum.
- Interesting tidbit: turnaround time is not necessarily proportional (directly or inversely) to the time quantum.
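The circular-FIFO mechanics above can be sketched in a few lines of Python (process names, bursts, and the quantum are hypothetical; I/O waits are ignored for brevity):

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, cpu_burst), all ready at time 0. Returns each
    process's completion time under round-robin with the given quantum."""
    queue = deque(jobs)
    clock, finished = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)              # one quantum, or less
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back to the tail
        else:
            finished[name] = clock                 # burst complete
    return finished

# P1 needs 24 time units, P2 and P3 need 3 each; quantum = 4:
done = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
```

With quantum 4, P2 and P3 finish quickly (times 7 and 10) while the long P1 burst cycles through the queue, illustrating the response-time benefit of a small quantum; raising the quantum past 24 would make this schedule identical to FCFS.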

Multilevel Queue Scheduling

- Our first composite algorithm: partition processes into different queues, each with its own scheduling algorithm (as appropriate for that queue).
- Canonical example: interactive processes use RR in one queue, batch processes use FCFS in another.
- Now, of course, we have to schedule among the queues:
  - Priority scheduling: queues have preset priorities.
  - RR scheduling: each queue is given some quantum during which its processes do work.

Multilevel Feedback-Queue Scheduling

- Multilevel queues, plus the ability for a process to move to another queue.
- For example, track CPU burst times and move CPU-bound processes to a lower-priority queue; vice versa for I/O-bound processes.
- Use aging to prevent starvation: long wait times move a process to a higher-priority queue.
- Lots of parameters to play with: number of queues, scheduling algorithm per queue, queue-selection rules.
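The demotion rule for CPU-bound processes can be sketched as follows. This is a deliberately minimal Python model (queue count, quanta, and process names are hypothetical), and it omits the aging path that would promote long-waiting processes back up:

```python
from collections import deque

class MLFQ:
    """Minimal multilevel feedback queue: levels with growing quanta;
    a process that uses its full quantum is demoted one level."""
    def __init__(self, quanta=(2, 4, 8)):
        self.quanta = quanta
        self.levels = [deque() for _ in quanta]

    def admit(self, name, burst):
        self.levels[0].append((name, burst))   # new work enters the top queue

    def step(self):
        """Run the front process of the highest non-empty queue for up to
        one quantum; return (name, level) or None if every queue is empty."""
        for lvl, queue in enumerate(self.levels):
            if queue:
                name, remaining = queue.popleft()
                remaining -= min(self.quanta[lvl], remaining)
                if remaining > 0:              # used the full quantum: demote
                    dest = min(lvl + 1, len(self.levels) - 1)
                    self.levels[dest].append((name, remaining))
                return name, lvl
        return None

# A 10-unit CPU-bound burst sinks from level 0 to 1 to 2 as it runs:
mlfq = MLFQ()
mlfq.admit("cpu_hog", 10)
```

Each call to step() here runs the hog at a successively lower level (0, then 1, then 2), which is exactly the "track burst times and push CPU-bound work down" behavior described above.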

Multiple-Processor Scheduling

- Potential for better overall performance.
- Focus on homogeneous multiprocessors, which allow any available processor to run any available process.
- Two approaches (most modern OSes do SMP):
  - Asymmetric multiprocessing has a single master processor (i.e., the one running OS code), relegating the other processors to running user code only.
  - Symmetric multiprocessing (SMP) allows each processor to make decisions for itself ("self-scheduling").
- Processor affinity: processes tend to perform better if they stay on the same CPU, particularly due to caching.
  - Soft affinity tries to keep a process on its CPU, but doesn't absolutely guarantee it.
  - Hard affinity disallows process migration.
- Load balancing: ideally, all CPUs have about the same amount of work at any given time; this counteracts processor affinity somewhat.
  - Process migration may be push (overloaded CPUs dump work on idle ones) or pull (idle CPUs yank someone else's work); many systems do both (Linux, ULE in BSD).
- Symmetric multithreading (SMT): the hardware ability to present a single physical CPU as multiple logical CPUs. The OS doesn't really need to know, but it may help to be aware of which physical CPU has which logical CPUs.
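Pull migration can be illustrated with a toy model in Python. This is only a sketch (the runqueue layout and the load threshold are hypothetical stand-ins for real balancing heuristics), and note that migrating a task this way is exactly what processor affinity pushes against:

```python
def pull_migrate(runqueues, threshold=2):
    """runqueues: dict of cpu -> list of task names. Each idle CPU pulls
    one task from the most loaded CPU, provided that CPU holds more than
    `threshold` tasks."""
    for cpu, rq in runqueues.items():
        if not rq:                                       # idle CPU looks for work
            busiest = max(runqueues, key=lambda c: len(runqueues[c]))
            if len(runqueues[busiest]) > threshold:
                rq.append(runqueues[busiest].pop())      # migrate one task over
    return runqueues

# CPU 1 is idle, CPU 0 is overloaded, so one task migrates:
balanced = pull_migrate({0: ["A", "B", "C"], 1: []})
```

A push variant would instead have the overloaded CPU initiate the transfer; systems that do both converge on balanced runqueues faster at the cost of more cross-CPU cache traffic.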

Thread Scheduling

- Contention scope determines a thread's competition for the CPU: either with other threads within the same process (process-contention scope) or with all other threads in the system (system-contention scope).
- Contention scope is typically bound to the operating system's threading model: many-to-one and many-to-many use process-contention scope; one-to-one uses system-contention scope.

Scheduling Examples

- Solaris: priority-based with 4 classes (real time, system, interactive, time sharing); multilevel feedback queue for the time-sharing and interactive classes; the lower the priority, the higher the time quantum.
- Windows XP: priority-based with preemption; 32 priority levels split into a variable class (1-15) and a real-time class (16-31); the priority-0 thread does memory management; a special idle thread runs if no ready threads are found.
- Linux: priority-based with preemption; 141 priority levels split into real-time (0-99) and nice (100-140); the higher the priority, the higher the time quantum; for SMP, each CPU has its own runqueue.
- Mac OS X: policy-based with preemption; priority is embedded in the scheduling policy. A standard policy uses a system-defined fair algorithm; a time-constraint policy serves real-time needs; a precedence policy allows externally-set priorities.

Evaluating Scheduling Algorithms

As you've seen, different algorithms have different strengths, and no single one is ideal for absolutely every situation. So, we need techniques for quantifying the characteristics of each algorithm in order to make an informed choice.

First things first: we need to specify the quantitative metrics that make an algorithm "good" for our particular need, for example:
- Maximum CPU utilization with a maximum response time of 1 second.
- Maximum throughput with turnaround time linearly proportional to execution time.

Evaluation Techniques

- Deterministic modeling: real calculations on exact cases; simple and accurate, but requires exact information.
- Analytic evaluation: the superclass of deterministic modeling; reasoned, direct study of an algorithm's properties.
- Queueing models: model system resources as servers with queueing properties, then solve (e.g., via Little's formula).
- Simulations: create a model of the system, then process statistical or real data (trace tapes).
- Implementations: just do it, then measure.
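Deterministic modeling is the easiest technique to demonstrate. The sketch below (in Python, with a hypothetical workload) compares average waiting time for the same five bursts under arrival order versus shortest-job-first order, all processes arriving at time 0:

```python
def avg_wait(bursts):
    """Average waiting time when the given CPU bursts run back-to-back
    (all arriving at time 0) in the listed order."""
    clock, total_wait = 0, 0
    for burst in bursts:
        total_wait += clock   # this job waited `clock` time units so far
        clock += burst        # then it runs to completion
    return total_wait / len(bursts)

workload = [10, 29, 3, 7, 12]
fcfs = avg_wait(workload)           # serve in arrival order
sjf = avg_wait(sorted(workload))    # serve shortest burst first
```

For this exact input, FCFS averages 28 time units of waiting and SJF averages 13; the result is precise but, as noted above, holds only for this one case, which is deterministic modeling's core limitation.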