16 CPU Scheduling

Readings for this topic: Silberschatz/Galvin/Gagne Chapter 5

Until now you have heard about processes and memory. From now on you'll hear about resources, the things operated upon by processes. Resources range from CPU time to disk space to channel I/O time.

Resources fall into two classes:

- Preemptible: e.g. processor or I/O channel. Can take the resource away, use it for something else, then give it back later.
- Non-preemptible: once given, it can't be reused until the process gives it back. Examples are file space, terminal, and maybe memory.

This distinction is a little arbitrary, since anything is preemptible if it can be saved and restored. It generally measures the difficulty of preemption.

The OS makes two related kinds of decisions about resources:

- Allocation: who gets what. Given a set of requests for resources, which processes should be given which resources in order to make most efficient use of the resources? The implication is that the resources aren't easily preemptible.
- Scheduling: how long can they keep it. When more resources are requested than can be granted immediately, in which order should they be serviced? Examples are processor scheduling (one processor, many processes) and memory scheduling in virtual memory systems. The implication is that the resource is preemptible.

Resource #1: the processor. Processes may be in any one of three general scheduling states:

- Running.
- Ready. That is, waiting for CPU time. The scheduler and dispatcher determine transitions between this and the running state.

- Blocked. Waiting for some other event: disk I/O, message, semaphore, etc.

Goals for scheduling disciplines:

- Efficiency of resource utilization (keep CPU and disks busy).
- Minimize overhead (context switches).
- Minimize response time.
- Distribute cycles equitably. What does this mean?

FIFO (also called FCFS): run until finished.

- In the simplest case this means uniprogramming.
- Usually, "finished" means "blocked": one process can use the CPU while another waits on a semaphore. A process goes to the back of the run queue when it becomes ready.
- Problem: one process can monopolize the CPU.
- Solution: limit the maximum amount of time that a process can run without a context switch. This time is called a time slice.

Round Robin: run a process for one time slice, then move it to the back of the queue. Each process gets an equal share of the CPU. Most systems use some variant of this.

What happens if the time slice isn't chosen carefully?

- Too long: one process can monopolize the CPU.
- Too short: too much time wasted in context switches.
- Originally, Unix had 1-second time slices. Too long. Most systems today use time slices of 10-100 milliseconds.

Implementation of priorities: run the highest-priority processes first, and use round-robin among processes of equal priority. Re-insert a process in the run queue behind all processes of greater or equal priority.
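A minimal simulation makes the FIFO/round-robin comparison concrete (an illustrative sketch: the function name is invented, time is measured in abstract slice units, and context-switch overhead is ignored):

```python
from collections import deque

def round_robin(burst_times, time_slice):
    """Simulate round-robin scheduling on one CPU.

    burst_times: total CPU demand of each process, in abstract time units.
    time_slice: maximum units a process may run before being requeued.
    Returns the completion time of each process.
    """
    remaining = list(burst_times)
    queue = deque(range(len(burst_times)))
    completion = [0] * len(burst_times)
    clock = 0
    while queue:
        pid = queue.popleft()
        run = min(time_slice, remaining[pid])  # run for one slice (or less)
        clock += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            queue.append(pid)        # unfinished: back of the run queue
        else:
            completion[pid] = clock  # finished at the current time
    return completion
```

With ten processes of 100 units each and a slice of 1, every process finishes near tick 1000; with a slice of 100 the schedule degenerates to FIFO and the average completion time is 550.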

Round-robin can produce poor response time. Example: ten processes, each requiring 100 time slices. Under round robin they each take about 1000 time slices to finish, whereas under FIFO they would average 550 time slices to finish. This is like government: it is (supposedly) fair, but uniformly inefficient.

Another example: a user types text using emacs; the emacs process competes with 10 compute-bound processes. With a time slice of 100 ms, each keystroke has a response time of 10 × 100 ms = 1 s!

What is the best we can do with respect to response time? STCF: shortest time to completion first, with preemption. This minimizes the average response time.

As an example, consider two processes, one doing 1 ms of computation followed by 10 ms of I/O, the other doing pure computation.

- Suppose we use a 100 ms time slice: the I/O-bound process only runs at 1/10th speed, and the I/O devices are only utilized 10% of the time.
- Suppose we use a 1 ms time slice: then the compute-bound process gets interrupted 9 times unnecessarily for each valid interrupt.
- STCF works quite nicely.

Unfortunately, STCF requires knowledge of the future. Instead, we can use past performance to predict future performance: if a process has already taken a long time, it will probably take a long time more. We use priorities to disfavor long processes.

Exponential Queues (or Multi-Level Feedback Queues) attack both the efficiency and response-time problems:

- Give a newly runnable process a high priority and a very short time slice.
- If the process uses up the time slice without blocking, decrease its priority by 1 and double its time slice for next time.
- Go through the above example, where the initial values are a 1 ms slice and priority 100.

Techniques like this one are called adaptive. They are common in interactive systems.
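The exponential-queue rule can be sketched in a few lines (illustrative only: the class and method names are invented, and a real scheduler would keep one queue per priority level rather than per-process records):

```python
class ExponentialQueueScheduler:
    """Sketch of the exponential-queue (multi-level feedback) idea: new
    processes start at a high priority with a short slice; a process that
    burns its whole slice drops one level and gets a doubled slice."""

    def __init__(self, initial_priority=100, initial_slice_ms=1):
        self.initial_priority = initial_priority
        self.initial_slice_ms = initial_slice_ms
        self.procs = {}  # pid -> [priority, slice_ms]

    def admit(self, pid):
        self.procs[pid] = [self.initial_priority, self.initial_slice_ms]

    def used_full_slice(self, pid):
        # CPU-bound behaviour: demote by 1 and double the next slice
        self.procs[pid][0] -= 1
        self.procs[pid][1] *= 2

    def blocked_early(self, pid):
        # Interactive behaviour: restore high priority and a short slice
        self.procs[pid] = [self.initial_priority, self.initial_slice_ms]

    def pick_next(self, runnable):
        # Larger number = higher priority here; ties broken arbitrarily
        return max(runnable, key=lambda pid: self.procs[pid][0])
```

After a compute-bound process repeatedly exhausts its slice, it runs rarely but cheaply (long slices, few context switches), while an interactive process keeps its 1 ms slice and high priority, which is exactly the adaptive behaviour described above.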

The CTSS system (MIT, early 1960s) was the first to use exponential queues.

Fair-share scheduling (similar to what's implemented in Unix):

- Keep a history of recent CPU usage for each process.
- Give highest priority to the process that has used the least CPU time recently. Highly interactive jobs, like emacs, will use little CPU time and get high priority. CPU-bound jobs, like compiles, will get lower priority.
- Can adjust priorities by changing billing factors for processes. E.g., to make a high-priority process, only count half its recent CPU usage in computing its history.

Example: the 4.4BSD UNIX scheduler.

- Priorities are in the range [0, 127] (0 is highest; 0-49 are reserved for the kernel).
- In the PCB:
  - p_usrpri: user-mode scheduling priority of the process
  - p_estcpu: estimate of the recent CPU utilization of the process
  - p_nice: user-settable weighting factor in the range [-20, 20]
- The user priority is recalculated every 4 clock ticks (40 ms) as:

      p_usrpri ← 50 + p_estcpu/4 + 2 × p_nice

  The resulting value is capped into the range [50, 127].
- p_estcpu is incremented every time the clock ticks (10 ms) and the process is executing.
- In addition, once per second, the CPU utilization of a runnable process is decayed as:

      p_estcpu ← (2 × load / (2 × load + 1)) × p_estcpu + p_nice

  where load is a sampled average of the sum of the lengths of the run queue and the short-term sleep queue (e.g., processes waiting for disk I/O).

Example: let T_i be the number of clock ticks accumulated during interval i, and assume load = 1.
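The two recalculation rules can be written out directly (a simplified sketch: the kernel uses fixed-point integer arithmetic and per-tick bookkeeping, both of which are glossed over here with plain floats):

```python
def usrpri(p_estcpu, p_nice):
    """p_usrpri <- 50 + p_estcpu/4 + 2*p_nice, capped into [50, 127]
    (lower number means higher priority)."""
    pri = 50 + p_estcpu / 4 + 2 * p_nice
    return max(50, min(127, pri))

def decay(p_estcpu, p_nice, load):
    """Once-per-second decay:
    p_estcpu <- (2*load/(2*load+1)) * p_estcpu + p_nice."""
    return (2 * load / (2 * load + 1)) * p_estcpu + p_nice

# With load = 1 the decay factor is 2/3, so after five one-second decays
# only (2/3)**5, about 13%, of an old CPU burst still counts.
```

Iterating decay with load = 1 reproduces the forgetting behaviour worked out numerically in the example that follows.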

    p_estcpu ← 0.66 × T_0
    p_estcpu ← 0.66 × (T_1 + 0.66 × T_0) = 0.66 T_1 + 0.44 T_0
    p_estcpu ← 0.66 T_2 + 0.44 T_1 + 0.30 T_0
    p_estcpu ← 0.66 T_3 + ... + 0.20 T_0
    p_estcpu ← 0.66 T_4 + ... + 0.13 T_0

So about 90% of CPU utilization is forgotten after 5 s.

For sleeping processes, p_estcpu is adjusted when the process wakes up:

    p_estcpu ← (2 × load / (2 × load + 1))^p_slptime × p_estcpu

where p_slptime is the time the process spent sleeping, in seconds.

Lottery scheduling: a different approach to scheduling/dispatch (not based on priorities).

- Every runnable process owns a set of lottery tickets from a fixed-size ticket pool (a range of numbers from 0 to N).
- The fraction of the ticket pool allocated to a process is directly proportional to its target CPU share.
- During dispatch, the OS draws a uniformly distributed random number in the range 0 to N.
- The process that owns the matching ticket gets the CPU.

Start-time fair queuing (STFQ):

- Maintain a virtual time T_i for each process i; initially T_i ← realtime.
- After process i has executed for time t, advance T_i ← T_i + (1/r_i) × t, where r_i is process i's desired CPU share among the currently runnable processes.
- Always schedule the runnable process with minimal T_i.
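The lottery dispatch step can be sketched as follows (illustrative; a real implementation would organize tickets for faster lookup than this linear scan):

```python
import random

def lottery_pick(tickets, rng=random):
    """Lottery dispatch: tickets maps pid -> ticket count. Draw a uniform
    number over the whole pool and return the pid owning that ticket."""
    pool = sum(tickets.values())
    draw = rng.randrange(pool)          # uniform in [0, pool)
    for pid, count in tickets.items():  # walk each process's ticket range
        if draw < count:
            return pid
        draw -= count
```

A process holding 3 of 4 tickets wins roughly 75% of the draws, so its long-run CPU share tracks its ticket fraction, as the bullets above require.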

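The basic STFQ bookkeeping can be sketched as below (a sketch with invented names; the wake-up problem discussed next is deliberately left out):

```python
class STFQ:
    """Start-time fair queuing: track a virtual time T_i per process and
    always run the runnable process with minimal T_i."""

    def __init__(self):
        self.T = {}      # pid -> virtual time T_i
        self.share = {}  # pid -> desired CPU share r_i

    def admit(self, pid, share, realtime):
        self.T[pid] = realtime   # initially T_i <- realtime
        self.share[pid] = share

    def ran_for(self, pid, t):
        # After running for time t: T_i <- T_i + (1/r_i) * t
        self.T[pid] += t / self.share[pid]

    def pick_next(self, runnable):
        # Always schedule the runnable process with minimal T_i
        return min(runnable, key=lambda pid: self.T[pid])
```

A process with a small share r_i sees its T_i advance quickly when it runs, so it is chosen less often; one with a large share advances slowly and runs more, which is how the virtual clocks enforce the target CPU shares.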
What happens if a process is not runnable for a while? Its T_i will fall behind, giving process i a potentially huge advantage when it wakes up. Solution options:

- Upon wake-up, set T_i ← realtime.
- Upon wake-up, set T_i ← T_i + sleeptime.
- Upon wake-up, set T_i ← T_i + sleeptime − d, where d is a system parameter reflecting the amount of CPU time a sleeping process may save up.

Earliest deadline first:

- A real-time scheduling algorithm.
- Requires a priori knowledge of deadlines and CPU requirements.

Rate-based scheduling: grant a certain amount of CPU time at a fixed period.

- Example: 1 ms of CPU time every 100 ms.
- In general, requires a priori knowledge of CPU requirements.

Summary:

- In principle, scheduling algorithms can be arbitrary, since the system should produce the same results in any event.
- However, the algorithms have strong effects on the system's overhead, efficiency, and response time.
- The best schemes are adaptive. To do absolutely best, we'd have to be able to predict the future.
- The best scheduling algorithms tend to give highest priority to the processes that need the least! Reaganomics?