CPU Scheduling. CSC 256/456 - Operating Systems Fall 2014. TA: Mohammad Hedayati




Agenda
o Scheduling Policy Criteria
o Scheduling Policy Options (on Uniprocessor)
o Multiprocessor Scheduling Considerations
o CPU Scheduling in Linux
o Real-Time Scheduling

Process States
As a process executes, it changes state:
o new: The process is being created
o ready: The process is waiting to be assigned to a processor
o running: Instructions are being executed
o waiting: The process is waiting for some event to occur
o terminated: The process has finished execution

Queues of PCBs
o Ready Queue: set of all processes ready for execution
o Device Queue: set of processes waiting for I/O on a particular device
o Processes migrate between the various queues

Scheduling
Question: how should the OS decide which of the several processes to take off the queue?
o We will only worry about the ready queue, but the discussion applies to other queues as well.
Scheduling Policy: which process is given access to resources?

CPU Scheduling
Selects from among the processes/threads that are ready to execute, and allocates the CPU to one of them.
CPU scheduling may take place at:
o hardware interrupts
o software exceptions
o system calls
Non-Preemptive
o scheduling occurs only when the current process terminates or can no longer run (due to I/O, or voluntarily calling sleep() or yield())
Preemptive
o scheduling can occur at any opportunity possible

Some Simplifying Assumptions
For the first few algorithms:
o uni-processor
o one thread per process
o programs are independent
Execution Model: processes alternate between bursts of CPU and I/O.
Goal: deal out CPU time so that chosen performance criteria are optimized.

CPU Scheduling Criteria
o Minimize waiting time: amount of time the process spends waiting in the ready queue.
o Minimize turnaround time: amount of time the system takes to execute a process (= waiting time + execution time in the absence of any other process).
o Minimize response time: amount of time the system takes to produce the first response (interactivity), e.g. echoing back a keystroke in an editor.
o Maximize throughput: number of processes that complete their execution per time unit (usually over a long run).
o Maximize CPU utilization: the proportion of time the CPU is doing useful work.
o Fairness: avoid starvation.

First-Come, First-Served (FCFS)
Process  Arrival  CPU Time
P1       0        24
P2       0        3
P3       0        3
Suppose that the processes arrive in the order P1, P2, P3. The schedule (Gantt chart) is: P1 (0-24), P2 (24-27), P3 (27-30).
Waiting time: P1 = 0, P2 = 24 and P3 = 27 (avg. is 17)
Turnaround time: P1 = 24, P2 = 27 and P3 = 30 (avg. is 27)
Is it fair?
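To make the arithmetic concrete, here is a minimal sketch (not from the slides) that computes FCFS waiting and turnaround times for the three processes above, assuming they all arrive at time 0 in the listed order:

```c
/* Minimal FCFS metrics sketch: all processes arrive at time 0 and run
 * to completion in list order. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};          /* CPU times of P1, P2, P3 */
    int n = sizeof burst / sizeof burst[0];
    int clock = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int wait = clock;              /* time spent in the ready queue */
        int turnaround = clock + burst[i];
        printf("P%d: wait=%d turnaround=%d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_turnaround += turnaround;
        clock += burst[i];             /* process runs to completion */
    }
    printf("avg wait=%.1f avg turnaround=%.1f\n",
           (double)total_wait / n, (double)total_turnaround / n);
    return 0;
}
```

Running it reproduces the averages above (17 and 27).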

FCFS (cont.)
Suppose instead that the processes arrive in the order P2, P3, P1. Now the schedule is: P2 (0-3), P3 (3-6), P1 (6-30).
Waiting time: P1 = 6, P2 = 0 and P3 = 3 (avg. is 3, was 17)
Turnaround time: P1 = 30, P2 = 3 and P3 = 6 (avg. is 13, was 27)
Short processes delayed by a long process: convoy effect.
Pros and cons?

Shortest Job First (SJF)
Associate with each process the length of its CPU time; use these lengths to schedule the process with the shortest CPU time.
Two variations:
o Non-preemptive: once the CPU is given to a process, it cannot be taken away until it completes.
o Preemptive: if a new process arrives with a CPU time less than the remaining time of the currently executing process, preempt (a.k.a. Shortest Remaining Job First, SRJF).
Preemptive SJF is optimal: it gives the minimum average turnaround time for a given set of processes.
Problems:
o we don't know a process's CPU time ahead of time
o is it fair?
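As an illustration of the selection rule, here is a sketch assuming a simple array-based ready list (not any particular OS's code): non-preemptive SJF picks the runnable process with the shortest predicted burst. In practice the burst length must be estimated, e.g. from past bursts.

```c
/* Sketch of non-preemptive SJF selection over an array of processes. */
typedef struct {
    int pid;
    int predicted_burst;   /* estimate of the next CPU burst */
    int ready;             /* non-zero if the process is runnable */
} proc_t;

/* Return the index of the ready process with the shortest predicted
 * burst, or -1 if nothing is ready. */
int sjf_pick_next(const proc_t *p, int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (p[i].ready &&
            (best < 0 || p[i].predicted_burst < p[best].predicted_burst))
            best = i;
    }
    return best;
}
```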

Example of Preemptive SJF
Process  Arrival Time  CPU Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4
SJF (preemptive) schedule: P1 (0-2), P2 (2-4), P3 (4-5), P2 (5-7), P4 (7-11), P1 (11-16).
Waiting time: P1 = 9, P2 = 1, P3 = 0 and P4 = 2 (avg. is 3)
Turnaround time: P1 = 16, P2 = 5, P3 = 1 and P4 = 6 (avg. is 7)

Priority Scheduling
A priority number (integer) is associated with each process; the CPU is allocated to the process with the highest priority.
o Preemptive vs. non-preemptive
SJF is priority scheduling where the priority is the predicted CPU time.
Problem: starvation; low-priority processes may never execute.
Solution: aging; as time progresses, increase the priority of waiting processes (see the sketch below).
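A minimal sketch of aging, assuming the convention (used later for Linux) that a numerically smaller value means higher priority; the structure and the once-every-10-ticks boost are illustrative, not from the slides.

```c
/* Aging sketch: periodically boost processes that keep waiting so that
 * low-priority processes cannot starve forever. */
#define MIN_PRIO 0              /* numerically smallest = highest priority */

typedef struct {
    int priority;               /* current effective priority */
    int wait_ticks;             /* ticks spent waiting in the ready queue */
} task_t;

/* Called periodically (e.g. once per scheduler tick) over the ready queue. */
void age_tasks(task_t *t, int n) {
    for (int i = 0; i < n; i++) {
        t[i].wait_ticks++;
        if (t[i].wait_ticks % 10 == 0 && t[i].priority > MIN_PRIO)
            t[i].priority--;    /* raise priority one step */
    }
}
```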

Round Robin (RR)
Each process gets a fixed unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
o q small: fair, starvation-free, better interactivity
o q large: degenerates to FIFO (q = ∞)
o q must be large with respect to the context-switch cost, otherwise the overhead is too high (see the sketch below)
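To see why q must be large relative to the context-switch cost, a back-of-the-envelope calculation: the fraction of CPU time lost to switching is roughly s/(q+s), where q is the quantum and s the switch cost. The numbers below are assumptions for illustration only.

```c
/* Overhead of RR as a function of quantum and context-switch cost. */
#include <stdio.h>

int main(void) {
    double q = 10.0;   /* quantum in ms (assumed) */
    double s = 0.1;    /* context-switch cost in ms (assumed) */
    /* ~0.99% of CPU time is spent switching with these numbers */
    printf("overhead = %.2f%%\n", 100.0 * s / (q + s));
    return 0;
}
```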

Cost of Context Switch
Direct overhead of a context switch:
o saving the old context, restoring the new context, ...
Indirect overhead of a context switch:
o caching and memory-management overhead

Example of RR with Quantum = 20
Process  CPU Time
P1       53
P2       17
P3       68
P4       24
The schedule is: P1 (0-20), P2 (20-37), P3 (37-57), P4 (57-77), P1 (77-97), P3 (97-117), P4 (117-121), P1 (121-134), P3 (134-154), P3 (154-162).
Typically, RR gives a higher average turnaround time than SJF, but better response time.

Multilevel Scheduling
Ready tasks are partitioned into separate classes:
o foreground (interactive)
o background (batch)
Each class has its own scheduling algorithm:
o foreground: RR
o background: FCFS
Scheduling must also be done between the classes:
o Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
o Time slicing: each class gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to foreground in RR and 20% to background in FCFS.

Multilevel Feedback Scheduling
A process can move between the various queues.
o Aging can be implemented this way.
A multilevel-feedback-queue scheduler is defined by the following parameters:
o number of queues
o scheduling algorithm for each queue
o method used to determine when to upgrade a process
o method used to determine when to demote a process
o method used to determine which queue a process will enter when it needs service

Example of Multilevel Feedback Queue
Three queues (captured as a configuration sketch below):
o Q0: RR with time quantum 8 milliseconds
o Q1: RR with time quantum 16 milliseconds
o Q2: FCFS
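The queue parameters from the previous slide can be captured in a small configuration structure; the sketch below (with illustrative names, not from any real kernel) instantiates the three-queue example.

```c
/* Illustrative MLFQ configuration. */
typedef enum { ALG_RR, ALG_FCFS } sched_alg_t;

typedef struct {
    sched_alg_t alg;     /* scheduling algorithm for this queue */
    int quantum_ms;      /* RR time quantum; ignored for FCFS */
} mlfq_level_t;

/* The three-queue example: Q0 and Q1 are RR, Q2 is FCFS. */
static const mlfq_level_t mlfq_levels[3] = {
    { ALG_RR,   8 },     /* Q0: RR, 8 ms quantum  */
    { ALG_RR,  16 },     /* Q1: RR, 16 ms quantum */
    { ALG_FCFS, 0 },     /* Q2: FCFS              */
};

/* The remaining parameters from the previous slide would live alongside
 * this table: when to demote a process, when to upgrade (aging), and
 * which queue a newly ready process enters. */
```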

Solaris Scheduler Combines time slices, priority, and prediction

Lottery Scheduling
Give processes lottery tickets; choose a ticket at random and allow the process holding that ticket to get the resource. Hold a lottery at periodic intervals.
Properties:
o chance of winning is proportional to the number of tickets held (highly responsive)
o cooperating processes may exchange tickets
o fair-share scheduling is easily implemented by allocating tickets to users and dividing tickets among child processes
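A minimal sketch of one lottery draw (the tickets array and its mapping from index to process are hypothetical); the chance of a process being picked is proportional to the number of tickets it holds.

```c
/* One lottery draw: tickets[i] is the ticket count of process i. */
#include <stdlib.h>

int lottery_pick(const int *tickets, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += tickets[i];
    int winner = rand() % total;       /* winning ticket number */
    for (int i = 0; i < n; i++) {
        if (winner < tickets[i])
            return i;                  /* this process holds the ticket */
        winner -= tickets[i];
    }
    return -1;                         /* unreachable if total > 0 */
}
```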

Multiprocessor Scheduling
Given a set of runnable processes and a set of CPUs, assign processes to CPUs.
Same considerations as before:
o response time, fairness, throughput, ...
But also new considerations:
o ready queue implementation
o load balancing
o affinity
o resource contention

Ready Queue Implementation
Option 1: single shared ready queue (among CPUs)
o Scheduling events occur per-CPU: a local timer interrupt, or the currently executing thread blocks or yields.
o Scheduling code on any CPU needs to access the shared queue, so synchronization is needed.
Option 2: per-CPU ready queue
o Scheduling code accesses only the local queue.
o Load balancing requires only infrequent synchronization.
o Per-CPU variables should lie on separate cache lines (see the sketch below).
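A sketch of the cache-line point for Option 2, assuming a 64-byte line size and GCC-style alignment attributes; field names are illustrative, not from any real kernel.

```c
#define CACHE_LINE 64              /* assumed cache-line size in bytes */
#define NR_CPUS     8              /* assumed number of CPUs */

struct task;                       /* opaque task descriptor */

struct runqueue {
    struct task *head;             /* local list of ready tasks */
    unsigned int nr_running;       /* number of runnable tasks here */
    /* per-queue lock, statistics, ... */
} __attribute__((aligned(CACHE_LINE)));

/* One run queue per CPU; aligning each queue to its own cache line
 * avoids false sharing when different CPUs update their own queues. */
static struct runqueue runqueues[NR_CPUS];
```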

Load balancing
Keep ready queue sizes balanced across CPUs.
o Main goal: one CPU should not idle while others have processes waiting in their queues.
o Secondary: scheduling overhead may increase with queue length.
Push model: a kernel daemon checks queue lengths periodically and moves threads to balance the load.
Pull model: a CPU notices its queue is empty (or shorter than a threshold) and steals threads from other queues.
Systems can also combine both models.

Affinity
As a thread runs, state accumulates in the CPU cache.
o Repeated scheduling on the same CPU can often reuse this state.
o Scheduling on a different CPU requires reloading that state into the new CPU's cache, and possibly invalidating the old one.
Try to keep a thread on the same CPU it used last.
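On Linux, affinity can also be requested explicitly from user space with sched_setaffinity(2); the kernel otherwise applies this soft affinity on its own, as described above. A minimal example pinning the calling thread to CPU 2 (the CPU number is arbitrary):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);                        /* allow CPU 2 only */
    /* pid 0 means "the calling thread" */
    if (sched_setaffinity(0, sizeof set, &set) != 0)
        perror("sched_setaffinity");
    return 0;
}
```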

Contention-aware Scheduling I
Hardware resources are shared (and contended for) in multiprocessors:
o SMP processors share memory bus bandwidth
o multi-core processors share cache
o SMT (hyper-threaded) processors share a lot more
An example on an SMP machine: a web server benchmark delivers around 6300 reqs/sec on one processor, but only around 9500 reqs/sec on an SMP with 4 processors.
Threads load data into the cache, and we can expect multiple threads to thrash each other's state as they run on the same CPU.
o We can try to detect cache needs and schedule threads that share nicely on the same CPU.

SMP-SMT Multiprocessor
(Figure: image from http://www.eecg.toronto.edu/~tamda/papers/threadclustering.pdf)

Contention-aware Scheduling II
Reduce contention by co-scheduling tasks with complementary resource needs:
o e.g. a computation-heavy task and a memory-access-heavy task
o e.g. several threads with small cache footprints may all be able to keep their data in cache at the same time
o e.g. threads with no locality might as well execute on the same CPU, since they almost always miss in the cache anyway

Contention-aware Scheduling III
What if contention on a resource is unavoidable? Two evils of contention:
o high contention leads to performance slowdown
o fluctuating contention leads to uneven application progress over the same amount of time, i.e. poor fairness [Zhang et al. HotOS 2007]
Schedule so that:
o very high contention is avoided
o the resource contention is kept stable

Parallel Job Scheduling "Job" is a collection of processes/threads that cooperate to solve some problem (or provide some service) How the components of the job are scheduled has a major effect on performance Threads in a parallel job are not independent oscheduling them as if they were leads to performance problems owant scheduler to be aware of dependence

Parallel Job Scheduling (cont.)
Threads in a process are not independent:
o They synchronize over shared data: deschedule a lock holder, and the other threads in the job may not get far.
o Cause/effect relationships (e.g. producer-consumer): the consumer is waiting for data on the queue, but the producer is not running.
o Synchronizing phases of execution (barriers): the entire job proceeds at the pace of its slowest thread.
Knowing the threads are related, schedule them all at the same time:
o space sharing
o time sharing with gang scheduling

Space Sharing
Divide CPUs into groups and assign each job to a dedicated set of CPUs.
Pros:
o reduced context-switch overhead (no involuntary preemption)
o strong affinity
o all runnable threads execute at the same time
Cons:
o CPUs in one partition may be idle while multiple jobs wait to run
o difficult to deal with dynamically changing job sizes

Time Sharing
Similar to uniprocessor scheduling: there is a queue of ready tasks, and a task is dequeued and executed when a processor becomes available.
Gang/cohort scheduling:
o utilize all CPUs for one parallel/concurrent application at a time

Multiprocessor Scheduling in Linux 2.6
One ready task queue per processor.
o Scheduling within a processor and its ready task queue is similar to single-processor scheduling.
A task tends to stay in one queue, for cache affinity.
Tasks move around when load is unbalanced.
o E.g., when the length of one queue is less than one quarter of another's.
No native support for gang/cohort scheduling or resource-contention-aware scheduling.

Linux Task Scheduling
Linux 2.5 and up uses a preemptive, priority-based algorithm with two separate priority ranges:
o a time-sharing class for fair preemptive scheduling (nice values ranging from 100 to 140)
o a real-time class that conforms to the POSIX real-time standard (priorities 0-99)
Numerically lower values indicate higher priority; higher-priority tasks get longer time quanta (from 200 ms down to 10 ms).
The ready queue is indexed by priority and contains two priority arrays, active and expired.
Choose the task with the highest priority on the active array; switch the active and expired arrays when the active array is empty. All of this takes O(1) time (see the sketch below).
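A simplified sketch of the O(1) pick-next idea described above (not the actual kernel code): a per-array bitmap records which priority levels have runnable tasks, so finding the highest-priority task reduces to a find-first-set-bit scan.

```c
#include <strings.h>                   /* ffs() */

#define NPRIO 140                      /* number of priority levels (assumed) */

struct list;                           /* opaque per-priority FIFO list */

struct prio_array {
    unsigned int bitmap[(NPRIO + 31) / 32];  /* bit set => queue non-empty */
    struct list *queue[NPRIO];               /* one FIFO list per priority */
};

/* Return the highest (numerically lowest) priority that has a runnable
 * task, or -1 if the array is empty. */
int highest_prio(const struct prio_array *a) {
    for (int w = 0; w < (NPRIO + 31) / 32; w++) {
        int bit = ffs(a->bitmap[w]);   /* 1-based index of first set bit */
        if (bit)
            return w * 32 + bit - 1;
    }
    return -1;
}
```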

Priorities and Time-slice length

List of Tasks Indexed According to Priorities

Real-Time Scheduling
Deadlines:
o time to react
o keep pace with a frequent event
Hard real-time systems are required to complete a critical task within a guaranteed amount of time.
Soft real-time computing requires that critical processes receive priority over less fortunate ones.
Earliest Deadline First (EDF): always run the ready task whose deadline is nearest (see the sketch below).
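A minimal sketch of the EDF selection rule (the task fields are illustrative): among all ready tasks, run the one with the earliest absolute deadline.

```c
/* EDF selection over an array of real-time tasks. */
typedef struct {
    int id;
    long deadline;    /* absolute deadline, e.g. in ticks */
    int ready;        /* non-zero if runnable */
} rt_task_t;

/* Return the index of the ready task with the earliest deadline,
 * or -1 if nothing is ready. */
int edf_pick_next(const rt_task_t *t, int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (t[i].ready && (best < 0 || t[i].deadline < t[best].deadline))
            best = i;
    }
    return best;
}
```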

Disclaimer Parts of the lecture slides were derived from those by Kai Shen, Willy Zwaenepoel, Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Andrew S. Tanenbaum, Angela Demke Brown, and Gary Nutt. The slides are intended for the sole purpose of instruction of operating systems at the University of Rochester. All copyrighted materials belong to their original owner(s).