CPU Scheduling. Overview: CPU Scheduling. [SGG7/8/9] Chapter 5. Basic Concepts. CPU Scheduler. Scheduling Criteria. Optimization Criteria


TDDB68 Concurrent Programming and Operating Systems
CPU Scheduling

Overview:
- CPU bursts and I/O bursts
- Scheduling criteria
- Scheduling algorithms
- Priority inversion problem
- Multiprocessor scheduling
- [SGG7/8/9] Chapter 5
- Appendix: Scheduling algorithm evaluation
- Appendix: Short introduction to real-time scheduling

Copyright notice: The lecture notes are mainly based on Silberschatz's, Galvin's and Gagne's book ("Operating System Concepts", 7th ed., Wiley, 2005). No part of the lecture notes may be reproduced in any form, due to the copyrights reserved by Wiley. These lecture notes should only be used for internal teaching purposes at Linköping University.

Christoph Kessler, IDA, Linköpings universitet

Basic Concepts
- Maximum CPU utilization is obtained with multiprogramming.
- CPU-I/O burst cycle: process execution consists of a cycle of CPU execution alternating with I/O waits.
- The distribution of CPU-burst lengths is typically shown as a histogram of typical CPU-burst times.

CPU Scheduler
- Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
- CPU scheduling decisions may take place when a process:
  0. is admitted
  1. switches from running to waiting state
  2. switches from running to ready state
  3. switches from waiting to ready state
  4. terminates
- Scheduling under 1 and 4 only is nonpreemptive; all other scheduling is preemptive.

Scheduling Criteria
- CPU utilization: keep the CPU as busy as possible.
- Throughput: number of processes that complete their execution per time unit.
- Turnaround time: amount of time to execute a particular process.
- Waiting time: amount of time a process has been waiting in the ready queue.
- Response time: amount of time from when a request was submitted until the first response is produced, not including output time (for time-sharing environments).
- Deadlines met? In real-time systems (see later).

Optimization Criteria
- Maximize CPU utilization
- Maximize throughput
- Minimize turnaround time
- Minimize waiting time
- Minimize response time
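As a concrete illustration of these criteria, the following is a minimal sketch (not from the slides; the function name and the tuple format are my own) that computes average waiting time, average turnaround time and throughput for a completed non-preemptive schedule:

```python
# Hypothetical helper: compute scheduling criteria for one finished
# (non-preemptive) schedule, given (arrival, start, finish) per process.

def criteria(jobs):
    """jobs: list of (arrival, start, finish) tuples for one schedule."""
    n = len(jobs)
    turnaround = [f - a for a, s, f in jobs]   # completion time - arrival time
    waiting = [s - a for a, s, f in jobs]      # time spent in the ready queue
    makespan = max(f for _, _, f in jobs)
    return {
        "avg_turnaround": sum(turnaround) / n,
        "avg_waiting": sum(waiting) / n,
        "throughput": n / makespan,            # processes completed per time unit
    }

# The FCFS example below (P1 = 24, P2 = 3, P3 = 3, all arriving at time 0):
res = criteria([(0, 0, 24), (0, 24, 27), (0, 27, 30)])
print(res)   # average waiting 17.0, average turnaround 27.0, throughput 0.1
```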

First-Come, First-Served (FCFS, FIFO) Scheduling

Example:
        Process   Burst Time
        P1        24
        P2        3
        P3        3
Suppose that the processes arrive at time 0 in the order P1, P2, P3.
The Gantt chart for the schedule is:

  | P1                    | P2   | P3   |
  0                       24     27     30

Waiting time for P1 = 0, P2 = 24, P3 = 27.
Average waiting time: (0 + 24 + 27) / 3 = 17.
(Waiting time of Pi = start time of Pi - arrival time of Pi.)
FCFS is normally used for non-preemptive batch scheduling, e.g. printer queues (i.e., burst time = job size).

FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order P2, P3, P1 instead.
The Gantt chart for the schedule is:

  | P2   | P3   | P1                    |
  0      3      6                       30

Waiting time for P1 = 6, P2 = 0, P3 = 3.
Average waiting time: (6 + 0 + 3) / 3 = 3 - much better!
Convoy effect: short processes wait behind a long process.
Idea: shortest job first?

Shortest-Job-First (SJF) Scheduling
- Associate with each process the length of its next CPU burst. Use these lengths to schedule the shortest ready process.
- Two schemes:
  - Nonpreemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
  - Preemptive SJF: preempt if a new process arrives with a CPU burst length less than the remaining time of the currently executing process. Also known as Shortest-Remaining-Time-First (SRTF).
- SJF is optimal: it gives the minimum average waiting time for a given set of processes.

Example of Non-Preemptive SJF
        Process   Arrival Time   Burst Time
        P1        0.0            7
        P2        2.0            4
        P3        4.0            1
        P4        5.0            4
With non-preemptive SJF:

  | P1        | P3 | P2     | P4     |
  0           7    8        12       16

Average waiting time = (0 + 6 + 3 + 7) / 4 = 4

Example of Preemptive SJF
Same process set as above; with preemptive SJF:

  | P1 | P2 | P3 | P2 | P4     | P1           |
  0    2    4    5    7        11             16

Average waiting time = (9 + 1 + 0 + 2) / 4 = 3

Predicting the Length of the Next CPU Burst
- We can only estimate the length, based on the lengths of previous CPU bursts, using exponential averaging:
  1. t_n = actual length of the n-th CPU burst
  2. tau_{n+1} = predicted value for the next CPU burst
  3. alpha, with 0 <= alpha <= 1
  4. Define: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
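The non-preemptive SJF example above can be reproduced with a small event-driven simulator. This is a sketch; the function name and the heap-based structure are my own, not from the slides:

```python
# Minimal non-preemptive SJF simulator: whenever the CPU becomes free,
# pick the ready process with the shortest next CPU burst.

import heapq

def sjf_nonpreemptive(procs):
    """procs: dict name -> (arrival, burst). Returns waiting time per process."""
    events = sorted((arr, name) for name, (arr, _) in procs.items())
    ready, waiting, t, i = [], {}, 0.0, 0
    while len(waiting) < len(procs):
        # admit everything that has arrived by time t
        while i < len(events) and events[i][0] <= t:
            name = events[i][1]
            heapq.heappush(ready, (procs[name][1], name))  # key: burst length
            i += 1
        if not ready:                       # CPU idle until the next arrival
            t = events[i][0]
            continue
        burst, name = heapq.heappop(ready)
        waiting[name] = t - procs[name][0]  # time spent in the ready queue
        t += burst                          # run to completion (no preemption)
    return waiting

w = sjf_nonpreemptive({"P1": (0.0, 7), "P2": (2.0, 4), "P3": (4.0, 1), "P4": (5.0, 4)})
print(w, sum(w.values()) / 4)   # waiting times 0, 6, 3, 7; average 4.0
```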

Examples of Exponential Averaging
- Extreme cases:
  - alpha = 0: tau_{n+1} = tau_n, i.e. recent history does not count.
  - alpha = 1: tau_{n+1} = t_n, i.e. only the latest CPU burst counts.
- Otherwise, expanding the formula:
  tau_{n+1} = alpha t_n + (1 - alpha) alpha t_{n-1} + ... + (1 - alpha)^j alpha t_{n-j} + ... + (1 - alpha)^{n+1} tau_0
- Since both alpha and (1 - alpha) are less than 1, each successive term has less weight than its predecessor.

Priority Scheduling
- A priority value (integer) is associated with each process.
- The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
  - Can be preemptive or nonpreemptive.
- SJF is priority scheduling where the priority is the predicted next CPU burst time.
- Problem: starvation. Low-priority processes may never execute.
- Solution: aging. As time progresses, increase the priority of the process.

Round Robin (RR)
- Each process gets a small unit of CPU time, a time quantum, usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
- Given n processes in the ready queue and time quantum q, each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n - 1) q time units.
- Performance:
  - q very large: degenerates to FCFS.
  - q very small: many context switches.
  - q must be large with respect to the context-switch time, otherwise the overhead is too high.

Example: RR with Time Quantum q = 20
        Process   Burst Time
        P1        53
        P2        17
        P3        68
        P4        24
The Gantt chart is:

  | P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
  0    20   37   57   77   97   117  121  134  154  162

Typically, RR gives higher average turnaround time than SJF, but better response time.

Time Quantum and Context Switches
- A smaller time quantum leads to more context switches.

RR: Turnaround Time Varies With Time Quantum
- In a four-process example with time quantum q = 1, the turnaround times are 3, 9, 15 and 17, so the average turnaround time is (3 + 9 + 15 + 17) / 4 = 11.
- The average turnaround time can be improved if most processes finish their next CPU burst in a single time quantum.
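The exponential-averaging predictor defined above takes only a few lines. The following is a sketch; the observed burst values and the initial estimate tau_0 are made-up illustration data:

```python
# Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n

def predict(bursts, tau0, alpha):
    """Return the successive predictions tau_1 .. tau_n."""
    taus = [tau0]
    for t in bursts:
        taus.append(alpha * t + (1 - alpha) * taus[-1])
    return taus[1:]

# With alpha = 1/2, tau_0 = 10 and observed bursts 6, 4, 6, 4:
print(predict([6, 4, 6, 4], tau0=10, alpha=0.5))   # [8.0, 6.0, 6.0, 5.0]
```

With alpha = 1/2, each prediction is the midpoint of the last observed burst and the previous prediction, so the estimate tracks the recent history while damping noise.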

RR: Turnaround Time Varies With Time Quantum (Cont.)
- For the same four-process example with time quantum q = 2, the turnaround times are 5, 10, 14 and 17, so the average turnaround time is (5 + 10 + 14 + 17) / 4 = 11.5.
- Again, the average turnaround time can be improved if most processes finish their next CPU burst in a single time quantum.

Problems with RR and Priority Schedulers
- Priority-based scheduling may cause starvation for some processes.
- Round-robin-based schedulers are maybe too fair... we sometimes want to prioritize some processes.
- Solution: multilevel queue scheduling...?

Multilevel Queue
- The ready queue is partitioned into separate queues, e.g.:
  - foreground (interactive)
  - background (batch)
- Useful when processes are easily classified into different groups with different characteristics.
- Each queue has its own scheduling algorithm:
  - foreground: RR
  - background: FCFS
- Scheduling must also be done between the queues:
  - Fixed-priority scheduling: serve all from the foreground queue, then from the background queue. Possibility of starvation.
  - Time slicing: each queue gets a certain share of the CPU time, which it can schedule amongst its processes. Example: 80% to foreground in RR, 20% to background in FCFS.

Multilevel Feedback Queue
- A process can move between the various queues; aging can be implemented this way.
- Time-sharing among the queues in priority order: processes in lower queues get the CPU only if the higher queues are empty.
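The RR example with q = 20 from earlier can be checked with a small simulator. This is a sketch; the function name and the deque-based structure are my own, and all processes are assumed to arrive at time 0:

```python
# Round-robin simulator: run each process for at most q time units,
# then move it to the end of the ready queue if it is not finished.

from collections import deque

def round_robin(bursts, q):
    """bursts: dict name -> burst time. Returns (slices, finish_times)."""
    queue = deque(bursts)              # ready queue, FIFO order
    remaining = dict(bursts)
    t, slices, finish = 0, [], {}
    while queue:
        name = queue.popleft()
        run = min(q, remaining[name])
        t += run
        slices.append((name, t))       # (process, end time of this time slice)
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)         # preempted: back to the end of the queue
        else:
            finish[name] = t
    return slices, finish

slices, finish = round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, q=20)
print([s for s, _ in slices])
# ['P1', 'P2', 'P3', 'P4', 'P1', 'P3', 'P4', 'P1', 'P3', 'P3']
print(finish)   # P2 at 37, P4 at 121, P1 at 134, P3 at 162
```

The slice sequence and end times match the Gantt chart of the q = 20 example above.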

Example of a Multilevel Feedback Queue
- Three queues (highest to lowest priority):
  - Q0: RR with q = 8 ms
  - Q1: RR with q = 16 ms
  - Q2: FCFS
- Scheduling:
  - A new job enters queue Q0, which is served RR. When it gains the CPU, the job receives 8 milliseconds. If it does not finish within 8 milliseconds, it is moved to Q1.
  - At Q1 the job is again served RR and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to Q2.

Multilevel Feedback Queue
A multilevel-feedback-queue scheduler is defined by the following parameters:
- number of queues
- scheduling algorithm for each queue
- method used to determine when to upgrade a process
- method used to determine when to demote a process
- method used to determine which queue a process will enter when it needs service
- priority level of each queue

Synchronization Might Interfere with Priority Scheduling: the Priority Inversion Problem
Scenario:
- A low-priority thread L takes resource R and executes...
- Then a mid-priority thread M preempts L and executes...
- Then a high-priority thread H preempts M and executes...
- ...until H needs resource R. But R is occupied (owned by L), so H waits.
- Now M is awake and running, and H has to wait for an unrelated thread M to finish...?
Can be resolved by priority inheritance: L temporarily inherits H's priority while working on R.

TDDB68 Concurrent Programming and Operating Systems
Multiprocessor Scheduling

Multiprocessor Scheduling
- CPU scheduling is more complex if multiple CPUs are available.
- Multiprocessor (SMP): homogeneous processors, shared memory.
- Multi-core processors: cores share the L2 cache and memory (e.g., Intel Xeon dual-core, 2005: per-core L1 instruction and data caches, shared L2 cache and memory controller).
- Multithreaded processors: HW threads / logical CPUs share basically everything but the register sets.
  - HW context switch on load
  - Cycle-by-cycle interleaving
  - Simultaneous multithreading
- Parallel jobs: work sharing; centralized vs. local task queues, load balancing.
- Supercomputing applications often use a fixed-size process configuration (SPMD, 1 thread per processor) and implement their own methods for scheduling and resource management (goal: load balancing).

Multiprocessor Scheduling Approaches
Multi-CPU scheduling approaches for sequential and parallel tasks:
- Common ready queue (SMP only) - a critical section! Supported by Linux, Solaris, Windows XP, Mac OS X.
  - Job-blind scheduling (FCFS, SJF, RR as above): schedule and dispatch processes one by one as any CPU becomes available.
  - Affinity-based scheduling: guided by data locality (cache contents, loaded pages).
  - Co-scheduling / gang scheduling for parallel jobs.
- Processor-local ready queues:
  - Load balancing by task migration: push migration vs. pull migration (work stealing).
  - Linux: push load balancing every 200 ms, pull load balancing whenever the local task queue is empty.
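Pull migration with processor-local ready queues can be sketched as follows. This is an illustration only, not the Linux implementation; the class shape and the deque discipline (pop locally from one end, steal from the other) are my own assumptions:

```python
# Sketch of pull migration (work stealing): an idle CPU pulls a task
# from the ready queue of the most loaded CPU.

from collections import deque

class CPU:
    def __init__(self, cpu_id):
        self.id = cpu_id
        self.queue = deque()                  # processor-local ready queue

    def next_task(self, all_cpus):
        if self.queue:
            return self.queue.popleft()       # run from the local queue
        # local queue empty: pull-migrate a task from the busiest CPU
        victim = max(all_cpus, key=lambda c: len(c.queue))
        if victim.queue and victim is not self:
            return victim.queue.pop()         # steal from the far end
        return None                           # nothing runnable anywhere

cpus = [CPU(0), CPU(1)]
cpus[0].queue.extend(["A", "B", "C"])
print(cpus[1].next_task(cpus))   # CPU 1 is idle and steals "C" from CPU 0
```

Stealing from the opposite end of the victim's queue is a common convention: it reduces contention with the victim and tends to migrate the tasks with the least cache affinity.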

Affinity-based Scheduling
- A cache contains copies of data recently accessed by its CPU.
- If a process is rescheduled to a different CPU (with its own cache):
  - the old cache contents are invalidated by new accesses;
  - many cache misses occur when restarting on the new CPU, causing much bus traffic and many slow main-memory accesses.
- Policy: try to avoid migration to another CPU if possible. A process has affinity for the processor on which it is currently running.
  - Hard affinity (e.g. Linux): migration to another CPU is forbidden.
  - Soft affinity (e.g. Solaris): migration is possible but undesirable.

Scheduling Communicating Threads
- Frequently communicating threads / processes (e.g., in a parallel program) should be scheduled simultaneously on different processors; otherwise each time quantum is partly wasted on idle time while one thread waits for a partner that is not currently running.

Co-Scheduling / Gang Scheduling
- Tasks can be parallel (have more than one process/thread).
- Global, shared RR ready queue.
- Execute processes/threads from the same job simultaneously, rather than maximizing processor affinity.
- Example: the undivided co-scheduling algorithm:
  - Place threads from the same task in adjacent entries in the global queue (e.g., A1 A2 A3 A4 B1 B2 B3 C1 C2 C3 C4 C5).
  - A window on the ready queue has size equal to the number of processors.
  - All threads within the same window execute in parallel for at most one time quantum.
- RR is fair (no indefinite postponement).
- Programs designed to run in parallel profit from the multiprocessor environment.
- May reduce processor affinity.

Summary: CPU Scheduling
- Goals: enable multiprogramming; CPU utilization, throughput, ...
- Scheduling algorithms:
  - Preemptive vs. non-preemptive scheduling
  - RR, FCFS, SJF
  - Priority scheduling
  - Multilevel queue and multilevel feedback queue
- Priority inversion problem
- Multiprocessor scheduling
- Appendix: scheduling algorithm evaluation, real-time scheduling
- In the book (Chapter 5): scheduling in Solaris, Windows, Linux

Appendix: Scheduling Algorithm Evaluation
(Material for further reading / self-study)
- Analytical modeling:
  - Deterministic modeling: takes a particular predetermined workload and defines the performance of each algorithm for that workload.
  - Queuing models.
- Simulation models.
- Implementation.
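Deterministic modeling, as described above, can be sketched by evaluating each algorithm on one fixed workload. The helper below is my own illustration (all processes are assumed to arrive at time 0), reusing the burst times of the FCFS example from earlier:

```python
# Deterministic modeling sketch: compare average waiting time of two
# algorithms on one predetermined workload.

def avg_waiting(bursts):
    """Average waiting time when jobs run to completion in list order."""
    t, total = 0, 0
    for b in bursts:
        total += t          # this job waited until all earlier jobs finished
        t += b
    return total / len(bursts)

workload = [24, 3, 3]                      # burst times, arrival order
print(avg_waiting(workload))               # FCFS in arrival order: 17.0
print(avg_waiting(sorted(workload)))       # SJF (shortest first): 3.0
```

The numbers match the FCFS example: serving the two short jobs first cuts the average waiting time from 17 to 3 on this particular workload, which is exactly the kind of statement deterministic modeling produces.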

Appendix: Real-Time Scheduling

Real-Time Systems
- Example scenario (a mobile phone): network access (search, maintain link, talk), display (clock, browse web), camera (take photo, optional), keyboard (button pressed).
- Tasks are event-driven and/or time-driven, and have deadlines.

Utility Functions
- A utility function v_i(t) gives the value of a task's result relative to time:
  - release time r_i and deadline d_i;
  - after the deadline, the utility drops (utility loss / time penalty).

Real-Time Scheduling
- Hard real-time systems:
  - Missing a deadline can have catastrophic consequences.
  - Utility requires that critical processes receive priority over less important ones.
  - A critical task is required to complete within a guaranteed amount of time.
  - Require special scheduling algorithms: RM, EDF, ...
- Soft real-time computing:
  - Missing a deadline leads to degradation of service, e.g., lost frames / pixelized images in digital TV.
- Often periodic tasks or reactive computations.

Real-Time CPU Scheduling
- Time-triggered or event-triggered (e.g., sensor measurement).
- Periodic tasks: each task i has a period p_i, a (relative) deadline d_i and a computation time t_i.
  - CPU utilization of task i: t_i / p_i.
- Sporadic tasks (no period, but a relative deadline).
- Aperiodic tasks (no period, no deadline).

Rate-Monotonic Scheduling (RM)
- Uses strict priority-driven preemptive scheduling.
- Priority = inverse of the period: the task with the shortest period has the highest priority.
- Schedulability analysis:
  - Worst case: start all tasks simultaneously.
  - Simulate an execution of the system until the lowest-priority task has finished for the first time... if it meets its deadline, then all is OK (the task set is schedulable).
- Example: P1 (period 50, computation time 20) has higher priority than P2 (period 100, computation time 35); both meet their deadlines.
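The RM schedulability simulation described above can be sketched as a tick-by-tick fixed-priority simulator over one hyperperiod. This is my own simplified illustration, assuming deadlines equal to periods and integer time units:

```python
# Rate-monotonic simulation: shorter period = higher priority; a deadline
# is missed if a job is still unfinished when its next release arrives.

from math import gcd

def rm_schedulable(tasks):
    """tasks: list of (period, compute_time). True if no deadline is missed."""
    hyper = 1
    for p, _ in tasks:
        hyper = hyper * p // gcd(hyper, p)       # hyperperiod = lcm of periods
    order = sorted(range(len(tasks)), key=lambda i: tasks[i][0])
    remaining = [0] * len(tasks)
    for t in range(hyper):
        for i, (p, c) in enumerate(tasks):
            if t % p == 0:
                if remaining[i] > 0:             # previous job not finished
                    return False                 # deadline (= period) missed
                remaining[i] = c                 # release a new job
        for i in order:                          # run highest-priority ready job
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return all(r == 0 for r in remaining)

print(rm_schedulable([(50, 20), (100, 35)]))   # True: the example above
print(rm_schedulable([(50, 25), (80, 35)]))    # False: see the next example
```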

Missed Deadlines with Rate-Monotonic Scheduling
- Example: P1 (period 50, computation time 25), P2 (period 80, computation time 35).
- P1 preempts P2 at time 50; P2 misses its deadline at time 80.

Earliest-Deadline-First (EDF) Scheduling
- Priorities are assigned according to deadlines: the earlier the deadline, the higher the priority; the later the deadline, the lower the priority.
- In the example above, P2 keeps the CPU at time 50, as its deadline is closer.

SJF Becomes EDF
- EDF follows the same principle as the preemptive version of SJF.
- Schedulability: EDF is an optimal algorithm. If the CPU utilization is below 100%, the task set is schedulable.

RM vs. EDF
- RM:
  - Simpler algorithm.
  - On overload, always the lowest-priority task fails first.
  - Harder schedulability analysis; not an optimal algorithm.
  - Utilization-based schedulability test: schedulability guaranteed only for utilization below about 69%.
  - Simulation-based schedulability test.
- EDF:
  - Overload may propagate delays all over the task set.
  - Optimal algorithm; simple schedulability analysis (total utilization below 100% is always OK).

Dynamic Scheduling with Dynamic Admission Control
- Task life cycle: activation -> acceptance test -> (if accepted) ready -> scheduling -> run -> termination; if rejected, the task is not admitted. A running task may be preempted back to ready, or wait on a busy resource until it is signaled.
- The acceptance test is also known as admission control.
- Perform admission control at run time when tasks have dynamic activations, i.e. the arrival times are not known a priori: execute the test every time a new task comes into the system.

Minimizing Latency
- Event latency is the amount of time from when an event occurs to when it is serviced.
- Event latency = interrupt latency + dispatch latency.
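The two utilization-based tests quoted above can be written down directly. The RM bound n * (2^(1/n) - 1), which approaches ln 2 (about 69%) for large n, is the classic Liu-Layland condition; note that it is sufficient but not necessary, so a False result is merely inconclusive for RM:

```python
# Utilization-based schedulability tests for EDF (exact) and RM (sufficient).

def utilization(tasks):
    """tasks: list of (period, compute_time)."""
    return sum(c / p for p, c in tasks)

def edf_test(tasks):
    return utilization(tasks) <= 1.0                     # exact for EDF

def rm_test(tasks):
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)  # sufficient only

set1 = [(50, 20), (100, 35)]              # U = 0.40 + 0.35  = 0.75
print(edf_test(set1), rm_test(set1))      # True True  (0.75 <= 2*(sqrt(2)-1) ~ 0.83)
set2 = [(50, 25), (80, 35)]               # U = 0.50 + 0.4375 = 0.9375
print(edf_test(set2), rm_test(set2))      # True False (inconclusive for RM)
```

The second task set illustrates the RM vs. EDF comparison: EDF can schedule it (utilization below 100%), while under RM it is exactly the missed-deadline example above.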

Interrupt Latency
- Interrupt latency is the period of time from when an interrupt arrives at the CPU to when it is serviced (i.e., until the interrupt service routine runs).

Dispatch Latency
- Dispatch latency is the amount of time required for the scheduler to stop one process and start another.
- The dispatch latency includes a period of conflict, e.g., due to priority inversion.