COSC243 Part 2: Operating Systems

Lecture 17: CPU Scheduling
Zhiyi Huang, Dept. of Computer Science, University of Otago

Overview
Last lecture: cooperating processes and data sharing.
This lecture: criteria for scheduling algorithms, and some scheduling algorithms: first-come-first-served, shortest-job-first, priority scheduling, round-robin scheduling, and multilevel queue scheduling.
Note: you will have a TUTORIAL EXAM on CPU scheduling in tutorial 11A (16-17 May). It's worth 10%. The questions in Tutorial 10A are practice for this exam.

CPU Scheduling: A Recap
A CPU scheduler is the kernel process which determines how to move processes between the ready queue and the CPU.
[Figure: queueing diagram showing the ready queue feeding the CPU and the I/O device queues, with transitions for making an I/O request, forking a child, waiting for an interrupt, and time-slice expiry.]

Context Switching
When the operating system switches between processes, it has a fair amount of housekeeping to do. This housekeeping is known as context switching.
[Figure: timeline of processes P0 and P1. On an interrupt or system call the operating system saves the running process's state into its PCB (PCB0 or PCB1) and reloads the other process's state; one process sits idle while the other executes.]

Terminology: Scheduler and Dispatcher
The scheduler decides which process to give to the CPU next, and when to give it. Its decisions are carried out by the dispatcher. Dispatching involves:
- switching context;
- switching to user mode;
- jumping to the proper location in the new program.
Dispatch latency: the time it takes the dispatcher to do this.

Why Do We Want a Scheduler? (1)
One key motivation behind CPU scheduling is to keep the CPU busy. This means removing processes from the CPU while they're waiting. If processes never had to wait, then scheduling wouldn't increase CPU utilisation. However, it's a fact about processes that they tend to exhibit a CPU burst cycle: an alternating sequence of CPU bursts and I/O bursts.

How Long is a CPU Burst?
This is the kind of frequency curve we can expect:
[Figure: histogram of CPU burst duration (ms, 0-40) against frequency (0-140), typically showing a large number of short bursts and a small number of long ones.]

Why Do We Want a Scheduler? (2)
Another reason for having a scheduler is so that processes don't have to spend too much time waiting for the CPU. Even if the CPU is always busy, executing processes in different orders can change the average amount of time a process spends queueing for the CPU.
[Figure: bar chart comparing the total CPU time needed by processes P1-P4.]

Why Do We Want a Scheduler? (3)
Another reason for having a scheduler is so that interactive processes always respond quickly. One question is how long a process spends waiting for the CPU in total; a different question is how long on average it waits in between visits to the CPU. (The latter is what matters for interactive processes.)
[Figure: two timelines. With fast CPU switching, P1 and P2 alternate frequently; with slower switching, each gets longer but less frequent turns on the CPU.]

Criteria for Scheduling Algorithms
- CPU utilisation: the percentage of time that the CPU is busy.
- Throughput: the number of processes that are completed per time unit.
- Turnaround time (for a single process): the length of time from when the process was submitted (arrived) to when it is completed.
- Waiting time (for a single process): the total amount of time the process spends waiting for the CPU.
- Response time (for a single process): the average time from the submission of a request to the process until the first response is produced.
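To make these definitions concrete, here is a minimal Python sketch (not from the slides); the function names and the example numbers are purely illustrative, and it assumes we are measuring a single CPU burst, so waiting time is just turnaround time minus the burst length.

```python
# Illustrative helpers (not from the slides): per-process metrics for a
# single CPU burst, given its arrival, burst length, and completion time.

def turnaround_time(arrival, completion):
    """Time from submission (arrival) to completion."""
    return completion - arrival

def waiting_time(arrival, burst, completion):
    """Total time spent in the ready queue = turnaround minus CPU time."""
    return turnaround_time(arrival, completion) - burst

# Hypothetical process: arrives at 2 ms, needs 5 ms of CPU, completes at 20 ms.
print(turnaround_time(2, 20))     # 18
print(waiting_time(2, 5, 20))     # 13
```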

Terminology: The Ready Queue
[Figure: the ready queue as a linked list of process control blocks (PCBs), from head to tail. Each PCB holds the process ID number, a pointer to the next PCB, the process state, the program counter, the contents of the CPU registers, memory-management information, I/O status information, and accounting information.]
Remember: there are two kinds of waiting:
- waiting for the CPU (in the ready queue);
- waiting for an I/O device (in a device queue).
Don't be confused when you hear about processes waiting in the ready queue!

Terminology: Preemption
There are four situations in which scheduling decisions take place:
1. a process switches from the running to the waiting state;
2. a process switches from running to ready (due to an interrupt);
3. a process switches from the waiting to the ready state (due to completion of I/O);
4. a process terminates.
In a non-preemptive scheduling system, scheduling takes place only under 1 and 4, where there is no choice about what happens. In a preemptive scheduling system, scheduling can take place under 2 and 3 as well.

Implementing a Preemptive System
Implementing preemption is hard. What if a process is preempted while a system call is being executed? Kernel data (e.g. I/O queues) might be left in an inconsistent state. Earlier versions of UNIX dealt with this problem by waiting until system calls were completed before switching context.
Some systems: MS Windows 3.1 and below is non-preemptive; Windows 95, NT, XP etc. are preemptive; Linux is fully preemptive as of kernel 2.6.

1) First-Come-First-Served Scheduling
The simplest method is to execute the processes in the ready queue on a first-come-first-served (FCFS) basis.
[Figure: the ready queue as a linked list of PCBs, from head to tail.]
When a process becomes ready, it is put at the tail of the queue. When the currently executing process terminates, or waits for I/O, the process at the front of the queue is selected next. This algorithm is non-preemptive.

Gantt Charts
The operation of a scheduling algorithm is commonly represented in a Gantt chart. Consider the following process information:

  Process   Arrival Time   Burst Time
  P1        0              24
  P2        1              3
  P3        2              3

(N.B. We're just looking at the initial CPU burst of each process.)
The Gantt chart for FCFS with the above data is:

  P1: 0-24 | P2: 24-27 | P3: 27-30

Gantt Charts for Algorithm Evaluation

  Process   Arrival Time   Burst Time
  P1        0              24
  P2        1              3
  P3        2              3

  P1: 0-24 | P2: 24-27 | P3: 27-30

Waiting times:
- P1: 0 ms
- P2: 24 - 1 = 23 ms
- P3: 27 - 2 = 25 ms
Average waiting time: (0 + 23 + 25) / 3 = 16 ms

Gantt Charts for Algorithm Evaluation
Now suppose the same jobs arrive in a different order, with the long process last:

  Process   Arrival Time   Burst Time
  P1        2              24
  P2        0              3
  P3        1              3

  P2: 0-3 | P3: 3-6 | P1: 6-30

Waiting times:
- P1: 6 - 2 = 4 ms
- P2: 0 ms
- P3: 3 - 1 = 2 ms
Average waiting time: (4 + 0 + 2) / 3 = 2 ms
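As a sanity check on the two examples above, here is a minimal FCFS sketch (not part of the slides); the function name and data layout are my own.

```python
# Minimal FCFS sketch: compute each process's waiting time for its first CPU
# burst, assuming processes are served strictly in arrival order and the CPU
# is idle until the first process arrives.

def fcfs_waiting_times(processes):
    """processes: list of (name, arrival, burst), sorted by arrival time."""
    time = 0
    waits = {}
    for name, arrival, burst in processes:
        start = max(time, arrival)      # the CPU may be idle until this arrival
        waits[name] = start - arrival   # time spent in the ready queue
        time = start + burst            # run the whole burst to completion
    return waits

# First example: average waiting time is (0 + 23 + 25) / 3 = 16 ms.
print(fcfs_waiting_times([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
# Second example: average waiting time is (4 + 0 + 2) / 3 = 2 ms.
print(fcfs_waiting_times([("P2", 0, 3), ("P3", 1, 3), ("P1", 2, 24)]))
```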

FCFS: Advantages and Disadvantages
Advantages:
- Easy to implement.
- Easy to understand.
Disadvantages:
- Waiting time not likely to be minimal.
- Convoy effect: lots of small processes can get stuck behind one big one. (Q: could throughput, the number of processes per time unit going through the system, be improved?)
- Bad response time (so bad for time-sharing systems).

2) Shortest-Job-First Scheduling
If we knew in advance which process on the list had the shortest burst time, we could choose to execute that process next. This method is called shortest-job-first (SJF) scheduling.
Example (all processes in the ready queue at time 0):

  Process   Burst Time
  P1        6
  P2        8
  P3        7
  P4        3

  P4: 0-3 | P1: 3-9 | P3: 9-16 | P2: 16-24

N.B. processes with equal burst times are executed in FCFS order.
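A minimal sketch of non-preemptive SJF selection (not from the slides), assuming, as in the example, that every process is already in the ready queue:

```python
# Non-preemptive SJF reduces to sorting the ready queue by burst time.
# sorted() is stable, so processes with equal burst times keep their FCFS order.

def sjf_order(processes):
    """processes: list of (name, burst) in arrival (FCFS) order."""
    return [name for name, burst in sorted(processes, key=lambda p: p[1])]

print(sjf_order([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]))
# ['P4', 'P1', 'P3', 'P2'] -- the same order as the Gantt chart above
```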

SJF: Advantages and Disadvantages
Advantages:
- Provably optimal average waiting time.
Disadvantages:
- You never know in advance what the length of the next CPU burst is going to be.
- Possibility of long processes never getting executed?

Predicting the Next CPU Burst Length
It's possible to approximate the length of the next CPU burst: it's likely to be similar in length to the previous CPU bursts. A commonly used formula is the exponential average of CPU burst lengths:

  τ(n+1) = α · t(n) + (1 - α) · τ(n)

where
- τ(n) is the predicted length of CPU burst n;
- t(n) is the actual length of CPU burst n;
- α is a value between 0 and 1.
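Here is the exponential average in code (a sketch, not from the slides); the initial guess, the value of α, and the observed burst lengths are purely illustrative.

```python
# Exponential average predictor: tau is the current prediction, t is the
# burst length we just observed, alpha weights recent history.

def exponential_average(tau, t, alpha=0.5):
    """Return the prediction for the next CPU burst, tau(n+1)."""
    return alpha * t + (1 - alpha) * tau

tau = 10.0                                 # illustrative initial guess, in ms
for observed in [6, 4, 6, 4, 13, 13, 13]:  # illustrative burst lengths
    tau = exponential_average(tau, observed)
    print(round(tau, 2))                   # 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0
```

With α = 0.5 each new prediction sits halfway between the previous prediction and the most recent burst, so the estimate tracks the bursts while smoothing out one-off spikes.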

Preemption and SJF Scheduling
Scenario:
- A process P1 is currently executing.
- A new process P2 arrives before P1 is finished.
- P2's burst time is shorter than the remaining burst time of P1.
Non-preemptive SJF: P1 keeps the CPU. Preemptive SJF: P2 takes the CPU.
Tiny example:

  Process   Arrival Time   Burst Time
  P1        0              8
  P2        1              4

Preemptive SJF Gantt chart:

  P1: 0-1 | P2: 1-5 | P1: 5-12
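For completeness, a minimal sketch of preemptive SJF, also called shortest-remaining-time-first (not from the slides); it advances time one unit at a time, which is enough for a tiny integer example like the one above.

```python
# Preemptive SJF sketch: at every time unit, run the ready process with the
# least remaining burst time; ties are broken by arrival time.

def srtf_schedule(processes):
    """processes: list of (name, arrival, burst).
    Returns the name of the process running in each time unit."""
    remaining = {name: burst for name, arrival, burst in processes}
    timeline, time = [], 0
    while any(r > 0 for r in remaining.values()):
        ready = [(name, arrival) for name, arrival, _ in processes
                 if arrival <= time and remaining[name] > 0]
        if not ready:
            timeline.append(None)                       # CPU idle this unit
        else:
            name = min(ready, key=lambda p: (remaining[p[0]], p[1]))[0]
            remaining[name] -= 1
            timeline.append(name)
        time += 1
    return timeline

print(srtf_schedule([("P1", 0, 8), ("P2", 1, 4)]))
# ['P1', 'P2', 'P2', 'P2', 'P2', 'P1', 'P1', 'P1', 'P1', 'P1', 'P1', 'P1']
# i.e. P1 runs 0-1, P2 runs 1-5, P1 runs 5-12, matching the Gantt chart.
```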

3) Priority Scheduling
In priority scheduling, each process is allocated a priority when it arrives; the CPU is allocated to the process with the highest priority. Priorities are represented by numbers, with low numbers being highest priority.
Can priority scheduling be preemptive? Yes, there is no reason why not.
What's the relation of SJF scheduling to priority scheduling? SJF is a type of priority scheduling: specifically, one where the priority of a process is set to be its estimated next CPU burst (if low numbers are assumed to denote high priorities).

Starvation and Aging
Starvation occurs when a process waits indefinitely to be allocated the CPU. Priority scheduling algorithms are susceptible to starvation. Imagine a process P1 is waiting for the CPU, and a stream of higher-priority processes is arriving. If these processes arrive sufficiently fast, P1 will never get a chance to execute.
A solution to the starvation problem is to increase the priority of processes as a function of how long they've been waiting for the CPU. (This is called aging.)
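One way to picture aging (a sketch of my own, not the lecture's exact mechanism; the names and aging step are illustrative): each time a process is passed over, its priority number is reduced, so it cannot be starved forever.

```python
# Aging sketch: low numbers mean high priority, as on the previous slide.

def pick_with_aging(ready, aging_step=1):
    """ready: dict of process name -> priority number (lower = better).
    Returns the chosen process and ages (improves) every other priority."""
    chosen = min(ready, key=ready.get)
    for name in ready:
        if name != chosen:
            ready[name] = max(0, ready[name] - aging_step)
    return chosen

ready = {"P1": 10, "P2": 1}
schedule = []
for _ in range(6):
    schedule.append(pick_with_aging(ready, aging_step=2))
    ready["P2"] = 1          # pretend another priority-1 process keeps arriving
print(schedule)              # ['P2', 'P2', 'P2', 'P2', 'P2', 'P1']
```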

4) Round-Robin Scheduling
Round-robin (RR) scheduling is designed for time-sharing systems. A small unit of time (the time quantum) is defined, and the ready queue is treated as a circular list. The CPU scheduler goes round the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
[Figure: the ready queue of PCBs feeding the CPU; a process leaves the CPU either to perform an I/O operation or because its time quantum has expired, in which case it rejoins the tail of the ready queue.]

An Example of Round-Robin
Say we have an RR algorithm with a quantum of 3, and the following process info:

  Process   Arrival Time   Burst Time
  P1        0              8
  P2        1              4
  P3        4              4

The Gantt chart will look like this:

  P1: 0-3 | P2: 3-6 | P1: 6-9 | P3: 9-12 | P2: 12-13 | P1: 13-15 | P3: 15-16

Remaining burst time at the end of each slot: P1=5, P2=1, P1=2, P3=1, P2=0, P1=0, P3=0.
Note: in RR it's possible that a process re-enters the ready queue at the same time that another process arrives in the queue. We'll use FCFS as a tie-breaker here too.
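The following sketch (not from the slides) simulates round robin and reproduces this Gantt chart; the function name and data layout are my own, and new arrivals that coincide with a preemption are queued ahead of the preempted process.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst), sorted by arrival time.
    Returns a list of (name, start, end) time slots."""
    remaining = {name: burst for name, arrival, burst in processes}
    arrivals = deque(processes)
    ready, schedule, time = deque(), [], 0
    while arrivals or ready:
        while arrivals and arrivals[0][1] <= time:   # admit new arrivals
            ready.append(arrivals.popleft()[0])
        if not ready:                                # CPU idle until next arrival
            time = arrivals[0][1]
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        start, time = time, time + run
        remaining[name] -= run
        schedule.append((name, start, time))
        while arrivals and arrivals[0][1] <= time:   # arrivals during this slot
            ready.append(arrivals.popleft()[0])
        if remaining[name] > 0:                      # preempted: back to the tail
            ready.append(name)
    return schedule

print(round_robin([("P1", 0, 8), ("P2", 1, 4), ("P3", 4, 4)], quantum=3))
# [('P1', 0, 3), ('P2', 3, 6), ('P1', 6, 9), ('P3', 9, 12),
#  ('P2', 12, 13), ('P1', 13, 15), ('P3', 15, 16)]
```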

Changing the Time Quantum in RR Scheduling
If the time quantum is set to be infinitely large, RR scheduling reduces to FCFS scheduling. If the time quantum is set to be very small, we can talk of processor sharing. However, there are drawbacks to making the time quantum very small. Let's say we make the time quantum the same as the context-switch time: then we'll spend half our time on context switching!

RR: Advantages and Disadvantages
It all depends on the size of the time quantum:
- If it's big, we get the advantages/disadvantages of FCFS scheduling.
- If it's very small, we get faster response time but slower throughput. (Why? Because there's more time spent context-switching.)
- Even if you ignore context-switch time, turnaround time goes down if most processes complete their next CPU burst within a single quantum. For instance, say there are 3 processes, each with a next CPU burst of 10. If the quantum size is 1, the average turnaround time is 29; but if the quantum size is 10, each process finishes within its first quantum, and the average turnaround time is 20.
- If we take context-switch time into account, making the time quantum small also has the effect of increasing turnaround time.
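The turnaround figures above can be checked with a small simulation (a sketch of mine, not from the slides); like the slide, it ignores context-switch cost and assumes all processes arrive at time 0.

```python
def rr_average_turnaround(n_procs, burst, quantum):
    """Average turnaround of n identical processes under round robin."""
    remaining = [burst] * n_procs
    time, finish = 0, [None] * n_procs
    while None in finish:
        for i in range(n_procs):
            if remaining[i] == 0:
                continue
            run = min(quantum, remaining[i])
            time += run
            remaining[i] -= run
            if remaining[i] == 0:
                finish[i] = time      # arrival is 0, so this is the turnaround
    return sum(finish) / n_procs

print(rr_average_turnaround(3, 10, quantum=1))    # 29.0
print(rr_average_turnaround(3, 10, quantum=10))   # 20.0
```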

5) Multilevel Queue Scheduling
Let's say you have two groups of processes: interactive processes and batch processes. What you really want is to run different scheduling algorithms for the two different groups.
Multilevel queue scheduling:
- Split the ready queue into a number of different queues, each with its own scheduling algorithm.
- Implement a scheduling algorithm to decide which queue is next allocated the CPU. (Preemptive priority scheduling is often used.)

Exercises
For this lecture, you should have read Chapter 5 (Sections 1, 2, 3 and 7) of Silberschatz et al.
For next lecture:
1. Read Chapter 6 (Sections 1-7).
2. The Unix command nice can be used to run processes at different priorities. Read the man page for nice. Using one of the programs you have written on UNIX, try the following:
   /bin/nice -n 19 [program]
   /bin/nice -n 10 [program]
   Notice any difference in speed?