Operating System: Scheduling




Process Management

The OS maintains a data structure for each process called the Process Control Block (PCB). Information associated with each PCB:
- Process state: e.g. ready, waiting, etc.
- Program counter: address of the next instruction
- CPU registers: PC, working registers, stack pointer, condition codes
- CPU scheduling information: scheduling order and priority
- Memory-management information: page-table/segment-table pointers
- Accounting information: bookkeeping, e.g. amount of CPU time used
- I/O status information: list of I/O devices allocated to this process

(Spring 2010. Based on slides for Operating System Concepts by Silberschatz, Galvin and Gagne, 2008. Additional material by Diana Palsetia.)

Process (Running Program) Behavior

- Maximum CPU utilization is obtained with multiprogramming.
- CPU-I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait.

CPU Scheduler

Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates

Scheduling under 1 and 4 is non-preemptive; all other scheduling is preemptive.
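The PCB fields above can be sketched as a simple record type. This is an illustrative Python sketch only, not any real kernel's layout; all field names here are assumptions chosen to mirror the bullet list.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Hypothetical, simplified Process Control Block.
    Real kernels store far more (and architecture-specific) state."""
    pid: int
    state: str = "ready"          # process state: "ready", "running", "waiting", ...
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)     # saved CPU registers
    priority: int = 0             # CPU-scheduling information
    page_table_base: int = 0      # memory-management information
    cpu_time_used: float = 0.0    # accounting information
    open_devices: list = field(default_factory=list)  # I/O status information

p = PCB(pid=1)
p.state = "running"   # the scheduler would update this on dispatch
```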

Ready Queue Representation

[Figure: representation of the ready queue of PCBs; not reproduced in this transcription. Source: Operating System Concepts (7th ed.), Silberschatz, Galvin and Gagne]

Dispatcher

The dispatcher module gives control of the CPU to the process selected by the scheduler. This involves:
- switching context
- switching to user mode
- jumping to the proper location in the user program to restart that program

Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.

Context Switch

[Figure: time due to a context switch; not reproduced in this transcription.]

Efficiency Issues to Consider

- Context-switch overhead (dispatch latency): saving the PCB of one process, loading the PCB of the other. For example, if the scheduler is invoked every 100 ms and takes 10 ms to decide the order of execution of the processes, then 10/(100 + 10) = 9% of the CPU is being used simply for scheduling the work.
- Fairness: give each process a fair share.
- Priority: allow more important processes to run first.
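The overhead arithmetic above generalizes to any decision time and scheduling interval; a minimal sketch (function name is my own, not from the slides):

```python
def scheduling_overhead(decision_ms: float, interval_ms: float) -> float:
    """Fraction of CPU time spent on scheduling when the scheduler
    spends decision_ms deciding for every interval_ms of real work."""
    return decision_ms / (interval_ms + decision_ms)

# The slide's example: 10 ms of scheduling per 100 ms interval,
# i.e. 10/110, roughly the 9% quoted above.
print(round(scheduling_overhead(10, 100) * 100, 1))  # 9.1
```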

Scheduling Algorithm Metrics

- CPU utilization: fraction of time the CPU is utilized
- Throughput: number of processes completed per time unit
- Turnaround time: amount of time to execute a particular process (waiting in the ready queue + executing on the CPU + doing I/O)
- Response time: amount of time from when a request is submitted until the first response is produced; the key metric for interactive systems

Note: for simplicity of illustration and discussion, the examples use only one CPU burst per process, comparison is done with average waiting time (waiting time is the sum of the periods spent waiting in the ready queue), and context-switch time is taken as negligible.

Scheduling Scheme: First Come First Served (FCFS)

The process that requests the CPU first is allocated the CPU. FCFS is implemented using a FIFO queue: when a process enters the ready queue, its PCB is linked to the tail, and the process at the head of the queue is given to the CPU. FCFS is non-preemptive and hence easy to implement, but wait time varies substantially if the processes that come first are CPU-intensive.

FCFS Example

Process  Execution Time
P1       24
P2       3
P3       3

Suppose the processes arrive in the order P1, P2, P3. The Gantt/time chart for the schedule is:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

Waiting time for P1 = 0, P2 = 24, P3 = 27; average waiting time = (0 + 24 + 27)/3 = 17.

Now suppose the processes arrive in the order P2, P3, P1:

| P2 (0-3) | P3 (3-6) | P1 (6-30) |

Waiting time for P1 = 6, P2 = 0, P3 = 3; average waiting time = (6 + 0 + 3)/3 = 3, much better than the previous case.

Convoy effect: all processes wait for the one big process to get off the CPU.
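The FCFS numbers above are easy to reproduce mechanically. A minimal sketch, assuming all processes arrive at time 0 in queue order (as in the example):

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process under FCFS, given CPU bursts
    in arrival order; every process is assumed present at time 0."""
    waits, clock = [], 0
    for burst in burst_times:
        waits.append(clock)   # time spent waiting before this burst starts
        clock += burst        # the CPU runs this burst to completion
    return waits

# Arrival order P1, P2, P3 (bursts 24, 3, 3): waits [0, 24, 27], average 17.
print(sum(fcfs_waiting_times([24, 3, 3])) / 3)   # 17.0
# Arrival order P2, P3, P1: waits [0, 3, 6], average 3.
print(sum(fcfs_waiting_times([3, 3, 24])) / 3)   # 3.0
```

Running the long burst first is exactly the convoy effect: reordering the same workload cuts the average wait from 17 to 3.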

Scheduling Algorithm: Shortest Job First (SJF)

The process with the shortest execution time takes priority over the others. Associate with each process the length of its next CPU execution time, obtained by estimating the process's run time for its next execution.

- Non-preemptive: once the CPU is given to a process, it cannot be preempted until its CPU execution time is complete. If two short jobs have the same execution time, their times of arrival break the tie.
- Preemptive: if a new process arrives with a CPU execution time less than the remaining time of the currently executing process, preempt. Also known as Shortest Remaining Time First.

SJF with Preemption Example

Process  Arrival Time  Next CPU Execution Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

Preemptive SJF schedule:

| P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |

Average waiting time = (9 + 1 + 0 + 2)/4 = 3.

SJF Advantages and Disadvantages

- Advantage: SJF is optimal; it gives the minimum average waiting time for a given set of processes.
- Disadvantages: it needs a good heuristic to guess the next CPU execution time, and short-duration processes can starve longer ones.

Estimating the Next CPU Execution Time

Use previous values of the execution time and predict the new time using the exponential averaging method:
1. t_n = actual length of the nth CPU execution time
2. τ_{n+1} = predicted value for the next CPU execution time
3. α, 0 <= α <= 1
4. Formula: τ_{n+1} = α·t_n + (1 - α)·τ_n

Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor.
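Both ideas above can be checked with a short simulation. This is an illustrative sketch (one-time-unit steps, ties broken by arrival order), not taken from the slides:

```python
def srtf_waiting_times(jobs):
    """Preemptive SJF (Shortest Remaining Time First) simulation.
    jobs: list of (arrival_time, burst_time), ordered by arrival.
    Returns per-job waiting time = finish - arrival - burst."""
    n = len(jobs)
    remaining = [burst for _, burst in jobs]
    finish = [0] * n
    clock = 0
    while any(r > 0 for r in remaining):
        # Among arrived, unfinished jobs pick the shortest remaining time;
        # min() keeps the earliest-arriving job on ties.
        ready = [i for i in range(n) if jobs[i][0] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1          # run the chosen job for one time unit
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    return [finish[i] - jobs[i][0] - jobs[i][1] for i in range(n)]

jobs = [(0, 7), (2, 4), (4, 1), (5, 4)]       # P1..P4 from the example
waits = srtf_waiting_times(jobs)
print(waits, sum(waits) / 4)                  # [9, 1, 0, 2] 3.0

def predict_next_burst(actual_bursts, alpha=0.5, tau0=10.0):
    """Exponential averaging: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n."""
    tau = tau0
    for t in actual_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau
```

With α = 1/2 and τ0 = 10, a single observed burst of 6 yields a prediction of 8, halfway between history and observation.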

Example of Exponential Averaging

[Figure: actual CPU burst times t_i plotted against the predicted burst times τ_i, with α = 1/2 and τ0 = 10; not reproduced in this transcription.]

Scheduling Scheme: Priority

A priority number (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority. If processes have the same priority, they are scheduled according to FCFS. SJF is an example of priority scheduling.

- Problem: starvation; low-priority processes may never execute.
- Solution: aging; as time progresses, increase the priority of the process.

Scheduling Scheme: Round Robin (RR)

Each process gets a small unit of CPU time called the time quantum or slice. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n-1)q time units.

Performance:
- If q is large, RR becomes like FCFS.
- q should not be so small that it requires too many context switches.

What Do Real Systems Use? Multi-level Feedback Queues

- N priority levels, with priority scheduling between levels and RR within a level
- Quantum size decreases as the priority level increases
- A process at a given level is not scheduled until all higher-priority queues are empty
- If a process does not complete within its quantum at a priority level, it is moved to the next lower priority level
- If a process voluntarily relinquishes the CPU (becomes I/O-bound), it is moved to a higher-priority queue
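The claim that a large quantum makes RR behave like FCFS can be seen directly in a small simulation. An illustrative sketch, assuming all processes are in the ready queue at time 0:

```python
from collections import deque

def round_robin_waits(burst_times, quantum):
    """Round Robin simulation; all processes ready at time 0.
    Returns each process's waiting time (finish - burst)."""
    n = len(burst_times)
    remaining = list(burst_times)
    finish = [0] * n
    queue = deque(range(n))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run at most one quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # preempted: back to the tail
        else:
            finish[i] = clock
    return [finish[i] - burst_times[i] for i in range(n)]

# With a quantum larger than any burst, RR reduces to FCFS:
print(round_robin_waits([24, 3, 3], quantum=100))   # [0, 24, 27]
# With q = 4 the short jobs no longer sit behind the whole long burst:
print(round_robin_waits([24, 3, 3], quantum=4))     # [6, 4, 7]
```

Note the trade-off: the short jobs' waits drop sharply while the long job pays a modest penalty, which is why quantum choice matters.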

User Threads

Implemented by a library at the user level, which supports thread creation, scheduling, and management with no support from the kernel.
- Advantage: fast and easy to create (no overhead of system calls)
- Disadvantage: if one thread blocks, then all threads of the process block

Kernel Threads

The OS has its own threads, also known as LWPs (Light-Weight Processes). An LWP can be viewed as a virtual CPU onto which the thread library's scheduler schedules user-level threads.
- Advantage: if one thread blocks, the OS can schedule another thread of the application to execute
- Disadvantage: slower to create and manage, since creation and management happen via system calls

Thread Models: Many-to-One Model

All application-level threads map to a single kernel-level scheduled entity: threads of a process are not visible to the OS/kernel, threads are not kernel-scheduled entities, and the process is the only kernel-scheduled entity. The problem with this model is that it constrains concurrency. Since there is only one kernel-scheduled entity, only one thread per process can execute at a time, so the process cannot benefit from hardware acceleration on multithreaded processors or multiprocessor computers.

Thread Models: One-to-One Model

Maps each user thread to a schedulable entity in the kernel. All user threads are visible to the kernel, all user threads are kernel-scheduled entities, and all threads can execute concurrently. The drawback of this model is that creating and managing each thread entails creating a corresponding kernel thread.

Thread Models: Hybrid (Many-to-Many) Model

Offers the speed of library threads and the concurrency of kernel threads by multiplexing many user-level threads onto a smaller or equal number of kernel threads. There are two levels of thread scheduling:
- the library manages the scheduling of library threads onto kernel-scheduled entities
- the kernel manages the scheduling of kernel-scheduled entities onto the processor(s)

Thread Scheduling: Contention Scope

In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on LWPs. This is known as process-contention scope (PCS), since the scheduling competition is within the process. A kernel thread scheduled onto an available CPU uses system-contention scope (SCS): competition among all threads in the system.

In Pthreads, the scope can be set on a thread attribute object (pthread_attr_t) via pthread_attr_setscope(..) and read back via pthread_attr_getscope(..). By default the scope used is PCS. pthread.h also provides the macros PTHREAD_SCOPE_PROCESS and PTHREAD_SCOPE_SYSTEM.

Thread Scheduling: Scheme

Most user-level thread libraries provide getting/setting of scheduling policies. Example Pthreads API:

int pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy);
int pthread_attr_getschedpolicy(const pthread_attr_t *attr, int *policy);

policy -> SCHED_FIFO, SCHED_RR, and SCHED_OTHER