
Scheduling levels

Long-term scheduling: selects which jobs are allowed to enter the system. Only used in batch systems.
Medium-term scheduling: performs swap-in/swap-out operations when the processes do not fit in primary memory.
Short-term scheduling: determines which of the processes in the ready queue is selected for execution. The process switch itself is performed by a dispatcher.

A decision to switch the running process can take place under the following circumstances:
1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.

When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is non-preemptive; otherwise it is preemptive.
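The four decision points above can be captured in a small sketch. The event names below are hypothetical labels, not part of any real API; the point is that a non-preemptive scheduler only acts on cases 1 and 4:

```python
# Cases 1 and 4: the running process gives up the CPU voluntarily.
NON_PREEMPTIVE_EVENTS = {"running->waiting", "terminate"}
# Cases 2 and 3 additionally allow the scheduler to take the CPU away.
ALL_EVENTS = NON_PREEMPTIVE_EVENTS | {"running->ready", "waiting->ready"}

def may_switch(event, preemptive):
    """Return True if the scheduler is allowed to pick a new process now."""
    return event in ALL_EVENTS if preemptive else event in NON_PREEMPTIVE_EVENTS
```

For example, a clock interrupt that moves the running process back to the ready state (case 2) triggers a switch only under a preemptive scheme.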

Scheduling goals and criteria

Possible goals for a scheduling algorithm:
- Be fair.
- Maximize throughput.
- Be predictable.
- Give short response times to interactive processes.
- Avoid starvation.
- Enforce priorities.
- Degrade gracefully under heavy load.
Several of these goals conflict with each other.

Criteria for comparing scheduling algorithms:
- CPU utilization: keep the CPU as busy as possible.
- Throughput: the number of processes completed per time unit.
- Turnaround time: measured from when a job first enters the system until it is completed. Should be as short as possible; primarily used for batch systems.
- Waiting time: the sum of the time periods a process spends in the ready queue.
- Response time: the time from when an event occurs until the first reaction from the system, for example from when a button is pressed until the character is echoed on the display.
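For a single non-preemptive run the criteria above reduce to simple arithmetic. A minimal sketch, assuming each process runs exactly once between its start and completion (so waiting time and response time coincide; the function name and time unit are my own):

```python
def job_metrics(arrival, start, completion):
    """Criteria for one job that runs without interruption.

    turnaround: arrival until completion.
    waiting:    time spent in the ready queue before the single run.
    response:   arrival until first reaction, which here is the start of execution.
    """
    turnaround = completion - arrival
    waiting = start - arrival
    response = start - arrival   # equal to waiting only in this non-preemptive case
    return turnaround, waiting, response
```

With the FCFS example later in these notes, a job arriving at 0, starting at 24 and finishing at 27 has turnaround 27 and waiting time 24.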

FCFS - First Come First Served

Run the processes in the order they arrive at the ready queue. Non-preemptive scheduling.

Example:
Process  Burst time
P1       24
P2       3
P3       3

Gantt diagram, arrival order P1, P2, P3:
P1 (0-24), P2 (24-27), P3 (27-30)
Average waiting time: (0+24+27)/3 = 17

Different arrival sequence, P3, P2, P1:
P3 (0-3), P2 (3-6), P1 (6-30)
Average waiting time: (6+3+0)/3 = 3

SJF - Shortest Job First

The process with the shortest estimated time to completion runs first. Optimal in the sense that it gives the minimum average waiting time for a given set of processes. The real problem with SJF is how to know the execution time of the processes; in batch systems it is possible to require users to specify the run time. The original version of SJF is non-preemptive.

A preemptive version of SJF is called Shortest-Remaining-Time-First (SRTF). With SRTF, the length of the next CPU burst is associated with each process, and the process with the shortest next CPU burst is scheduled first. The lengths of the CPU bursts cannot be known, but may be predicted as an exponential average of the measured lengths of previous CPU bursts. The method may give rise to starvation.
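The waiting-time figures above can be reproduced with a few lines of code. A small sketch (function names are my own) covering both the FCFS calculation and the exponential-average burst prediction used by SRTF:

```python
def fcfs_waiting(bursts):
    """Average waiting time when processes run in the given order,
    all arriving at t=0: each process waits for all bursts before it."""
    t, waits = 0, []
    for burst in bursts:
        waits.append(t)   # waiting time = start time here
        t += burst
    return sum(waits) / len(waits)

def predict_burst(prev_prediction, measured, alpha=0.5):
    """Exponential average: tau(n+1) = alpha*t(n) + (1-alpha)*tau(n)."""
    return alpha * measured + (1 - alpha) * prev_prediction
```

Running the slide's example: fcfs_waiting([24, 3, 3]) gives 17.0 for arrival order P1, P2, P3, while the SJF order fcfs_waiting([3, 3, 24]) gives 3.0.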

Priority scheduling

A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority. Two methods:
- Static priorities: each process is assigned a fixed priority that is never changed. May cause starvation for low-priority processes.
- Dynamic priorities: priorities are dynamically recalculated by the system. Usually a process that has been waiting has its priority increased; this is called aging.

Round Robin scheduling

A circular queue of runnable processes. Every process may execute until it:
- terminates,
- becomes waiting due to a blocking operation, or
- is interrupted by a clock interrupt.
Then execution continues with the next process in the queue. All processes have the same priority. Every time a process is started it may execute no more than one time quantum.

How long should a time quantum be? With n processes in the ready queue and quantum size q, the maximum response time is (n-1)q. For a very large q, the method degenerates to FCFS. If the time quantum is the same as the time it takes to switch processes, all CPU time is spent on process switching. A rule of thumb is that 80 percent of the CPU bursts should be shorter than the time quantum.

Multilevel Feedback Queues

Several scheduling queues with different priorities are used. When a new process arrives, it is placed in the highest-priority queue. If the process becomes waiting within one time quantum it stays in the same queue; otherwise it is moved down one level. A process that does not use its whole time quantum may be moved up one level. Lower-priority queues have longer time quanta than the higher-priority queues, but processes in these queues only execute when the higher-priority queues are empty. The lowest-priority queue is run round-robin.

Multiprocessor/Multicore Hardware

- Several CPU chips share memory over an external bus; in most cases each CPU has a private high-speed cache.
- Multicore processors have several CPUs on the same chip. Each core has a private high-speed L1 cache, typically an on-chip shared L2 cache, and main memory on an external bus.
- Multithreaded cores: a physical CPU core may have two logical cores; Intel calls this hyperthreading. In this case the L1 cache is shared between the logical cores.
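The queue movement described for multilevel feedback can be sketched as a toy class. The class name, number of levels, and doubling quanta are illustrative assumptions, not a specific system's policy:

```python
class MLFQ:
    """Toy multilevel feedback queue. Level 0 has the highest priority
    and the shortest quantum; quanta double with each lower level."""

    def __init__(self, levels=3, base_quantum=1):
        self.queues = [[] for _ in range(levels)]
        self.quanta = [base_quantum * 2 ** i for i in range(levels)]

    def admit(self, pid):
        self.queues[0].append(pid)   # new processes enter at the top

    def pick(self):
        """Return (pid, quantum, level) from the highest non-empty queue."""
        for level, queue in enumerate(self.queues):
            if queue:
                return queue.pop(0), self.quanta[level], level
        return None

    def used_full_quantum(self, pid, level):
        """Using the whole quantum demotes the process one level."""
        self.queues[min(level + 1, len(self.queues) - 1)].append(pid)

    def blocked_early(self, pid, level):
        """Blocking before the quantum expires may promote it one level."""
        self.queues[max(level - 1, 0)].append(pid)
```

A CPU-bound process drifts down to the long-quantum levels, while an interactive process that blocks quickly stays near the top.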

Multiprocessor Scheduling

Asymmetric multiprocessing (master-slave architecture): the master runs the operating system and the slaves run user-mode code. Disadvantages: the master can become a performance bottleneck, and failure of the master brings down the entire system.

Symmetric multiprocessing (SMP): the operating system can execute on any processor, and each processor does self-scheduling.

Processor Affinity

Recall that processors share main memory but have local cache memories; recently accessed data populate the caches in order to speed up memory accesses. Processor affinity: most SMP systems try to keep a process running on the same processor, since it is quicker to restart a process on the same processor as last time because the cache may already contain the needed data. Hard affinity: some systems, such as Linux, have a system call to specify that a process shall execute on a specific processor.
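On Linux, hard affinity is exposed through the sched_setaffinity system call, which Python wraps directly. A minimal sketch (Linux-only; these os functions are not available on all platforms):

```python
import os

# Set of CPUs this process is currently allowed to run on.
allowed = os.sched_getaffinity(0)          # 0 means the calling process
print("may run on CPUs:", sorted(allowed))

# Hard affinity: pin the calling process to a single CPU.
os.sched_setaffinity(0, {min(allowed)})
print("now pinned to:", os.sched_getaffinity(0))
```

After the call, the scheduler will only place this process on the chosen CPU, which keeps its cached data warm at the cost of ignoring load on the other processors.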

Assignment of Processes to Processors

Per-processor ready queues: each processor has its own ready queue.
- Processor affinity is kept.
- A processor can be idle while another processor has a backlog, so explicit load balancing is needed.

Global ready queue: all processors share one ready queue.
- The ready queue can become a bottleneck.
- Task migration is not cheap (it is difficult to enforce processor affinity).
- Load balancing is automatic.

Load balancing

On SMP systems the load should preferably be divided equally between the processors. Two methods:
- Push migration: a surveillance task periodically checks the load on each processor and, if needed, moves processes from processors with high load to processors with low load.
- Pull migration: a processor with an empty run queue tries to fetch a process from another processor.
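Both migration styles amount to moving entries between per-processor run queues. A toy sketch using plain lists as run queues (the balancing policy of moving a single task when the imbalance exceeds one is an illustrative simplification):

```python
def push_migrate(run_queues):
    """Periodic surveillance: move one task from the most loaded
    queue to the least loaded one if they differ by more than one task."""
    busiest = max(run_queues, key=len)
    idlest = min(run_queues, key=len)
    if len(busiest) - len(idlest) > 1:
        idlest.append(busiest.pop())

def pull_migrate(my_queue, run_queues):
    """A processor with an empty run queue steals a task from the busiest one."""
    if not my_queue:
        donor = max(run_queues, key=len)
        if donor:
            my_queue.append(donor.pop())
```

A real kernel would also weigh processor affinity before picking a victim task, as the Linux notes below this slide point out.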

Linux Scheduler

With Linux kernel version 2.6, a new O(1) scheduler with improved SMP support was introduced. The ULE scheduler for FreeBSD is built on the same principles as the Linux O(1) scheduler. Goals:
- Adapted for SMP (symmetric multiprocessing).
- O(1) scheduling: the scheduling time is independent of the number of processes in the system.
- Processor affinity on SMP.
- High priority for interactive processes.
- Load balancing on SMP.

Linux - Implementation 1

The scheduler uses 140 priority levels: levels 0-99 are real-time priorities and levels 100-139 are normal priorities. Each processor is scheduled independently and has its own run queue. Each run queue has two arrays, active and expired, that point to the scheduling lists for each of the 140 priorities. There is no further priority subdivision within the scheduling lists: all processes at the same priority level in the active array are executed in round-robin order. A process stays in its list until it blocks or its time quantum runs out. When the time quantum runs out, the process is moved to the expired array with a recalculated priority. Real-time processes are given a static priority that cannot be changed; normal processes are given a priority based on their nice value and degree of interactivity. Processes with high priority are allocated longer time quanta than lower-priority processes. Processes are scheduled in priority order from the active array until it becomes empty; then the expired and active arrays change places.
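The active/expired two-array design is what makes the scheduler O(1) per decision (apart from finding the first non-empty level, which the kernel does with a fixed-size bitmap). A toy model of one run queue, with invented method names:

```python
class O1RunQueue:
    """Toy model of the O(1) scheduler's per-processor run queue:
    two arrays of 140 priority lists, swapped when the active one drains."""
    LEVELS = 140   # 0-99 real time, 100-139 normal

    def __init__(self):
        self.active = [[] for _ in range(self.LEVELS)]
        self.expired = [[] for _ in range(self.LEVELS)]

    def enqueue(self, task, prio):
        self.active[prio].append(task)

    def quantum_expired(self, task, new_prio):
        """Quantum used up: move to the expired array, priority recalculated."""
        self.expired[new_prio].append(task)

    def pick_next(self):
        """Highest-priority runnable task; round robin within a level."""
        for queue in self.active:
            if queue:
                return queue.pop(0)
        # Active array empty: swap the arrays in O(1) and try again.
        self.active, self.expired = self.expired, self.active
        for queue in self.active:
            if queue:
                return queue.pop(0)
        return None
```

Swapping two references replaces what would otherwise be an O(n) pass moving every expired task back into the active array.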

Linux - Implementation 2

A process's interactivity is based on how much time it spends in the sleep state compared to the running state. Every time a process is awakened, its sleep_avg is increased by the time it has been sleeping; at every clock interrupt, the sleep_avg of the running process is decreased. A process with a high sleep_avg is considered more interactive.

Linux Load balancing

Linux uses both push migration and pull migration. A load-balancing task is executed at an interval of 200 ms, and if a processor has an empty run queue, processes are fetched from another processor. Load balancing may conflict with processor affinity: in order not to disturb the caches too much, Linux avoids moving processes with large amounts of cached data.
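The sleep_avg bookkeeping can be sketched in a few lines. The cap and the interactivity threshold below are illustrative values of my own, not the kernel's actual constants:

```python
MAX_SLEEP_AVG = 1000   # hypothetical cap on the average, in ticks

def on_wakeup(sleep_avg, slept_ticks):
    """Waking up credits the time spent sleeping, up to the cap."""
    return min(sleep_avg + slept_ticks, MAX_SLEEP_AVG)

def on_tick(sleep_avg):
    """Each clock interrupt while running drains the average."""
    return max(sleep_avg - 1, 0)

def is_interactive(sleep_avg, threshold=600):
    """A high sleep_avg marks the process as interactive (illustrative cutoff)."""
    return sleep_avg >= threshold
```

A process that mostly sleeps (an editor waiting for keystrokes) keeps a high sleep_avg and is rewarded with better priority, while a CPU-bound process drains its average tick by tick.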