Linux Process Scheduling Policy




Lecture Overview

- Introduction to Linux process scheduling
  - Policy versus algorithm
- Linux overall process scheduling objectives
  - Timesharing
  - Dynamic priority
  - Favoring I/O-bound processes
- Linux scheduling algorithm
  - Dividing time into epochs
  - Remaining quantum as process priority
  - When scheduling occurs

Operating Systems - May 17, 2001

Linux Process Scheduling Policy

First we examine the Linux scheduling policy.

- A scheduling policy is the set of decisions made regarding scheduling priorities, goals, and objectives.
- A scheduling algorithm is the instructions or code that implements a given scheduling policy.
- Linux has several conflicting objectives:
  - Fast process response time
  - Good throughput for background jobs
  - Avoidance of process starvation
  - etc.

- Linux uses a timesharing technique.
  - Each process is assigned a small quantum, or time slice, during which it is allowed to execute.
  - This relies on hardware timer interrupts and is completely transparent to the processes.
- Linux schedules processes according to a priority ranking; this is a "goodness" ranking.
- Linux uses dynamic priorities, i.e., priorities are adjusted over time to eliminate starvation.
  - Processes that have not received the CPU for a long time have their priorities increased; processes that have received the CPU often have their priorities decreased.

We can classify processes using two schemes:
- CPU-bound versus I/O-bound (covered in previous lectures).
- Interactive versus batch versus real-time (also discussed in previous lectures, so relatively self-explanatory).

These classifications are somewhat independent; e.g., a batch process can be either I/O-bound or CPU-bound.
- Linux recognizes real-time programs and assigns them high priority, but this is only soft real time, as for streaming audio.
- Linux does not recognize batch or interactive processes; instead it implicitly favors I/O-bound processes.

- Linux uses process preemption; a process is preempted when:
  - Its time quantum has expired, or
  - A new process enters the TASK_RUNNING state and its priority is greater than the priority of the currently running process.
- The preempted process is not suspended; it is still in the ready queue, it simply no longer has the CPU.
- Consider a text editor and a compiler:
  - Since the text editor is an interactive program, its dynamic priority is higher than the compiler's.
  - The text editor will block often, since it is waiting for I/O.
  - When the I/O interrupt delivers a key press for the editor, the editor is put on the ready queue and the scheduler is called, since the editor's priority is higher than the compiler's.
  - The editor gets the input and quickly blocks for more I/O.

Determining the length of the quantum:
- It should be neither too long nor too short.
  - If too short, the overhead caused by process switching becomes excessively high.
  - If too long, processes no longer appear to execute concurrently.
- For Linux, long quanta do not necessarily degrade response time for interactive processes, because their dynamic priority remains high and they get the CPU as soon as they need it.
- With long quanta, responsiveness can still degrade when the scheduler does not know whether a process is interactive, such as when a process is newly created.
- The choice for Linux is the longest possible quantum that does not affect responsiveness; this turns out to be about 20 clock ticks, or about 210 milliseconds.

Linux Process Scheduling Algorithm

- The Linux scheduling algorithm is not based on a continuous CPU time axis; instead, it divides CPU time into epochs.
  - An epoch is simply a division or period of time.
- Within a single epoch, every process has a specified time quantum, computed at the beginning of that epoch.
  - This is the maximum CPU time that the process can use during the current epoch.
  - A process only consumes its quantum while executing on the CPU; while the process is waiting for I/O, its quantum is not used.
  - As a result, a process can get the CPU many times in one epoch, until its quantum is fully used.
- An epoch ends when all runnable processes have used up their quantum.
  - Then a new epoch starts and all processes get a new quantum.

When does an epoch end? Important!
- An epoch ends when all processes in the ready queue have used their quantum.
- This does not include processes blocked on some wait queue; those may still have quantum remaining.
- The end of an epoch is concerned only with processes on the ready queue.
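The epoch-end rule above can be sketched as a simple check over the remaining tick counters of the ready queue. This is a minimal illustration; the function and variable names are assumptions, not kernel code:

```c
#include <assert.h>

/* Hedged sketch: an epoch ends only when every *runnable* process has
 * exhausted its quantum. Blocked processes may still hold unused ticks
 * without delaying the end of the epoch. */
int epoch_finished(const int *runnable_counters, int n)
{
    for (int i = 0; i < n; i++)
        if (runnable_counters[i] > 0)
            return 0;   /* some runnable process still has quantum left */
    return 1;           /* all runnable quanta exhausted: start a new epoch */
}
```

A blocked editor with ticks left therefore never prevents a new epoch from starting; only the ready queue matters.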

Calculating process quanta for an epoch:
- Each process is initially assigned a base time quantum; as mentioned previously, it is about 20 clock ticks.
- If a process uses its entire quantum in the current epoch, then in the next epoch it gets the base time quantum again.
- If a process does not use its entire quantum, the unused portion carries over into the next epoch (the unused quantum is not used directly; instead, a bonus is calculated from it).
  - Why? Processes that block often will not use their quantum; this favors I/O-bound processes, because this value is used to calculate priority.
- When forking a new child process, the parent process's remaining quantum is divided in half: half for the parent and half for the child.

Selecting a process to run next:
- The scheduler considers the priority of each process. There are two kinds of priorities:
  - Static priorities - assigned to real-time processes; they range from 1 to 99 and never change.
  - Dynamic priorities - these apply to all other processes; a dynamic priority is the sum of the base time quantum (also called the base priority) and the number of clock ticks left in the current epoch.
- The static priority of a real-time process is always higher than the dynamic priority of any conventional process.
  - Conventional processes execute only when there are no real-time processes to execute.
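The quantum carry-over at an epoch boundary and the fork-time split described above can be sketched as follows. The 2.2-era kernel computes roughly counter/2 + base quantum at each new epoch; the helper names here are illustrative, not the kernel's:

```c
#include <assert.h>

/* Quantum assignment at an epoch boundary: unused ticks carry over as a
 * bonus (roughly counter/2 + base quantum in the 2.2-era kernel). */
int new_quantum(int leftover_ticks, int base_quantum)
{
    return leftover_ticks / 2 + base_quantum;
}

/* At fork(), the parent's remaining quantum is split with the child so
 * the pair cannot gain extra CPU time simply by forking. */
void split_quantum(int *parent_counter, int *child_counter)
{
    *child_counter = *parent_counter / 2;
    *parent_counter -= *child_counter;
}
```

Note that a CPU-bound process (leftover 0) simply gets the base quantum back, while an I/O-bound process that barely ran accumulates a bonus, raising its dynamic priority.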

Scheduling data in the process descriptor:
- The process descriptor (task_struct in Linux) holds essentially all of the information for a process, including its scheduling information.
- Recall that Linux keeps a list of all process task_structs and a list of all ready process task_structs.
- The following fields of the process descriptor are relevant to scheduling.

Each process descriptor (task_struct) contains the following fields:
- need_resched - this flag is checked every time an interrupt handler completes, to decide whether rescheduling is necessary.
- policy - the scheduling class for the process.
  - For real-time processes this can have the value:
    - SCHED_FIFO - first-in, first-out, with an unlimited time quantum.
    - SCHED_RR - round-robin with a time quantum, for fair CPU usage.
  - For all other processes the value is SCHED_OTHER.
  - For processes that have yielded the CPU, the value is SCHED_YIELD.

Process descriptor fields (cont'd):
- rt_priority - the static priority of a real-time process; not used for other processes.
- priority - the base time quantum (or base priority) of the process.
- counter - the number of CPU ticks left in the process's quantum for the current epoch.
  - This field is updated on every clock tick by update_process_times().
- The priority and counter fields are used for timesharing and dynamic priorities in conventional processes, only for timesharing in SCHED_RR real-time processes, and not at all in SCHED_FIFO real-time processes.

Important! In case you missed it in the last bullet: the priority and counter fields are used for calculating the dynamic priority of conventional processes.
- These two fields are added together to get the current dynamic priority of a process when searching for a process to schedule.
- These two fields are also used when assigning new quanta at the end of an epoch; if a process has not used its quantum, it is probably an I/O-bound process and gets a bonus added to its quantum for the next epoch.
- This raises the priority of interactive processes over time.
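The scheduling-related fields just listed can be collected into a minimal sketch. The field names follow the task_struct fields described above, but the struct itself is a simplification, not the real kernel declaration:

```c
#include <assert.h>

/* Simplified sketch of the scheduling-related task_struct fields. */
struct task_sketch {
    int need_resched;   /* checked when an interrupt handler completes */
    int policy;         /* SCHED_FIFO, SCHED_RR, SCHED_OTHER (+ SCHED_YIELD) */
    int rt_priority;    /* static priority; real-time processes only */
    int priority;       /* base time quantum, a.k.a. base priority */
    int counter;        /* clock ticks left in the current epoch */
};

/* Dynamic priority of a conventional process: priority + counter. */
int dynamic_priority(const struct task_sketch *t)
{
    return t->priority + t->counter;
}
```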

Scheduling actually occurs in schedule():
- Its objective is to find a process in the ready queue and assign the CPU to it.
- It is invoked in two ways:
  - Direct invocation
  - Lazy invocation

Direct invocation of schedule():
- Occurs when the current process is going to block because it needs to wait for a necessary resource.
- The current process is taken off the ready queue and placed on the appropriate wait queue; its state is changed to TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE.
- Once the needed resource becomes available, the process is immediately woken up and removed from the wait queue.

Lazy invocation of schedule():
- Occurs when:
  - The current process has used up its quantum; this is checked in update_process_times().
  - A process is added to the ready queue and its priority is higher than that of the currently executing process; this check occurs in wake_up_process().
  - A process calls sched_yield().
- Lazy invocation uses the need_resched flag of the process descriptor and causes schedule() to be called later.

Actions performed by schedule():
- First it runs any kernel control paths that have not completed, and other unfinished housekeeping tasks.
  - Remember, the kernel is not preemptive, so it cannot switch to another process if a process is already in the kernel or the kernel is in the middle of doing something else.
- If the current process is SCHED_RR and has used all of its quantum, it is given a new quantum and placed at the end of the ready queue.
- If the process is not SCHED_RR, it is removed from the ready queue.
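The lazy-invocation paths above only mark the need_resched flag; the actual call to schedule() happens later, on return from the interrupt. A hedged sketch, where the function and field names are illustrative stand-ins for the update_process_times() and wake_up_process() checks:

```c
#include <assert.h>

struct lazy_task {
    int counter;        /* ticks left in the current quantum */
    int dyn_priority;   /* current dynamic priority */
    int need_resched;   /* set here, acted on when schedule() runs later */
};

/* Timer-tick path (cf. update_process_times()): on quantum expiry, only
 * the flag is set; schedule() is invoked lazily afterwards. */
void on_timer_tick(struct lazy_task *cur)
{
    if (cur->counter > 0 && --cur->counter == 0)
        cur->need_resched = 1;
}

/* Wakeup path (cf. wake_up_process()): flag a reschedule when the newly
 * runnable task outranks the currently running one. */
void on_wake_up(struct lazy_task *cur, const struct lazy_task *woken)
{
    if (woken->dyn_priority > cur->dyn_priority)
        cur->need_resched = 1;
}
```

This matches the editor/compiler example earlier: a key press wakes the high-priority editor, the running compiler's need_resched is set, and the switch happens on the next return from the interrupt handler.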

Actions performed by schedule() (cont'd):
- It scans the ready queue for the highest-priority process.
  - It calculates each priority using the goodness() function.
- It may find no good processes, namely when all processes on the ready queue have used up their quantum (i.e., all have a zero counter field).
  - In this case it must start a new epoch by assigning a new quantum to all processes, as described on a previous slide (both running and blocked processes; this is what favors I/O-bound processes).
- If a higher-priority process was found, the scheduler performs a process switch.

How good is a runnable process?
- schedule() uses goodness() to determine priority:
  - goodness == -1000: do not select this process
  - goodness == 0: process has exhausted its quantum
  - 0 < goodness < 1000: conventional process with quantum remaining
  - goodness >= 1000: real-time process
- goodness() is essentially equivalent to:

    if (p->policy != SCHED_OTHER)
        return 1000 + p->rt_priority;
    if (p->counter == 0)
        return 0;
    if (p->mm == prev->mm)
        return p->counter + p->priority + 1;
    return p->counter + p->priority;
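The goodness() pseudocode above can be made self-contained as follows. This is a sketch: the struct and the mm_id stand-in for the p->mm address-space pointer are simplifications, not the kernel's types:

```c
#include <assert.h>

#define SCHED_OTHER 0
#define SCHED_FIFO  1
#define SCHED_RR    2

struct g_task {
    int policy;
    int rt_priority;    /* static priority; real-time tasks only */
    int counter;        /* ticks left this epoch */
    int priority;       /* base quantum / base priority */
    int mm_id;          /* stand-in for the p->mm address-space pointer */
};

/* Real-time tasks always score >= 1000; a conventional task scores
 * counter + priority, plus 1 when it shares an address space with the
 * previously running task (the context switch is cheaper then). */
int goodness(const struct g_task *p, const struct g_task *prev)
{
    if (p->policy != SCHED_OTHER)
        return 1000 + p->rt_priority;
    if (p->counter == 0)
        return 0;
    if (p->mm_id == prev->mm_id)
        return p->counter + p->priority + 1;
    return p->counter + p->priority;
}
```

The +1 address-space bonus is a tiny tie-breaker: among equally good conventional processes, it favors one whose page tables are already loaded.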

Linux scheduler issues:
- The scheduler does not scale well as the number of processes grows, because it must recompute dynamic priorities.
  - It tries to minimize this cost by recomputing only at the end of an epoch.
  - Large numbers of runnable processes can slow response time.
- The predefined quantum is too long for high system loads.
- The I/O-bound process boost is not optimal.
  - Some I/O-bound processes are not interactive (e.g., a database search or a network transfer).
- Support for real-time processes is weak.

Review of Lectures So Far

Computer hardware:
- In general, we can think of the CPU as a small, self-contained computer.
  - It has instructions for performing mathematical operations.
  - It has a small amount of storage space (its registers).
- We can feed instructions to the CPU one at a time and use it to perform complex calculations.
  - This is the ultimate in interactive operation; the user does everything.
- It would be better if there were some way to give the CPU many instructions all at once, rather than one at a time.

Computer hardware (cont'd):
- We need to combine the CPU with RAM and a memory bus.
  - The bus connects the CPU to the RAM and allows the CPU to access the contents of memory addresses.
  - Since we are going to load many instructions (i.e., a program) into memory, the CPU must have a special register to keep track of the current instruction: the program counter.
    - The program counter is incremented after each instruction.
    - Some instructions directly set the value of the program counter, like a JUMP or GOTO instruction.
- By adding memory, we must extend the operations the CPU performs; it needs instructions to read from and write to memory.
- We can now use memory for two purposes:
  - Storing instructions (the program code)
  - Storing data
- This still doesn't let us interact with the program, and memory is still fairly expensive for its size.

Computer hardware (cont'd):
- Now we add I/O devices to the communication bus.
  - The CPU communicates with I/O devices via the bus.
  - This allows user interaction with the program (e.g., via a terminal).
  - This also allows more data and bigger programs (e.g., stored on a disk).
- Since the CPU is much faster than the I/O devices, it has three options when performing I/O:
  - It can simply wait (not very efficient).
  - It can poll the device and try to do other work at the same time (complicated to implement and not necessarily timely).
  - It can let the I/O devices notify it when they are done, via interrupts (still a bit complicated, but efficient and timely).
- Up to this point we have described what amounts to a simple but reasonable computer system.
  - It stores programs and data on disks.
  - It executes one program at a time by loading the program's instructions into memory and setting the program counter to the program's first instruction.
  - A program runs to completion and has complete access to the hardware and I/O devices.
- There really isn't much of an operating system here, and no such thing as a process.
- This is good, but much of the time the CPU just sits idle because the program is waiting for I/O.

Providing an operating system:
- We could get a bigger benefit if we could run more than one program at once (i.e., time-sharing).
  - With time-sharing, the CPU can execute another program when the current program blocks for I/O.
  - This introduces the notion of a process (i.e., an executing program).
- Currently we have no way of interrupting the current process and starting a new one; there are two options:
  - Implement all I/O calls to give up the CPU when they might block; this is cooperative multitasking.
  - Add a hardware timer interrupt to the CPU so that processes are automatically interrupted after some amount of time; this is called preemptive multitasking.
- On a uniprocessor system, a process can only make progress when it has the CPU, and only one process can have the CPU at a time.
- How does the OS share the CPU among multiple processes?
  - It preempts the current process (or the current process cooperatively blocks), and the OS chooses another process for the CPU.

Review of Lectures So Far Providing an Operating System (con t) What happens when a process is preempted? The OS must save the CPU registers for the current process since they contain unfinished work; the CPU registers are saved in the process descriptor in RAM The process descriptor keeps track of all process information for a specific process The OS must also save the program counter in the process descriptor so it knows where to resume the current process For the new process, the OS must restore its CPU registers from the saved values in the process descriptor and restore the program counter to the next instruction for the new process Review of Lectures So Far Providing an Operating System (con t) We now have created a multitasking OS Is it a concurrent system? English definition of concurrent Happening at the same time as something else Computer science definition of concurrent Non-sequential Definition of parallel Happening at the same time as something else This is the same as the English meaning of concurrent In computer science something that is parallel is also concurrent (i.e., non-sequential), but something that is concurrent is not necessarily parallel 15