Processes and Threads


Slide 6-1: Processes and Threads (Chapter 6)

Slide 6-2: Announcements

Homework Set #2 due Thursday at 11 am
Program Assignment #1 due Thursday Feb. 10 at 11 am
  the TA will introduce it in recitation Wednesday
Read chapters 6 and 7

Slide 6-3: What is a Process?

A process is a program actively executing from main memory:
  it has a Program Counter (PC) and execution state associated with it
    CPU registers keep state; the OS keeps process state in memory
    it's alive!
  it has an address space associated with it
    a limited set of (virtual) addresses that can be accessed by the executing code

[Figure: program P1's binary in main memory (code, data, heap, stack) and the CPU (Program Counter, registers, ALU) fetching code and data and writing data during execution]

Slide 6-4: How is a Process Structured in Memory?

Run-time memory image
  essentially code, data, stack, and heap
  code and data are loaded from the executable file
  the stack grows downward, the heap grows upward

[Figure: run-time memory layout from address 0 up to the max address: read-only .init, .text, .rodata; read/write .data, .bss; heap; unallocated space; user stack at the top]
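As a rough illustration of this layout (mine, not from the slides), the short C program below prints the address of a function, a global variable, a heap allocation, and a stack variable; on a typical Linux/x86-64 build the code and data addresses are lowest, the heap sits above them, and the stack is near the top of the address space. The cast of a function pointer to void* is a common, though technically non-portable, idiom.

    #include <stdio.h>
    #include <stdlib.h>

    int global_var = 42;               /* lives in the data segment (.data)  */

    void some_function(void) { }       /* lives in the code segment (.text)  */

    int main(void)
    {
        int stack_var = 0;                         /* lives on the stack */
        int *heap_var = malloc(sizeof *heap_var);  /* lives on the heap  */

        printf("code : %p\n", (void *)some_function);
        printf("data : %p\n", (void *)&global_var);
        printf("heap : %p\n", (void *)heap_var);
        printf("stack: %p\n", (void *)&stack_var);

        free(heap_var);
        return 0;
    }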

Slide 6-5: Multiple Processes

[Figure: main memory holding process P1 (code, data, heap, stack), process P2 (code, data, heap, stack), and the OS (code, PCB for P1, PCB for P2, more data, heap, and stack)]

The Process Control Block (PCB) the OS keeps for each process holds:
  process state, e.g. ready, running, or waiting
  accounting info, e.g. process ID
  Program Counter
  CPU registers
  CPU scheduling info, e.g. priority
  memory management info, e.g. base and limit registers, page tables
  I/O status info, e.g. list of open files
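To make the list above concrete, here is a minimal sketch (mine, not from the slides) of a PCB as a C structure; every field name and size is illustrative rather than taken from any real kernel:

    #include <stdint.h>

    enum proc_state { READY, RUNNING, WAITING };

    #define MAX_OPEN_FILES 16

    struct pcb {
        int             pid;                        /* accounting info            */
        enum proc_state state;                      /* ready, running, or waiting */
        uint64_t        pc;                         /* saved Program Counter      */
        uint64_t        regs[16];                   /* saved CPU registers        */
        int             priority;                   /* CPU scheduling info        */
        uint64_t        base, limit;                /* memory management info     */
        int             open_files[MAX_OPEN_FILES]; /* I/O status info            */
    };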

Slide 6-6: Multiple Processes

[Figure: the same main memory layout as the previous slide (P1, P2, and the OS with its PCBs, data, heap, and stack) alongside the CPU (Program Counter, ALU) executing one of the processes]

Slide 6-7: Context Switching

[Figure: numbered control flow among executable memory (processes P1, P2, ..., Pn), the interrupt handler, and the process manager during initialization, interrupts, and context switches]

Each time a process is switched out, its context must be saved, e.g. in the PCB
Each time a process is switched in, its context is restored
This usually requires copying of registers
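As a user-level illustration of saving and restoring execution context (this is not the slides' kernel mechanism), the sketch below uses the POSIX ucontext API: each swapcontext() call copies the current registers and program counter into one context structure and loads them from another, which is essentially what the kernel does with PCBs on a context switch.

    #define _XOPEN_SOURCE 700
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];          /* stack for the second context */

    static void task(void)
    {
        printf("task: running, switching back to main\n");
        swapcontext(&task_ctx, &main_ctx);      /* save task's context, restore main's */
        printf("task: resumed a second time\n");
    }

    int main(void)
    {
        getcontext(&task_ctx);                  /* initialize the second context */
        task_ctx.uc_stack.ss_sp   = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link          = &main_ctx;  /* resume main when task returns */
        makecontext(&task_ctx, task, 0);

        printf("main: switching to task\n");
        swapcontext(&main_ctx, &task_ctx);      /* save main's context, run task */
        printf("main: back in main, switching to task again\n");
        swapcontext(&main_ctx, &task_ctx);
        printf("main: done\n");
        return 0;
    }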

Slide 6-8: Threads

A thread is a logical flow of execution that runs within the context of a process
  it has its own program counter (PC), register state, and stack
  it shares the memory address space with other threads in the same process
    they share the same code, data, and resources (e.g. open files)

Slide 6-9: Threads

Why would you want multithreaded processes?
  reduced context switch overhead
    in Solaris, context switching between processes is 5x slower than switching between threads
  shared resources => less memory consumption => more threads can be supported,
    especially for a scalable system, e.g. a Web server that must handle thousands of connections
  inter-thread communication is easier and faster than inter-process communication
A thread is also called a lightweight process

Slide 6-10: Threads

[Figure: main memory with process P1's address space (code, data, heap, and three threads, each with its own PC, register state, and stack) and process P2's address space (code, data, heap, one stack)]

Process P1 is multithreaded
Process P2 is single-threaded
The OS is multiprogrammed
If there is preemptive timeslicing, the system is multitasked

Slide 6-11: Processes & Threads

[Figure: a process's address space (program, static data, resources, address space map) containing several threads, each with its own state, map, and stack]

Slide 6-12: Thread-Safe/Reentrant Code

If two threads share and execute the same code, then the code needs to be thread-safe
  the use of global variables is not thread-safe
  the use of static variables is not thread-safe
  the use of local variables is thread-safe
  access to persistent data such as global/static variables must be governed by locking and synchronization mechanisms
Reentrant code is a special case of thread-safe code:
  reentrant code does not have any references to global variables
  thread-safe code protects and synchronizes access to global variables
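As a small illustration (mine, not from the slides), the sketch below makes a shared global counter thread-safe with a pthreads mutex; without the lock, the two threads' read-modify-write updates could interleave and lose increments. Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;      /* shared global: not thread-safe on its own */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* serialize access to the global */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 2000000 with the lock */
        return 0;
    }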

Slide 6-13: User-Space and Kernel Threads

pthreads is a POSIX user-space threading API
  provides an interface to create and delete threads in the same process
  threads synchronize with each other via this package
  no need to involve the OS
  implementations of the pthreads API differ underneath the API
Kernel threads are supported by the OS
  the kernel must be involved in switching threads
  the mapping of user-level threads to kernel threads is usually one-to-one
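A minimal pthreads usage sketch (mine, not from the slides): it creates two threads with pthread_create and waits for them with pthread_join, and each thread prints the address of a stack-local variable, illustrating both the create/join interface and the fact that every thread gets its own stack within the shared address space.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread prints the address of a local variable; the addresses
     * differ because each thread has its own stack. */
    static void *show_stack(void *arg)
    {
        int local = 0;
        printf("thread %ld: local variable at %p\n", (long)arg, (void *)&local);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        for (long i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, show_stack, (void *)i);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        return 0;
    }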

Slide 6-14: Model of Process Execution

[Figure: process state diagram. A new process enters the ready list; the scheduler allocates the CPU to a ready job (running); a running job may be preempted or voluntarily yield (back to the ready list), request a resource (blocked until the resource manager allocates it and returns the job to the ready list), or finish (done).]

Slide 6-15: The Scheduler

[Figure: scheduler structure. A process arriving from other states has its process descriptor placed on the ready list by the enqueuer; the dispatcher selects a descriptor from the ready list, and the context switcher gives the CPU to that process, which becomes the running process.]

Slide 6-16: Invoking the Scheduler

Need a mechanism to call the scheduler
Voluntary call
  the process blocks itself
  it calls the scheduler
Involuntary call
  an external force (interrupt) blocks the process
  it calls the scheduler

Slide 6-17: Voluntary CPU Sharing

    yield(p_i.pc, p_j.pc) {
        memory[p_i.pc] = PC;
        PC = memory[p_j.pc];
    }

p_i can be determined automatically from the processor status registers, so the call becomes:

    yield(*, p_j.pc) {
        memory[p_i.pc] = PC;
        PC = memory[p_j.pc];
    }

Slide 6-18: More on Yield

p_i and p_j can resume one another's execution:

    yield(*, p_j.pc);
    ...
    yield(*, p_i.pc);
    ...
    yield(*, p_j.pc);
    ...

Suppose p_j is the scheduler:

    // p_i yields to the scheduler
    yield(*, p_j.pc);
    // the scheduler chooses p_k
    yield(*, p_k.pc);
    // p_k yields to the scheduler
    yield(*, p_j.pc);
    // the scheduler chooses ...

Slide 6-19: Voluntary Sharing

Every process periodically yields to the scheduler
Relies on correct process behavior
  a process can fail to yield: an infinite loop, either intentional (while(1)) or due to a logical error (while(!done))
    malicious or accidental
  a process can yield too soon: unfairness for the "nice" processes that give up the CPU while others do not
  a process can fail to yield in time: another process urgently needs the CPU to read incoming data flowing into a bounded buffer,
    but doesn't get the CPU in time to prevent the buffer from overflowing and dropping information
Need a mechanism to override the running process

Slide 6-20: Involuntary CPU Sharing

Interval timer
  a device that produces a periodic interrupt
  programmable period

    IntervalTimer() {
        InterruptCount--;
        if (InterruptCount <= 0) {
            InterruptRequest = TRUE;
            InterruptCount = K;
        }
    }

    SetInterval(programmableValue) {
        K = programmableValue;
        InterruptCount = K;
    }

Slide 6-21: Involuntary CPU Sharing (cont)

Interval timer device handler
  keeps an in-memory clock up-to-date (see the Chapter 4 lab exercise)
  invokes the scheduler

    IntervalTimerHandler() {
        Time++;                  // update the clock
        TimeToSchedule--;
        if (TimeToSchedule <= 0) {
            <invoke scheduler>;
            TimeToSchedule = TimeSlice;
        }
    }

Slide 6-22: Contemporary Scheduling

Involuntary CPU sharing: timer interrupts
Time quantum determined by the interval timer
  usually a fixed size for every process using the system
  sometimes called the time slice length

Slide 6-23: Choosing a Process to Run

[Figure: the same scheduler structure as Slide 6-15: ready processes are enqueued on the ready list, dispatched, context-switched onto the CPU, and run]

The mechanism never changes
The strategy = the policy the dispatcher uses to select a process from the ready list
Different policies for different requirements

Slide 6-24: Policy Considerations

A policy can control/influence:
  CPU utilization
  average time a process waits for service
  average amount of time to complete a job
It could strive for any of:
  equitability
  favoring very short or very long jobs
  meeting priority requirements
  meeting deadlines

Slide 6-25: Optimal Scheduling

Suppose the scheduler knows each process p_i's service time, τ(p_i), or can estimate each τ(p_i):
  the policy can optimize on any criteria, e.g.
    CPU utilization
    waiting time
    deadlines
To find an optimal schedule:
  have a finite, fixed number of p_i
  know τ(p_i) for each p_i
  enumerate all schedules, then choose the best

Slide 6-26: However ...

The τ(p_i) are almost certainly just estimates
A general algorithm to choose an optimal schedule is O(n²)
Other processes may arrive while these processes are being serviced
Usually, the optimal schedule is only a theoretical benchmark; scheduling policies try to approximate an optimal schedule

Slide 6-27: Talking About Scheduling ...

Let P = {p_i | 0 ≤ i < n} = the set of processes
Let S(p_i) ∈ {running, ready, blocked}
Let τ(p_i) = the time process p_i needs to be in the running state (the service time)
Let W(p_i) = the time p_i is in the ready state before its first transition to running (the wait time)
Let T_TRnd(p_i) = the time from when p_i first enters ready to when it last exits ready (the turnaround time)
Batch throughput rate = inverse of the average T_TRnd
Timesharing response time = W(p_i)
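A small worked illustration (the service times are mine, not the slides'): if all processes arrive at time 0 and run back-to-back under a nonpreemptive, first-come order, then each p_i's wait time W(p_i) is the sum of the service times of the processes ahead of it, and its turnaround time is T_TRnd(p_i) = W(p_i) + τ(p_i). The sketch below computes both and the average turnaround time:

    #include <stdio.h>

    /* Illustrative service times tau(p_i), in time units. */
    static const double tau[] = { 350.0, 125.0, 475.0, 250.0, 75.0 };
    static const int n = sizeof tau / sizeof tau[0];

    int main(void)
    {
        double wait = 0.0, total_turnaround = 0.0;

        for (int i = 0; i < n; i++) {
            double turnaround = wait + tau[i];   /* T_TRnd(p_i) = W(p_i) + tau(p_i) */
            printf("p%d: W = %6.1f  T_TRnd = %6.1f\n", i, wait, turnaround);
            total_turnaround += turnaround;
            wait += tau[i];                      /* the next process also waits for this one */
        }
        printf("average T_TRnd = %.1f\n", total_turnaround / n);
        return 0;
    }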

Slide 6-28: Simplified Model

[Figure: the same state diagram as Slide 6-14 (new processes, ready list, scheduler, CPU, resource manager, blocked, done)]

Simplified, but still provides analysis results
  easy to analyze performance
No issue of voluntary/involuntary sharing

Slide 6-29: Estimating CPU Utilization

[Figure: new processes enter the ready list, the scheduler gives them the CPU, and they leave the system when done]

Let λ = the average rate at which processes are placed in the ready list (the arrival rate)
Let µ = the average service rate, so 1/µ = the average τ(p_i)

λ processes p_i arrive at the system per second; each p_i uses 1/µ units of the CPU

Slide 6-30: Estimating CPU Utilization

Let λ = the average rate at which processes are placed in the ready list (the arrival rate)
Let µ = the average service rate, so 1/µ = the average τ(p_i)
Let ρ = the fraction of the time that the CPU is expected to be busy
  ρ = (# of p_i that arrive per unit time) * (average time each spends on the CPU)
  ρ = λ * 1/µ = λ/µ
Notice we must have λ < µ (i.e., ρ < 1)
What if ρ approaches 1?
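For a concrete feel (the numbers are mine, not the slides'): if λ = 8 processes arrive per second and each needs 1/µ = 0.1 seconds of CPU, then ρ = 8 × 0.1 = 0.8, so the CPU is expected to be busy about 80% of the time. The sketch below just evaluates ρ = λ/µ for a few arrival rates approaching µ; as ρ approaches 1, the ready list backlog and wait times grow without bound.

    #include <stdio.h>

    int main(void)
    {
        const double mu = 10.0;    /* service rate: 10 processes/sec, i.e. 1/mu = 0.1 s each */
        const double lambdas[] = { 2.0, 5.0, 8.0, 9.5, 9.9 };

        for (int i = 0; i < 5; i++) {
            double rho = lambdas[i] / mu;   /* expected fraction of time the CPU is busy */
            printf("lambda = %4.1f  rho = %.2f\n", lambdas[i], rho);
        }
        return 0;
    }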

Slide 6-31: Nonpreemptive Schedulers

[Figure: blocked or preempted processes and new processes enter the ready list; the scheduler gives the CPU to one process at a time until it is done]

Try to use the simplified scheduling model
  only consider the running and ready states
  ignore time in the blocked state:
    a "new" process is created when it enters the ready state
    a process is "destroyed" when it enters the blocked state
  really just looking at small phases of a process