OS: Processes vs. Threads

CIT 595, Spring 2010

Process

- A process is the name given to a program instance that has been loaded into memory and is managed by the operating system.
- The process address space is generally organized into code, data (static/global), heap, and stack segments.
- Every process in execution works with:
  - Registers: PC, working registers
  - Call stack (referenced through the stack pointer)

Process Activity / State Diagram of a Process

As a process executes over time, it is performing one of the following activities or is in one of the following states:

State/Activity     Description
new                Process is being created
running            Instructions are being executed on the processor
waiting/blocked    Process is waiting for some event to occur
ready              Process is waiting to use the processor
terminated         Process has finished execution

Note: state names vary across different operating systems.
Note: In a uniprocessor system, only one process can be in the running state, while many processes can be in the ready and waiting states.
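
The state diagram above can be summarized as a tiny sketch. The enum below is purely illustrative and not any operating system's real data structure; real kernels use their own state names and richer state sets. Typical transitions are noted in the comments.

    // Hypothetical sketch only: the five states from the table above as a Java enum.
    public class ProcessStates {
        enum State { NEW, READY, RUNNING, WAITING, TERMINATED }

        public static void main(String[] args) {
            State s = State.NEW;    // process is being created
            s = State.READY;        // admitted: waiting to use the processor
            s = State.RUNNING;      // dispatched by the scheduler: instructions executing
            s = State.WAITING;      // blocked: waiting for some event (e.g. I/O completion)
            s = State.READY;        // event occurred: back in the ready queue
            s = State.RUNNING;      // dispatched again
            s = State.TERMINATED;   // finished execution
            System.out.println("final state: " + s);
        }
    }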

Process Creation

- A process can create several other processes, known as child or sub-processes. This exploits the system's ability to execute concurrently; e.g., the gcc program invokes separate processes for the compiler, assembler, and linker.
- Each child could get its resources directly from the OS, or the OS can restrict its resources to a subset of the parent's, preventing the system from being overloaded with too many children.
- The new process gets its own space in memory; the parent's and child's address spaces are distinct.
- Because parent and child are isolated, they can communicate only via system calls.

Process Management

The OS maintains a data structure for each process called the Process Control Block (PCB). Information associated with each PCB:
- Process state: e.g. ready, waiting, etc.
- Program counter: address of the next instruction
- CPU registers: PC, working registers, stack pointer, condition codes
- CPU scheduling information: scheduling order and priority
- Memory-management information: page table / segment table pointers
- Accounting information: bookkeeping such as the amount of CPU time used
- I/O status information: list of I/O devices allocated to this process

Ready Queue

- Processes that are resident in main memory and in the ready state are kept in a ready queue.
- A process waits in the ready queue until it is selected to run.
- Unless a process terminates, it will eventually be put back into the ready queue.
(A minimal PCB and ready-queue sketch follows this section.)

Process Scheduling

- The idea behind multitasking (multiprogramming): a process alternates between CPU bursts and I/O bursts, and during an I/O burst the CPU would otherwise sit idle.
- The OS exploits that idleness to keep several tasks progressing; on a uniprocessor system it switches between processes so quickly that it gives an illusion of parallelism.
- The scheduler determines which process should be next in line for the CPU, selecting from among the processes that are ready to execute (more on scheduling algorithms later).
- Similarly, the OS keeps device queues for processes waiting for I/O.
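
As a rough illustration, not real kernel code, the sketch below models a PCB as a small record and the ready queue as a FIFO from which a toy dispatcher picks the next process. All class and field names are invented for this example.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Hypothetical sketch of a Process Control Block and a FIFO ready queue.
    public class ReadyQueueSketch {
        static class PCB {
            int pid;                 // process identifier
            String state;            // e.g. "ready", "running", "waiting"
            long programCounter;     // address of the next instruction
            long[] registers;        // saved CPU registers (incl. stack pointer)
            int priority;            // CPU scheduling information
            long cpuTimeUsed;        // accounting information
            PCB(int pid) { this.pid = pid; this.state = "ready"; }
        }

        public static void main(String[] args) {
            Queue<PCB> readyQueue = new ArrayDeque<>();
            readyQueue.add(new PCB(101));
            readyQueue.add(new PCB(102));
            readyQueue.add(new PCB(103));

            // Toy dispatcher: take the process at the head of the ready queue,
            // "run" it, then (since it has not terminated) put it back at the tail.
            PCB next = readyQueue.poll();
            next.state = "running";
            System.out.println("dispatching pid " + next.pid);
            next.state = "ready";
            readyQueue.add(next);    // back into the ready queue
        }
    }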

Non-Preemptive vs. Preemptive

A process can give up the CPU in two ways.

Non-preemptive: the process voluntarily gives up the CPU
- I/O request: the process is blocked; when the request is ready, the process is put back into the ready queue.
- The process creates a new child/sub-process (more later).
- The process has finished its instructions (process termination): its PCB and assigned resources are de-allocated.

Preemptive: the process is forced to give up the CPU
- It is interrupted because a higher-priority process needs to run.
- Each process has a fixed time slice in which to use the CPU.

Multithreading

- Multithreading: a program appears to do more than one thing at a time.
- The idea of multithreading is the same as multiprogramming, i.e. multitasking, but within a single process; multiprogramming is multitasking across different processes.
- E.g., a word processing program has separate threads: one for displaying graphics, another for reading keystrokes from the user, and another to perform spelling and grammar checking in the background.

Why do Multithreading?

- A process includes many things: an address space (defining all the code and data pages), OS descriptors of allocated resources (e.g., open files), and execution state (PC, SP, registers, etc.).
- Creating a new process is costly because of all of the data structures that must be allocated and initialized.
- Communicating between processes is costly because most communication goes through the OS: Inter-Process Communication (IPC) carries the overhead of system calls and copying data.

Multithreading (contd.)

- Multithreading allows a process to be subdivided into different threads of control.
- A thread is the smallest schedulable unit in multithreading.
- A thread in execution works with: a thread ID, registers (program counter and working register set), and a stack (for procedure call parameters, local variables, etc.).
- A thread shares with the other threads of the process it belongs to: the code section, the data section (static + heap), permissions, and other resources (e.g. files).

(Figure: a process with two threads)
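
A minimal sketch of the word-processor idea above: two threads inside one process, sharing the process's data section. All names here are invented for illustration, and the synchronized blocks anticipate the synchronization discussion later in this lecture.

    // Hypothetical sketch: one thread simulates reading keystrokes while a
    // background thread works over the same shared document buffer.
    public class EditorSketch {
        // Shared data section of the process: both threads see this buffer.
        private static final StringBuilder document = new StringBuilder();

        public static void main(String[] args) throws InterruptedException {
            Thread spellChecker = new Thread(() -> {
                // Background thread: periodically "checks" the shared document.
                for (int i = 0; i < 3; i++) {
                    synchronized (document) {
                        System.out.println("spell check pass over " + document.length() + " chars");
                    }
                    try { Thread.sleep(10); } catch (InterruptedException e) { return; }
                }
            });
            spellChecker.start();

            // Main thread: simulates reading keystrokes from the user.
            for (char ch : "hello".toCharArray()) {
                synchronized (document) { document.append(ch); }
                Thread.sleep(5);
            }
            spellChecker.join();
        }
    }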

Difference between Single-Threaded vs. Multithreaded Process

A process by itself can be viewed as a single thread and is traditionally known as a heavyweight process.

Advantages of Multithreading

- Responsiveness: allows a program to continue running, and to stay responsive to the user, even if parts of it are waiting.
- Resource sharing: threads share the memory and resources of the process to which they belong; all threads run within the same address space.
- Economical: threads can communicate through shared data and thereby eliminate the overhead of system calls.
- Multiprocessor systems: threads allow you to get parallel performance on a shared-memory multiprocessor.

Threads

- Like processes, threads have states: New, Ready, Running, Waiting, and Terminated.
- Like processes, the OS will switch between threads for CPU usage, even though they belong to a single process.
- Like process creation, thread creation is supported by APIs. Java threads may be created by extending the Thread class or by implementing the Runnable interface (a sketch showing both styles follows this section). In C, threads are created using the functions in the <pthread.h> library (the "p" stands for POSIX).

Sharing Address Space

- Sharing an address space requires only one copy of code or data in main memory.
  E.g. 1: two processes share the same library routine (code).
  E.g. 2: a print program produces characters that are consumed by the printer driver (two processes sharing data).
  E.g. 3: threads within a process share the (global) data section.
- As long as the shared data is not being modified, there is no problem. But concurrent accesses that modify the shared data can lead to data inconsistency, e.g. the printer driver consuming data before the print program has produced it.
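
As a concrete illustration of the two creation styles named above, here is a minimal, hypothetical sketch; the class names are invented. In C, the rough analogue would be pthread_create from <pthread.h>.

    // Hypothetical sketch of the two Java thread-creation styles.
    public class ThreadCreationSketch {

        // Style 1: extend the Thread class and override run().
        static class GreeterThread extends Thread {
            @Override
            public void run() {
                System.out.println("hello from a subclass of Thread");
            }
        }

        // Style 2: implement the Runnable interface and hand it to a Thread.
        static class GreeterTask implements Runnable {
            @Override
            public void run() {
                System.out.println("hello from a Runnable");
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new GreeterThread();
            Thread t2 = new Thread(new GreeterTask());
            t1.start();              // both threads share this process's
            t2.start();              // address space and open resources
            t1.join();
            t2.join();
        }
    }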

Threading Example

    class Counter {
        private int c = 0;
        public void increment() { c++; }
        public void decrement() { c--; }
        public int value() { return c; }
    }

Threading Example (contd.)

If a Counter object is referenced from multiple threads, the threads interfere with each other when the two operations (increment and decrement), running in different threads, act on the same data (i.e. c). Interference arises because each operation consists of multiple steps, and the sequences of steps can overlap. Remember that the single expression c++ decomposes into three steps:
1. Retrieve the current value of c.
2. Increment the retrieved value by 1.
3. Store the incremented value back in c.
The same applies to c--.

Threading Example (contd.)

Suppose Thread 1 invokes increment at about the same time Thread 2 invokes decrement; in reality the OS is going to switch between Thread 1 and Thread 2. If the initial value of c is 0, their interleaved actions might follow this sequence:
1. Thread 1: Retrieve c
2. Thread 2: Retrieve c
3. Thread 1: Increment retrieved value; result is 1
4. Thread 2: Decrement retrieved value; result is -1
5. Thread 1: Store result in c; c is now 1
6. Thread 2: Store result in c; c is now -1

Race Condition

- In the previous example, Thread 1's result is lost, overwritten by Thread 2. Many different interleavings can result in different final values of c.
- Race condition: several processes/threads access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.
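
A small driver makes the interleaving above observable. This is a hypothetical sketch that assumes the Counter class above is available in the same package; the final value varies from run to run, which is exactly the symptom of a race condition.

    // Driver sketch: one thread increments and one decrements the same Counter
    // many times. Because c++ and c-- are not atomic, the final value is
    // frequently something other than the expected 0.
    public class RaceDemo {
        public static void main(String[] args) throws InterruptedException {
            Counter counter = new Counter();

            Thread t1 = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) counter.increment();
            });
            Thread t2 = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) counter.decrement();
            });

            t1.start();
            t2.start();
            t1.join();
            t2.join();

            // Expected 0 if the updates were atomic; usually prints a nonzero value.
            System.out.println("final value of c: " + counter.value());
        }
    }

Running it several times typically prints different values, reflecting different interleavings chosen by the scheduler.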

Synchronization

- To avoid race conditions, we need to guarantee atomic execution of a sequence of instructions: the sequence executes without interruption, giving mutual exclusion on shared data.
- Big picture:
  1. Request entry to the critical section (a region of code that may change shared data).
  2. Access the (shared) data in the critical section.
  3. Exit the critical section.

OS Implementing Synchronization

- Done via system calls: block (wait) until you have exclusive access; interrupts are temporarily disabled to carry out the atomic execution.
- Low-level support, e.g. the Test-and-Set (TS) instruction provided in the IBM/370 ISA, gives a lock/unlock mechanism:
  - The lock state is implemented by a memory location: it contains 0 if the lock is unlocked and 1 if the lock is locked.
  - If the value is 0, the lock is closed (set to 1) and the critical section is executed; after finishing the critical section, the lock is opened again.
  - This support is usually for shared-memory multiprocessor systems: a CPU executing such an instruction locks the memory bus to prohibit other CPUs from touching the location at the same time.
  (A user-level sketch of this lock/unlock idea appears at the end of this section.)

Synchronization Mechanics for the Programmer

High-level language constructs that inherently translate to OS system calls. E.g., in Java you can synchronize methods using the synchronized keyword, which:
- Guarantees mutual exclusion, i.e. acquires the intrinsic lock for that method's object and releases it when the method returns.
- Guarantees that changes to the state of the object are visible to all threads.

    public class SynchronizedCounter {
        private int c = 0;
        public synchronized void increment() { c++; }
        public synchronized void decrement() { c--; }
        public synchronized int value() { return c; }
    }
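
With SynchronizedCounter substituted for Counter, the RaceDemo driver sketched earlier prints 0 on every run, because each method holds the object's intrinsic lock while it updates c. The sketch below illustrates the lower-level lock/unlock idea from the Test-and-Set discussion using an atomic compare-and-set in user code. It is only a conceptual illustration with invented names, not the IBM/370 instruction or how the OS implements locking, and real Java code should prefer synchronized or java.util.concurrent locks.

    import java.util.concurrent.atomic.AtomicInteger;

    // User-level sketch of a spinlock built on an atomic test-and-set-style operation.
    public class SpinLockSketch {
        // 0 = unlocked, 1 = locked, as in the memory-location description above.
        private final AtomicInteger lockWord = new AtomicInteger(0);

        void lock() {
            // Atomically test for 0 and set to 1; spin (busy-wait) until it succeeds.
            while (!lockWord.compareAndSet(0, 1)) {
                Thread.onSpinWait();   // hint to the runtime that we are spinning
            }
        }

        void unlock() {
            lockWord.set(0);           // open the lock after the critical section
        }

        private int c = 0;

        void increment() {
            lock();
            try { c++; } finally { unlock(); }   // critical section
        }

        public static void main(String[] args) throws InterruptedException {
            SpinLockSketch s = new SpinLockSketch();
            Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) s.increment(); });
            Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000; i++) s.increment(); });
            t1.start(); t2.start(); t1.join(); t2.join();
            System.out.println("c = " + s.c);    // prints 200000 every run
        }
    }

Busy-waiting like this wastes CPU time, which is why OS-level synchronization blocks the waiting thread via system calls, as described above.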