
Last Class: Synchronization!

- Wrap-up on CPU scheduling: MLFQ and lottery scheduling
- Synchronization: mutual exclusion, critical sections, example: Too Much Milk, locks
- Synchronization primitives are required to ensure that only one thread executes in a critical section at a time.

Today: Synchronization: Locks and Semaphores!

- More on hardware support for synchronization
- Implementing locks using interrupt disabling, test&set, and busy waiting
- What are semaphores? Semaphores are basically generalized locks. Like locks, semaphores are a special type of variable that supports two atomic operations and offers elegant solutions to synchronization problems. They were invented by Dijkstra in 1965.

Hardware Support for Synchronization!

Implementing high-level primitives requires low-level hardware support. What we have and what we want:

- Concurrent programs
- Low-level atomic operations (hardware): load/store, interrupt disable, test&set
- High-level atomic operations (software): locks, monitors, semaphores, send & receive

Implementing Locks By Disabling Interrupts!

There are two ways the CPU scheduler gets control:
- Internal events: the thread does something to relinquish control (e.g., I/O).
- External events: interrupts (e.g., a time slice expiring) cause the scheduler to take control away from the running thread.

On uniprocessors, we can prevent the scheduler from getting control as follows:
- Internal events: prevent these by not requesting any I/O operations during a critical section.
- External events: prevent these by disabling interrupts (i.e., tell the hardware to delay handling any external events until after the thread is finished with the critical section).

Why not have the OS support Lock.Acquire() and Lock.Release() as system calls?

Implementing Locks by Disabling Interrupts!

For uniprocessors, we can disable interrupts for high-level primitives like locks, whose implementations are private to the kernel. The kernel ensures that interrupts are not disabled forever, just as it already does during interrupt handling.

class Lock {
  public:
    Lock() { value = FREE; }    // lock starts out free (0), queue is empty
    void Acquire(Thread T);
    void Release();
  private:
    int value;
    Queue Q;                    // threads waiting for the lock
};

Lock::Acquire(Thread T) {
  disable interrupts;
  if (value == BUSY) {
    add T to Q;
    T.Sleep();                  // T blocks; it resumes here when it is woken up
  } else {
    value = BUSY;
  }
  enable interrupts;
}

Lock::Release() {
  disable interrupts;
  if (Q is not empty) {
    take a thread T off Q;
    put T on the ready queue;
  } else {
    value = FREE;
  }
  enable interrupts;
}

Wait Queues!

When should Acquire re-enable interrupts when going to sleep?
- Before putting the thread on the wait queue? No: Release could check the queue, find it empty, and not wake up the thread.
- After putting the thread on the wait queue, but before going to sleep? No: Release could put the thread on the ready queue even though it has not yet gone to sleep; when the thread next runs, it goes to sleep anyway and misses the wakeup from Release.

=> We still have a problem with multiprocessors.

Example!

When the sleeping thread wakes up, it returns from Sleep back to Acquire. Interrupts are still disabled, so it's OK to check the lock value and, if it is free, grab the lock and turn interrupts back on.

Atomic Read-Modify-Write Instructions!

Atomic read-modify-write instructions atomically read a value from memory into a register and write a new value back. They are straightforward to implement on a uniprocessor, simply by adding a new instruction. On a multiprocessor, the processor issuing the instruction must also be able to invalidate any copies of the value that other processors may have in their caches, i.e., the multiprocessor must support some form of cache coherence.

Examples:
- Test&Set (most architectures): read a value, write 1 back to memory.
- Exchange (x86): swap a value between a register and memory.
- Compare&Swap (68000): read a value; if it matches register r1, exchange register r2 with the value in memory.
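These primitives are not tied to any one architecture description; as a rough illustration (not from the original lecture), here is how they map onto C++11 std::atomic operations. The variable and function names are placeholders:

#include <atomic>

// Illustrative mapping of the slide's read-modify-write primitives onto C++ std::atomic.
std::atomic_flag flag = ATOMIC_FLAG_INIT;   // a one-bit lock word, initially clear
std::atomic<int> word{0};

void rmw_examples() {
    // Test&Set: atomically set the flag and return its previous value.
    bool was_set = flag.test_and_set();

    // Exchange: atomically store 1 and return the previous contents of word.
    int old = word.exchange(1);

    // Compare&Swap: if word equals expected, store desired and return true;
    // otherwise load the current value into expected and return false.
    int expected = 0, desired = 1;
    bool swapped = word.compare_exchange_strong(expected, desired);

    (void)was_set; (void)old; (void)swapped;   // results unused in this sketch
}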

Implementing Locks with Test&Set!

Test&Set: reads a value, writes 1 to memory, and returns the old value.

class Lock {
  public:
    Lock() { value = 0; }       // lock starts out free
    void Acquire();
    void Release();
  private:
    int value;
};

Acquire() {
  // if busy, do nothing (spin)
  while (test&set(value) == 1)
    ;
}

Release() {
  value = 0;
}

If the lock is free (value = 0), test&set reads 0, sets value to 1, and returns 0. The lock is now busy: the test in the while loop fails, and Acquire is complete.
If the lock is busy (value = 1), test&set reads 1, sets value to 1, and returns 1. The while loop continues to spin until a Release executes.

Busy Waiting!

Acquire() {
  // if busy, do nothing
  while (test&set(value) == 1)
    ;
}

What's wrong with the above implementation?
- What is the CPU doing while the thread waits?
- What could happen to threads with different priorities?
- How can we get the waiting thread to give up the processor, so that the releasing thread can execute? (A user-level sketch follows; the next slide shows how to minimize busy-waiting inside the lock itself.)
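As a side note (not part of the original slides), a user-level spinlock along these lines can be written in C++11 with std::atomic_flag; calling std::this_thread::yield() inside the loop is one simple way for the waiting thread to give up the processor, though it only mitigates busy waiting rather than eliminating it as the guard-and-sleep scheme on the next slide does. SpinLock is an illustrative name:

#include <atomic>
#include <thread>

class SpinLock {
  public:
    void Acquire() {
        // test_and_set atomically sets the flag and returns its old value,
        // so only the thread that saw the flag clear leaves the loop.
        while (flag.test_and_set(std::memory_order_acquire))
            std::this_thread::yield();   // give up the CPU instead of spinning hot
    }
    void Release() {
        flag.clear(std::memory_order_release);
    }
  private:
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
};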

Locks using Test&Set with minimal busy-waiting!

Can we implement locks with test&set without any busy-waiting or disabling of interrupts? No, but we can minimize busy-waiting time by atomically checking the lock value and giving up the CPU if the lock is busy.

class Lock {
  // same declarations as earlier, plus:
  private:
    int guard;      // protects value and Q; held only for a few instructions
};

Acquire(Thread T) {
  while (test&set(guard) == 1)      // busy-wait, but only while guard is held
    ;
  if (value != FREE) {
    put T on Q;
    T->Sleep() and set guard = 0;   // done together, as one atomic step
  } else {
    value = BUSY;
    guard = 0;
  }
}

Release() {
  while (test&set(guard) == 1)      // busy-wait
    ;
  if (Q is not empty) {
    take a thread T off Q;
    put T on the ready queue;
  } else {
    value = FREE;
  }
  guard = 0;
}

Semaphores!

Semaphore: an integer variable that can be updated only by using two special atomic operations.

Binary (or mutex) semaphore: (same as a lock)
- Guarantees mutually exclusive access to a resource (only one process is in the critical section at a time).
- Its value can vary between 0 and 1.
- It is initialized to free (value = 1).

Counting semaphore:
- Useful when multiple units of a resource are available.
- The initial count to which the semaphore is initialized is usually the number of resources.
- A process can acquire access so long as at least one unit of the resource is available. (See the sketch below.)
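C++20 provides this abstraction directly as std::counting_semaphore. The following sketch is not from the lecture; the pool size of 4 and the function names are arbitrary choices used to show a counting semaphore guarding a pool of identical resource units:

#include <semaphore>
#include <thread>
#include <vector>

// Counting semaphore initialized to the number of available resource units.
std::counting_semaphore<4> pool(4);

void use_resource() {
    pool.acquire();      // Wait(): blocks while all 4 units are in use
    // ... use one unit of the resource ...
    pool.release();      // Signal(): return the unit to the pool
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 10; ++i)
        workers.emplace_back(use_resource);
    for (auto& w : workers)
        w.join();
}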

Semaphores: Key Concepts!

Like locks, a semaphore supports two atomic operations, Semaphore.Wait() and Semaphore.Signal().

S.Wait()      // wait until semaphore S is available
<critical section>
S.Signal()    // signal to other processes that semaphore S is free

- Each semaphore has a queue of processes that are waiting to access the critical section (e.g., to buy milk).
- If a process executes S.Wait() and semaphore S is free (non-zero), it continues executing. If semaphore S is not free, the OS puts the process on the wait queue for semaphore S.
- An S.Signal() unblocks one process on semaphore S's wait queue.

Binary Semaphores: Example!

Too Much Milk using locks:

Thread A                Thread B
Lock.Acquire();         Lock.Acquire();
if (nomilk) {           if (nomilk) {
    buy milk;               buy milk;
}                       }
Lock.Release();         Lock.Release();

Too Much Milk using semaphores (see the runnable sketch below):

Thread A                Thread B
Semaphore.Wait();       Semaphore.Wait();
if (nomilk) {           if (nomilk) {
    buy milk;               buy milk;
}                       }
Semaphore.Signal();     Semaphore.Signal();
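A compilable rendering of the semaphore version (not from the slides), using C++20's std::binary_semaphore initialized to 1, i.e., free; the shared milk counter and the function name buyer are illustrative:

#include <semaphore>
#include <thread>

std::binary_semaphore milk_sem(1);   // binary semaphore, initially free
int milk = 0;                        // shared state: number of milk cartons

void buyer() {
    milk_sem.acquire();              // Wait(): enter the critical section
    if (milk == 0)                   // "if (nomilk)"
        milk = 1;                    // "buy milk" -- at most one thread does this
    milk_sem.release();              // Signal(): leave the critical section
}

int main() {
    std::thread a(buyer), b(buyer);
    a.join();
    b.join();
}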

Implementing Signal and Wait!

class Semaphore {
  public:
    Semaphore(int val) { value = val; Q = empty; }   // Q: queue of waiting processes
    void Wait(Process P);
    void Signal();
  private:
    int value;
    Queue Q;
};

Wait(Process P) {
  value = value - 1;
  if (value < 0) {
    add P to Q;
    P->block();
  }
}

Signal() {
  value = value + 1;
  if (value <= 0) {
    remove a process P from Q;
    wakeup(P);
  }
}

=> Signal and Wait, of course, must be atomic!

Signal and Wait: Example!

P1: S.Wait();          P2: S.Wait();
    S.Wait();              S.Signal();
                           S.Signal();
                           S.Signal();

Signal and Wait: Example! (continued)

Using Semaphores!

- Mutual exclusion: used to guard critical sections; the semaphore has an initial value of 1; S->Wait() is called before the critical section, and S->Signal() is called after the critical section.
- Scheduling constraints: used to express general scheduling constraints where threads must wait for some circumstance. The initial value of the semaphore is usually 0 in this case.

Example: you can implement thread join (or the Unix system call waitpid(PID)) with semaphores (a sketch follows):

Semaphore S;
S.value = 0;          // semaphore initialization

Thread.Join:      S.Wait();
Thread.Finish:    S.Signal();
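A runnable version of this idea (not from the slides) can use a C++20 std::binary_semaphore that starts at 0, so the joining thread blocks until the finishing thread signals; the names done and worker are illustrative:

#include <semaphore>
#include <thread>

std::binary_semaphore done(0);   // initial value 0: the "join" blocks until "finish"

void worker() {
    // ... do the thread's work ...
    done.release();              // Thread.Finish: signal completion
}

int main() {
    std::thread t(worker);
    done.acquire();              // Thread.Join: wait until the worker has signaled
    t.join();                    // still join the std::thread to reclaim its resources
}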

Multiple Consumers and Producers!

class BoundedBuffer {
  public:
    void Producer();
    void Consumer();
  private:
    Items buffer;
    Semaphore mutex;     // controls access to the buffer
    Semaphore empty;     // count of free slots
    Semaphore full;      // count of used slots
};

BoundedBuffer::BoundedBuffer(int N) {
  mutex.value = 1;
  empty.value = N;
  full.value = 0;
  new buffer[N];         // allocate a buffer with N slots
}

BoundedBuffer::Producer() {
  <produce item>
  empty.Wait();          // one fewer free slot, or wait
  mutex.Wait();          // get exclusive access to the buffer
  <add item to buffer>
  mutex.Signal();        // release the buffer
  full.Signal();         // one more used slot
}

BoundedBuffer::Consumer() {
  full.Wait();           // wait until there is an item
  mutex.Wait();          // get exclusive access to the buffer
  <remove item from buffer>
  mutex.Signal();        // release the buffer
  empty.Signal();        // one more free slot
  <use item>
}

(A compilable counterpart is sketched below.)

Multiple Consumers and Producers Problem!
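For reference only (not part of the original slides), the bounded buffer above can be written as compilable C++20 using std::counting_semaphore for the empty/full counts and a std::mutex for the buffer itself; the capacity of 8 and the int items are arbitrary choices:

#include <mutex>
#include <queue>
#include <semaphore>
#include <thread>

constexpr int N = 8;                         // number of buffer slots (arbitrary)

std::queue<int> buffer;                      // the shared bounded buffer
std::mutex mtx;                              // guards access to buffer
std::counting_semaphore<N> empty_slots(N);   // count of free slots
std::counting_semaphore<N> full_slots(0);    // count of used slots

void producer(int item) {
    empty_slots.acquire();                   // one fewer free slot, or wait
    {
        std::lock_guard<std::mutex> lock(mtx);
        buffer.push(item);                   // add item to buffer
    }
    full_slots.release();                    // one more used slot
}

int consumer() {
    full_slots.acquire();                    // wait until there is an item
    int item;
    {
        std::lock_guard<std::mutex> lock(mtx);
        item = buffer.front();               // remove item from buffer
        buffer.pop();
    }
    empty_slots.release();                   // one more free slot
    return item;                             // use item outside the critical section
}

int main() {
    std::thread p(producer, 42);
    std::thread c([] { (void)consumer(); });
    p.join();
    c.join();
}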

Summary!

- Locks can be implemented by disabling interrupts or by busy waiting.
- Semaphores are a generalization of locks.
- Semaphores can be used for three purposes:
  - To ensure mutually exclusive execution of a critical section (as locks do).
  - To control access to a shared pool of resources (using a counting semaphore).
  - To cause one thread to wait for a specific action to be signaled from another thread.