First-class User Level Threads




First-class User Level Threads. Based on the paper "First-Class User-Level Threads" by Marsh, Scott, LeBlanc, and Markatos. This is a research paper, not merely an implementation report.

User-level Threads
Threads are managed by a threads library; the kernel is unaware of their presence.
Advantages:
- No kernel modifications needed to support threads
- Efficient: creation/deletion/switches don't need system calls
- Flexibility in scheduling: the library can use different scheduling algorithms, which can be application dependent
Disadvantages:
- Need to avoid blocking system calls [one blocked thread blocks all threads]
- Threads compete with one another for their process's single allocation of CPU time
- Does not take advantage of multiprocessors [no real parallelism]

User-level threads

Kernel-level Threads
The kernel is aware of the presence of threads, but:
- Can it support 1000 user-level threads? That is a large overhead on kernel resources.
- What if a user-level thread doesn't need all the information the kernel maintains per thread?
- The kernel needs to provide general mechanisms, so it is hard to tailor scheduling or synchronization operations to the application.

Lightweight Processes
- Several LWPs per heavy-weight process
- User-level threads package: creates/destroys threads and provides synchronization primitives
- Multithreaded applications create multiple threads and assign threads to LWPs (one-one, many-one, many-many)
- Each LWP, when scheduled, searches for a runnable thread [two-level scheduling]
- Shared thread table: no kernel support needed
- When an LWP's thread blocks on a system call, the LWP switches to kernel mode and the OS context switches to another LWP

LWP Example
Problems that persist with LWPs:
- Lack of coordination between scheduling and synchronization
- Blocking system calls tie up LWPs, which are limited per-process resources
- No cooperation among threads from different packages

Goal of the Paper
- Present a minimal kernel interface that enables the kernel to support user-level threads as if they were kernel-level threads
- Work done in the context of Psyche: an experimental OS that attempts to integrate many models of parallelism that will coexist and cooperate

User-level Threads Are Second Class
- Difficult or impossible to perform scheduling at appropriate times
- Many blocking system calls: e.g. read, write, open, close, but also ioctl, mkdir, rmdir, rename, link, unlink; some systems offer non-blocking alternatives for only some of these
- No coordination between scheduling and synchronization (e.g., mutexes): a preempted thread may be holding a lock for which all other threads will wait
- No conventions for communication and synchronization across thread packages

Approach
- The kernel thread interface is designed for use by user-level libraries
- User-level libraries do almost everything: create, destroy, synchronize, context switch
- Kernel threads implement a virtual processor: the kernel executes system calls and does coarse-grain, preemptive scheduling
- However, the kernel and the user-level library communicate by accessing information maintained by the other (e.g., around blocking calls)

Mechanisms
- Shared kernel/user data
- Software interrupts: maskable, with vectors specified by the user (application)
- A standard interface for schedulers

Shared Kernel/User Structures
- Read-only access to kernel data: no system call needed to determine the current processor number or process id
- User-writable data can be seen by the kernel: no system call needed to set up handlers for kernel events such as timer expiration (valuable because it reduces the overhead of thread context switch); this includes pointers to the software interrupt handler (i.e., the library scheduler) and its parameters
- The kernel and the library share a structure for the current thread: thread state (stack, registers, etc.)
- By default, the kernel and user code write to the user-writable area in mutual exclusion

Software Interrupts
Delivered to the user-level library whenever scheduling may be required:
- Timer expiration (for time slicing!)
- Imminent preemption ("two-minute warning"): the interrupt is signaled when the thread is about to be preempted. Threads can use it to avoid acquiring spin locks when they are about to be descheduled (i.e., the spin lock implementation checks for the warning; otherwise, newly scheduled threads would spend their time slice spinning on a lock held by a descheduled thread). This works because critical sections are usually short. Good or bad?
- Start and finish of blocking calls: all calls effectively become non-blocking, because the library scheduler gets to run before and after; the kernel passes the state of the interrupted thread to the library (PC, stack pointer, etc.)

Standard Interface for User-level Schedulers
(i.e., scheduling routines to block/unblock a thread)
- Exported locations for the functions implementing the interface. Hence, threads from different packages can be blocked and unblocked:
  - without needing to compile the program in a special language for a threads package
  - without needing to link the program with an implementation of the threads package
- The kernel does not call these functions directly; they are there for other libraries to see
- Example: an atomic queue shared between thread packages A and B. When a thread adds data to the queue, it needs to find the routine to unblock the threads waiting on the queue, which may belong to the other package

The paper advocates more communication between kernel and user space. Is that what we want?
General observations:
- Threads should not block in the kernel unbeknownst to the thread scheduler
- Kernel scheduling should be coordinated with user-level scheduling