Chapter 2: Processes, Threads, and Agents




Process Management

A distributed system is a collection of cooperating processes.

[Layer diagram: applications and services run on top of middleware; on each node (Node 1, Node 2) the middleware sits on the local OS (kernel, libraries & servers, providing processes, threads, communication, ...), which in turn runs on the computer and network hardware. Node OSs and hardware together form the platform.]

Besides communication, the organisation and management of processes has to be considered. Aspects of this are:
- control flows in processes
- organisation of processes
- allocation of processes
- scheduling

Processes

A process is a program that is currently being executed; the execution is strictly sequential. Very often, a large number of concurrently running user processes and system processes exist.

Resources of processes:
- disk storage
- CPU time
- access to I/O resources
- ...

When several processes want to access the same resource, an efficient coordination of these processes is necessary.

Processes can be in several states:

State        Meaning
running      The process's operations are currently being executed
ready        The process is ready to run but waits for a processor to become free
waiting      The process waits for an event (e.g. a resource to finish its task)
blocked      The process waits for a resource used by another process
new          The process has just been created
killed       The process was terminated before ending its task
terminated   The process finished after working through all of its instructions

At any point in time, only one process can be executed on a processor!

Processes

Transitions between process states can be described using state diagrams. Example:

- new --admitted--> ready
- ready --scheduler dispatch--> running
- running --interrupt--> ready
- running --I/O or event wait--> waiting
- waiting --I/O or event completion--> ready
- running --exit--> terminated

The operating system is responsible for:
- creation and destruction of user processes and system processes
- sharing memory and CPU time
- provision of mechanisms for communication and synchronisation between processes
- intervention in case of deadlocks

For the operating system, processes are represented as process control blocks (PCBs). PCBs contain all information relevant for the operating system.

Process Execution

The main memory contains the operating system kernel and (built upon it) a command interpreter (CI).

Multiprogramming vs. Multitasking

A rough division of operating systems can be made by their process execution management:
- Single-tasking: exactly one process is loaded into memory and executed exclusively (partly, the process can even overwrite the CI!).
- Multiprogramming, multitasking: several processes (P1, P2, P3, ...) are in memory simultaneously.

Difference between multiprogramming and multitasking: in multiprogramming, a process cannot be interrupted while it is using the CPU; this can block the whole system when processes need high CPU capacity. In multitasking, a process can receive an interrupt and stop working. Additionally, a time span can be defined which limits the maximum duration of CPU usage for one process (time-sharing). Time-sharing makes interactive usage of the system by several processes possible: a user executing one process seems to use the CPU exclusively.
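The state diagram above can be captured in code. A small illustrative sketch (not part of the original slides) encodes the five basic states and their legal transitions in a Java enum:

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch: the process states from the diagram, with their legal transitions.
enum ProcessState {
    NEW, READY, RUNNING, WAITING, TERMINATED;

    // Successor states, following the state diagram above.
    Set<ProcessState> successors() {
        switch (this) {
            case NEW:     return EnumSet.of(READY);                       // admitted
            case READY:   return EnumSet.of(RUNNING);                     // scheduler dispatch
            case RUNNING: return EnumSet.of(READY, WAITING, TERMINATED);  // interrupt, I/O wait, exit
            case WAITING: return EnumSet.of(READY);                       // I/O or event completion
            default:      return EnumSet.noneOf(ProcessState.class);     // terminated: no successors
        }
    }

    boolean canMoveTo(ProcessState next) {
        return successors().contains(next);
    }
}
```

Note, for instance, that a new process cannot be dispatched directly: it must first be admitted to the ready queue.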

Scheduling

Scheduling is the management of access to resources, especially the CPU, by several processes. Scheduling takes place in different temporal dimensions:
- The long-term scheduler (or job scheduler) chooses the processes to be loaded into memory.
- The short-term scheduler (or CPU scheduler) decides when already loaded processes are allowed to use the CPU.

The scheduler manages several queues:
- The job queue contains all processes in the system.
- The ready queue contains all processes loaded into main memory which are in the state ready.
- The device queues contain all processes currently waiting for an I/O device.

Processes migrate between these queues:

[Queueing diagram: a new process enters the ready queue and is dispatched from there to the CPU. From the CPU a process may terminate, re-enter the ready queue when its time slice expires, enter an I/O queue on an I/O request, fork a child and wait until the child executes, or wait for an interrupt and re-enter the ready queue when the interrupt occurs.]

Processes and Threads

In a traditional operating system, each process has its own address space and its own control flow. A scheduler allocates the operating system's resources to the running processes.

In distributed systems, a single control flow is no longer acceptable:
- Several clients may wish to access a server process simultaneously.
- A speedup can be obtained by executing process operations and communication operations in parallel.

For these reasons, several control flows inside a single process are needed which can share the process's resources and work concurrently: threads.

Threads

The execution environment of a process comprises its address space, synchronisation and communication between threads, and resources such as files and windows, which can only be accessed from inside the execution environment. When needed, threads can be created and destroyed dynamically. Threads can be seen as small processes, and are sometimes called lightweight processes.
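The short-term scheduler's round-robin behaviour (dispatch from the ready queue, re-queue when the time slice expires) can be sketched as a small simulation. This is an illustrative example, not code from the slides; process names and demands are made up:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch: a short-term scheduler serving the ready queue round-robin
// with a fixed time slice, recording the dispatch order.
class RoundRobin {
    // names.get(i) needs demand.get(i) time units of CPU in total.
    static List<String> schedule(List<String> names, List<Integer> demand, int slice) {
        Deque<int[]> ready = new ArrayDeque<>();            // the ready queue
        for (int i = 0; i < names.size(); i++) {
            ready.add(new int[]{i, demand.get(i)});         // new process admitted
        }
        List<String> order = new ArrayList<>();             // dispatch order
        while (!ready.isEmpty()) {
            int[] p = ready.poll();                         // scheduler dispatch
            order.add(names.get(p[0]));
            p[1] -= slice;                                  // time slice expires ...
            if (p[1] > 0) ready.add(p);                     // ... back to the ready queue
        }                                                   // otherwise: terminated
        return order;
    }
}
```

With processes A (3 units) and B (2 units) and a slice of 2, the dispatch order is A, B, A: A is pre-empted once, B finishes within its first slice.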

Threads

Each process has its own program counter, stack, registers, and address space. It works independently from other processes but is able to communicate with them. Each thread likewise has its own program counter, stack, and registers, but there is only one address space for all threads belonging to a process. Thus, all threads can share global variables.

A process can create several threads (via primitives such as fork(...), task_create(...), join(...)) which work together and do not disturb each other's work.

States of threads:
- running: the thread is assigned to a processor
- blocked: e.g. w.r.t. a semaphore
- ready: the thread waits for a processor to be assigned to it
- terminated: all operations are finished; the thread waits to be destroyed

Execution of threads: a scheduler assigns CPU capacity by a given scheme to make all processes work 'simultaneously'. Inside these processes, threads are executed using FIFO or round robin (time-sharing).

Thread Management

Name          Description
self()        Returns the ID of the current thread
create(name)  Creates a new thread
exit()        Finishes the current thread
join()        Waits for another thread to finish; possibly returns a result
detach()      Detaches a thread from the other ones, i.e. no other thread will wait for results of this thread
cancel()      Asynchronously finishes a thread's work
masking       A thread can define areas in which a cancel() is delayed (masked)
prevention    A thread can define a cleanup handler which is executed in case of an unmasked cancel(), e.g. for garbage collection

Why Threads?

+ Speedup by working on several process sub-tasks in parallel
+ Threads can be created more easily than new processes
+ A context switch between threads is cheaper than the operating system's context switch between processes
+ Threads inside a process share data and resources

... but:
- The division of a task into threads can be difficult or even impossible.
- Threads have to be protected against simultaneous access to process resources; therefore, coordination mechanisms are needed.
- Additional communication overhead and concurrent programming using monitors, semaphores, ...
- Decreasing efficiency when a task is divided into sub-tasks inconveniently.
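The coordination problem mentioned above can be made concrete with a short sketch (not from the slides): two threads increment a shared counter, and a monitor (Java's synchronized) protects the shared data against simultaneous access. Without it, updates could be lost.

```java
// A monitor-protected counter shared by the threads of one process.
class SharedCounter {
    private int value = 0;

    synchronized void increment() { value++; }   // monitor: one thread at a time

    synchronized int get() { return value; }
}

class CounterDemo {
    static int run() {
        SharedCounter c = new SharedCounter();
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start();                   // both threads share the process's data
        t2.start();
        try {
            t1.join();                // wait for both threads to finish
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.get();               // 20000, thanks to the monitor
    }
}
```

With increment() left unsynchronised, the two read-modify-write sequences could interleave and the final value would often be below 20000.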

Thread Implementation

There are two implementation possibilities:
- User-level threads: creating and destroying threads as well as context switching is cheap, but when one thread blocks, the whole process is blocked as well.
- Kernel-level threads: thread operations become as costly as process operations.

An efficient method is to combine both thread variants.

Implementation of Threaded Servers

A common implementation model is the dispatcher/worker model.

Threads in Client and Server

On the client side, a dedicated thread can be used for requests: thread 1 generates a result and is not blocked while thread 2 makes requests to the server and waits for the replies; the server queues incoming requests (input/output).

Server: Worker Pool Architecture

The server creates a fixed number of N worker threads. A dispatcher thread is responsible for receiving requests and assigning them to the N worker threads. (Disadvantage: the fixed number of worker threads can sometimes be too small; additionally, many context switches between the dispatcher thread and the worker threads have to be made.)

Server-Threading Architectures

a. Thread-per-request: for each request, a new worker thread is created. Disadvantage: additional overhead for the creation and destruction of threads.
b. Thread-per-connection: for each connection of a client, a new thread is created. This decreases the rate of thread creation and destruction. Disadvantage: higher response times on a single connection if many requests are made over that connection.
c. Thread-per-object: for each remote object, a new thread is created; it is responsible for all connections opened to its object. The creation/destruction overhead is further reduced, but response times can go up.
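The worker-pool architecture maps directly onto a fixed thread pool in the JDK. The following is an illustrative sketch under simplified assumptions (requests are just counted, not parsed); it is not the slides' own code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the worker-pool model: a fixed number of worker threads
// serve requests handed over by a dispatcher via a shared queue.
class WorkerPoolServer {
    static int handle(int requests, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers); // N workers
        AtomicInteger served = new AtomicInteger();
        for (int i = 0; i < requests; i++) {
            // The dispatcher assigns each incoming request to the pool's queue.
            pool.submit(() -> { served.incrementAndGet(); });
        }
        pool.shutdown();                         // no new requests accepted
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return served.get();
    }
}
```

The fixed pool illustrates the stated disadvantage as well: if all N workers are busy, further requests simply wait in the queue.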

Distribution Transparency for the Client

Replication transparency on the client's side can be realised using threads: if a request is to be sent to several servers in parallel (robustness), the client can use a proxy that knows all replicas. This proxy can use several threads, one per replica, to contact all servers simultaneously and collect their results for computing a final result for the client.

Example - Threads in Java

Thread(ThreadGroup group, Runnable target, String name): creates a new thread in state SUSPENDED. The thread belongs to the group group and is identified by name. After being started, the thread calls the run() method of the object target.
setPriority(int newPriority), getPriority(): sets/gets the priority of a thread.
run(): the run() method of the target object is executed. If the object has no such method, the thread executes its own (default) run() method (Thread implements Runnable).
start(): the state of the thread is changed from SUSPENDED to RUNNABLE.
sleep(int millisecs): the thread is suspended (state SUSPENDED) for the given time span.
yield(): transition to state READY, calling the scheduler.
destroy(): destroys the thread.
thread.join(int millisecs): the calling thread is blocked for the given time span. If the called thread terminates before the end of this time span, the blocking is cancelled.
thread.interrupt(): the called thread is interrupted in its execution; by this, it can be woken from a blocking call like sleep().
object.wait(long millisecs, int nanosecs): the calling thread is blocked until notify() or notifyAll() is called on object; if the given time span ends first, or if an interrupt() comes in, the calling thread is also woken.
object.notify(), object.notifyAll(): wakes the thread resp. all threads that have called wait() on object.
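The wait()/notifyAll() pair from the table above is easiest to see in a one-slot mailbox. A minimal sketch (class names are made up for illustration): a consumer blocks in wait() until a producer thread deposits a value and notifies.

```java
// One-slot mailbox: take() waits until put() has deposited a value.
class Mailbox {
    private Integer slot = null;

    synchronized void put(int v) {
        slot = v;
        notifyAll();                   // wake all threads waiting on this object
    }

    synchronized int take() throws InterruptedException {
        while (slot == null) {
            wait();                    // block until notify()/notifyAll()
        }
        int v = slot;
        slot = null;                   // empty the slot again
        return v;
    }
}

class MailboxDemo {
    static int run() {
        Mailbox box = new Mailbox();
        Thread producer = new Thread(() -> box.put(42));
        producer.start();
        try {
            return box.take();         // blocks until the producer has put 42
        } catch (InterruptedException e) {
            return -1;
        }
    }
}
```

Note the while loop around wait(): a woken thread must re-check its condition, since notifyAll() wakes every waiter regardless of whether the slot is still filled.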
Combination of Communication and Processes

Processes work on (sub-)tasks; communication services enable the cooperation of processes. But both process execution and communication can be very time-costly:
- a heavily loaded computer needs much time to work on a process
- frequent exchange of (large amounts of) data

Transmission of code:
- A process can be migrated to another host for faster execution.
- If several requests have to be made, each building upon the results of the previous one, the computation and generation of the next request can take place at the target host: no further data exchange over the network is necessary.
- A client can dynamically be enhanced with new functionality.

Dynamic Enhancements

Using a code repository, a client can dynamically be enhanced with interfaces to new servers. This enables the implementation of 'lightweight' clients without the need to determine all used services before runtime.

Models for Code Migration

- Transmission of code and initial state, e.g. sending a search client, Java applets.
- Additional transmission of the execution state.

Problem in heterogeneous systems: how can a process be correctly reconstructed on the target platform?

Migration in Heterogeneous Systems

Idea: restrict migration to fixed points in time of the program execution, namely only when a sub-routine is called.
- The system manages a so-called migration stack in a machine-independent format as a copy of the normal program stack.
- The migration stack is updated whenever a sub-routine is called or ends.
- In case of a migration, the migration stack and the global program data are transmitted.
- The target host receives the migration stack, but has to get the routine code for its platform from a code repository.

A better solution is to use a virtual machine: Java, scripting languages.

Mobile Agents

An agent is a piece of software which works on a task autonomously, on behalf of a user, and together with other agents. Not every migrating process is a (mobile) agent:
- Process migration: the operating system decides on the migration of processes; the details are hidden from the application.
- Agent migration: the agent decides its migration activities itself, depending on its internal state. It is able to initiate its own migration.

Not every migrating process is directly called an agent; in the following, only agents are discussed.

Agent Migration

Strong agent migration:
- Both the data state and the execution state are transmitted.
- The whole transmission process (capturing the state, marshalling, transmission, and unmarshalling) is transparent.
- Slow and costly because of possibly large states.

Weak agent migration:
- Only the data state is transmitted.
- A dedicated start method determines from the data state where to continue the agent's execution.
- Faster, but more complicated to program.

Remote access in distributed systems - what the communication mechanisms need to transmit:

Remote Procedure Call      control
Remote Method Invocation   control + code
Weak agent migration       control + code + data
Strong agent migration     control + code + data + state

Mobile Agents

[Diagram: an RPC-based stationary agent on Host 1 communicates with Host 2 over the network (RPC traffic). A mobile agent (MA) instead migrates from Host 1 to Host 2 (migration traffic), is deleted on Host 1, and then uses local communication on Host 2.]

Steps for migration:
1. Serialisation of code
2. Transmission of code (and state)
3. De-serialisation of code

Sometimes additional steps are needed: class loading on host 2; deletion of the agent from host 1 to prevent agent copies; preventing agent loss.

Agent Example: D'Agents

D'Agents supports writing agents in different languages: Java, Tcl, Scheme. The communication and migration overhead is not as low as possible; D'Agents supports strong mobility.

With strong mobility, when an agent executes a jump (e.g. "searched = searched + 1; jump B; meet with search-engine;"), the agent servers on machine A and machine B perform:
1. Capture the agent state (on machine A)
2. Sign/encrypt the state
3. Send the state to B
4. Authenticate (on machine B)
5. Restore the state
6. Resume the agent

Migration in D'Agents Scripting Languages

Simple example with weak mobility (creating a new agent): a Tcl agent in D'Agents transmits a script to a remote host. The agent carries procedures, variables, and the script to execute:

    proc factorial n {
        if {$n <= 1} { return 1 }               ;# fac(1) = 1
        expr $n * [factorial [expr $n - 1]]     ;# fac(n) = n * fac(n-1)
    }
    set number          ;# tells which factorial to compute
    set machine         ;# identify the target machine
    agent_submit $machine -procs factorial -vars number -script {factorial $number}
    agent_receive       ;# receive the results (left unspecified for simplicity)
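The serialisation/de-serialisation steps of migration can be sketched with Java's own object serialisation. This is an illustrative example, not D'Agents code; the class name and fields are made up, and the "network transmission" is just a byte array:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Weak-migration sketch: only the data state travels; the resume() method
// plays the role of the start method that continues the agent's work.
class TinyAgent implements Serializable {
    int searched;                        // data state that travels with the agent

    TinyAgent(int searched) { this.searched = searched; }

    int resume() { return ++searched; }  // continue working after arrival

    static byte[] serialise(TinyAgent a) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(a);          // capture the agent's data state
        }
        return bytes.toByteArray();      // ... transmitted over the network ...
    }

    static TinyAgent deserialise(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (TinyAgent) in.readObject();  // restore the state on the target host
        }
    }

    // Round trip: serialise, "transmit", de-serialise, resume on the target.
    static int roundTrip(int n) {
        try {
            return deserialise(serialise(new TinyAgent(n))).resume();
        } catch (IOException | ClassNotFoundException e) {
            return -1;
        }
    }
}
```

The execution state (stack, program counter) is deliberately not captured here, which is exactly what distinguishes weak from strong migration.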

Migration in D'Agents Scripting Languages

Simple example with strong mobility: the agent migrates to several hosts.

    proc all_users machines {
        set list ""                 ;# create an initially empty list
        foreach m $machines {       ;# consider all hosts in the set of given machines
            agent_jump $m           ;# jump to each host
            set users [exec who]    ;# execute the who command
            append list $users      ;# append the results to the list
        }
        return $list                ;# return the complete list when done
    }
    set machines        ;# initialize the set of machines to jump to
    set this_machine    ;# set to the host that starts the agent
    # Create a migrating agent by submitting the script to this machine,
    # from where it will jump to all the others in $machines.
    agent_submit $this_machine -procs all_users -vars machines -script {all_users $machines}
    agent_receive       ;# receive the results (left unspecified for simplicity)

Implementation

For migration to and execution on a remote host, an agent needs a special execution environment, the agent platform:
1. Transmission by e-mail or TCP/IP.
2. The server manages agents and their communication as well as authentication services.
3. A Common Agent RTS (run-time system) provides basic services for agent support: start, termination, migration, ...
4. Several language interpreters allow agents to be written in several languages.

Layers 2 to 4 enable the execution of agents in a heterogeneous environment by separating them from the underlying platform.

Mobile Agents in Java (Voyager)

Many agent platforms can execute agents written in Java, because Java's core concept is platform independence through a virtual machine. An example is the agent platform Voyager, which provides Java classes for the management and migration of agents. As in D'Agents, mobility can easily be embedded in the agents.
Interface definition for an agent, IMobileAgentMini.java:

    public interface IMobileAgentMini {
        void start();
        void goal();
    }

MobileAgentMini.java:

    // Loading of classes for management and mobility support of agents
    import java.io.*;
    import com.objectspace.voyager.*;
    import com.objectspace.voyager.agent.*;

    // Serializable: serialisability for code transmission
    public class MobileAgentMini implements IMobileAgentMini, Serializable {
        public void start() {
            try {
                // Migration by specifying the target host and port
                // as well as the method to execute after arrival
                Agent.of(this).moveTo("//hostname:8000", "goal()");
            } catch (Exception exception) {
                System.err.println(exception);
                System.exit(0);
            }
        }

        public void goal() {
            System.err.println("-> done\n");
        }
    }

Mobile Agents in Java (Voyager)

AgentStarter.java:

    import com.objectspace.voyager.*;
    import com.objectspace.voyager.agent.*;

    public class AgentStarter {
        public static void main(String[] argv) {
            try {
                Voyager.startup();    // start of the agent environment
                // Initialisation of the agent (add mobility part)
                IMobileAgentMini mobileAgent = (IMobileAgentMini)
                    Factory.create(MobileAgentMini.class.getName());
                System.err.println("-> start migration");
                mobileAgent.start();
            } catch (Exception exception) {
                System.err.println(exception);
                System.exit(0);
            }
            Voyager.shutdown();
        }
    }

Why Mobile Agents?

[Graph: transferred data in KBytes (0-100) against the number of columns/rows in a table (0-30), comparing a migration-based with an RPC-based solution. Example: searching a remote table for a particular value.]

RMI vs. Agents in a Local Network

                                                    time in ms   one migration equals
migration with class loading                        1430.1       -
  remote CORBA communication (e.g. check status)       0.9       1557
  remote CORBA communication (100 byte)                1.2       1210
  remote CORBA communication (1 KByte)                 3.8        372
  remote CORBA communication (10 KByte)               23.1         62

migration without class loading (status only)         19.38      -
  remote CORBA communication (e.g. status check)       0.9         21
  remote CORBA communication (100 byte)                1.2         16
  remote CORBA communication (1 KByte)                 3.8          5
  remote CORBA communication (10 KByte)               23.1          0.8

Network Load

The slower the network, the more suitable mobile agents are.

[Diagram: an RPC interaction (marshal request, transmit request, unmarshal request, service, marshal reply, transmit reply, unmarshal reply) compared with a mobile-agent interaction (request, serialisation, transfer, de-serialisation, reply). CORBA communicates using RPC resp. RMI, so each request and reply is a data transfer over the network.]

Important in wireless networks: the risk of connection loss is a strong motivation for using mobile agents.

Stationary vs. Migrating Agents

Not every agent needs to be mobile; there is also the concept of stationary agents. The main difference between normal objects and stationary agents is autonomy.

Stationary agents are placed on a host for particular tasks, e.g. a local search service, database services, accounting.

Mobile agents are able to migrate to other hosts, e.g. a distributed search service, conferencing services, mobile communications.
- Advantages: reduction of network load; division of tasks among several agents; better flexibility and location transparency.
- Problems: location of agents; migration management; ...

Agents in Distributed Systems

There is no generally agreed definition of agents. However, agents can be characterised by several properties:

Property       Common to all agents?  Description
Autonomous     Yes                    Can act on its own
Reactive       Yes                    Responds timely to changes in its environment
Proactive      Yes                    Initiates actions that affect its environment
Communicative  Yes                    Can exchange information with users and other agents
Continuous     No                     Has a relatively long lifespan
Mobile         No                     Can migrate from one site to another
Adaptive       No                     Capable of learning

Agent Technology

Because many different agent concepts have been defined, many different agent systems have emerged. It would be advantageous to enable inter-platform communication and migration, but this is only possible with considerable additional effort. As a solution, the Foundation for Intelligent Physical Agents (FIPA, http://www.fipa.org) has developed a general architecture model for agent platforms.

Agent Communication Languages

Mobility was not a central topic in FIPA's work. The main topic was the standardisation of a uniform Agent Communication Language (ACL), which defines basic message types. The transmission of such messages builds upon communication protocols like TCP/IP.

Message purpose   Description                                Message content
INFORM            Inform that a given proposition is true    Proposition
QUERY-IF          Query whether a given proposition is true  Proposition
QUERY-REF         Query for a given object                   Expression
CFP               Ask for a proposal                         Proposal specifics
PROPOSE           Provide a proposal                         Proposal
ACCEPT-PROPOSAL   Tell that a given proposal is accepted     Proposal ID
REJECT-PROPOSAL   Tell that a given proposal is rejected     Proposal ID
REQUEST           Request that an action be performed        Action specification
SUBSCRIBE         Subscribe to an information source         Reference to source
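To make the message structure concrete, the fields of an ACL message can be modelled as a plain Java class. This is a hypothetical sketch mirroring the table above, not the API of any concrete FIPA implementation; the class name and the rendering format are illustrative only:

```java
// Sketch: an ACL message with a performative (the "message purpose"
// from the table) and the fields used in the example below.
class AclMessage {
    final String performative;   // e.g. INFORM, QUERY-IF, REQUEST, ...
    final String sender, receiver, language, ontology, content;

    AclMessage(String performative, String sender, String receiver,
               String language, String ontology, String content) {
        this.performative = performative;
        this.sender = sender;
        this.receiver = receiver;
        this.language = language;
        this.ontology = ontology;
        this.content = content;
    }

    // Render roughly in the s-expression style used for ACL messages.
    String render() {
        return "(" + performative
             + " :sender " + sender
             + " :receiver " + receiver
             + " :language " + language
             + " :ontology " + ontology
             + " :content \"" + content + "\")";
    }
}
```

An INFORM message from the genealogy example would then render as `(INFORM :sender ... :receiver ... :language Prolog :ontology genealogy :content "...")`.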

Agent Communication Languages

FIPA's ACL allows message exchange with a defined semantics. Example:

Field     Value
Purpose   INFORM
Sender    max@http://fanclub-beatrix.royalty-spotters.nl:7239
Receiver  elke@iiop://royalty-watcher.uk:5623
Language  Prolog
Ontology  genealogy
Content   female(beatrix), parent(beatrix, juliana, bernhard)

In conclusion, there are two kinds of communication between agents: migration (with local method invocations) and message exchange.