
Design and Implementation of Distributed Process Execution Environment
Project Report Phase 3
By Bhagyalaxmi Bethala, Hemali Majithia, Shamit Patel

Problem Definition:

In this project, we design and implement a distributed process execution environment (DPEE). The system consists of several sites, each with processes running on it. A site that receives a new task first analyzes its own load and decides whether it can execute the task. If it decides to execute the task, it locates and fetches the code (object) corresponding to the task using the Service Location Protocol and then executes it. If the site is loaded and decides not to execute the new task, it discovers some other site that is willing to execute the task; the new site is selected in such a way that load is shared across the system. The site willing to execute the task then locates the code, fetches it, and executes it, using the load balancing algorithm.

Assumptions:

The following assumptions are made for the system to be implemented:
1) The system is dynamic, i.e., new sites can be added to the system, and no site ever goes down.
2) Initially the sites are idle, i.e., they are not executing any tasks.
3) Two queues are maintained at each site: a ready queue and a wait queue (see the sketch after this list). When a new task arrives at a site, it is put in the wait queue. When the site is ready to execute the task, the task is transferred from the wait queue to the ready queue.
4) The Service Location Protocol is used to locate the code to be executed for a task. The Service Location Protocol is localized at each site.
5) If the system is overloaded, i.e., the load exceeds the threshold limit, new tasks are discarded from the system.
6) FIFO scheduling is used for the queues in the system.
7) A modified version of the stable sender-initiated algorithm is implemented for load balancing in the system.
8) The code of a task is available only at one particular site; multiple copies of the code are not available.
9) A time interval is required between the addition of two new sites.

State Diagrams of the system:

[Context Diagram: New Task -> DPEE -> Execution Complete]
[Level 1 Diagram]
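The queue structure in assumptions 3) and 6) can be illustrated with a small sketch in Java, the implementation language used by the project. The class and field names here are hypothetical, chosen only for illustration:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Hypothetical per-site queue structure: every new task enters the wait
    // queue first and is moved to the ready queue once the site accepts it.
    class SiteQueues {
        final Queue<String> waitQueue  = new ArrayDeque<>();   // newly arrived tasks
        final Queue<String> readyQueue = new ArrayDeque<>();   // tasks accepted for execution

        void newTask(String task) {
            waitQueue.add(task);                // assumption 3: wait queue first
        }

        void acceptNextTask() {
            String task = waitQueue.poll();     // FIFO order (assumption 6)
            if (task != null) {
                readyQueue.add(task);
            }
        }
    }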

[Level 1 State Diagram: a new task is enqueued in the wait queue; the load status is checked; if the status is OK/receiver and the ready queue is not empty, the code is fetched via the Service Location Protocol and the task goes to RUN; otherwise the load balancing algorithm figures out the next site and the task is transferred to that site.]

[Load Balancing Algorithm state diagram: a sender sends a loadrequest and waits for a loadreply; a site receiving a loadrequest checks its load status and, if it is a receiver, sends a loadreply and increments its load; on receiving a loadreply the sender sends a tasktransfer, and the receiver puts the task in its ready queue; if the loadreply times out at every candidate, the task is discarded.]

[Service Location Protocol state diagram: when the node status is OK/receiver and code must be fetched, the site searches for the code locally; if it is not found, the site sends an slprequest and waits for an slpreply giving the location of the code; a site that receives an slprequest searches for the code and sends it to the requestor; once the code is received, the task goes to RUN.]

Design Specifications

Each site maintains its status, i.e., whether it is in a receiver, OK, or sender state, depending on a threshold. The threshold is decided by the ready (process) queue length. In its initial state, every site has no processes in its ready queue. When a new task arrives, it is put in the wait queue and processing proceeds as follows:
(1) If the processor at the site is not busy executing a task, or has just finished executing one, the wait queue is checked. If the wait queue is not empty, the load status is checked and a decision is made whether the site can execute the new task. If it decides to execute the task (status is OK or receiver), the task is transferred to the ready queue. If the site decides not to execute the task, the load-balancing algorithm is executed to find a new site to which the task can be transferred (see the sketch below). Once the destination site is determined and the wait queue is empty, the site executes the next task from the ready queue.
(2) If the processor is busy, the task remains in the wait queue. Once the code is fetched, the task is executed, and on completion of the execution the site proceeds as in (1).
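A minimal sketch in Java of the status rule and of steps (1) and (2) follows. The threshold values and method names are assumptions made for illustration; the report only states that the status (receiver, OK, sender) is derived from the ready-queue length and that an overloaded site hands the task to the load-balancing module.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Hypothetical sketch: a free processor drains the wait queue, keeping
    // tasks locally while the status is OK or receiver and handing them to
    // load balancing once the site is a sender.
    class SiteScheduler {
        enum Status { RECEIVER, OK, SENDER }

        static final int LOW  = 2;   // assumed receiver threshold
        static final int HIGH = 5;   // assumed sender threshold

        final Queue<String> waitQueue  = new ArrayDeque<>();
        final Queue<String> readyQueue = new ArrayDeque<>();

        Status status() {
            int load = readyQueue.size();        // threshold on ready-queue length
            if (load < LOW)  return Status.RECEIVER;
            if (load > HIGH) return Status.SENDER;
            return Status.OK;
        }

        // Step (2): while the processor is busy, a new task only enters the wait queue.
        void newTask(String task) {
            waitQueue.add(task);
        }

        // Step (1): processor idle or finished executing a task.
        void onIdle() {
            while (!waitQueue.isEmpty()) {
                String task = waitQueue.poll();
                if (status() == Status.SENDER) {
                    balanceLoad(task);           // find another site for the task
                } else {
                    readyQueue.add(task);        // OK/receiver: execute locally
                }
            }
            if (!readyQueue.isEmpty()) {
                fetchAndRun(readyQueue.poll());  // SLP fetch, then execute
            }
        }

        // Stubs standing in for the load-balancing and SLP modules described below.
        void balanceLoad(String task) { }
        void fetchAndRun(String task) { }
    }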

Service Location Protocol (SLP)

This module is designed to locate and fetch the code required by a node. Since SLP is assumed to be local to each site, it is invoked just before the execution of a task. The SLP service uses two ports at each site, on which a process listens for messages from other sites: one port receives replies addressed to this site, and the other receives requests from other sites. Whenever the Service Location Protocol is invoked to fetch code, the site sends a broadcast slprequest message and waits for replies from the other sites. The site that has the code for the program responds to the slprequest with an slpreply. Once the code is fetched, the site goes back to listening at the port. A site can also receive slprequest messages from other sites that are trying to fetch code; such messages are handled on the second port. If the site has the requested code, it responds with an slpreply and then goes back to listening for slprequest messages at that port.

Load Balancing

The load-balancing policy used is a sender-initiated algorithm.

Selection policy: Whenever the load of a site becomes greater than the threshold, the site becomes a sender. The load of a site is measured by the length of its ready queue.

Transfer policy: The new task that arrives at a site is the one chosen for transfer.

Location policy: The load-balancing algorithm uses two ports for distributing the overall system load. It works as follows: a process is bound to each of the two ports and keeps listening for new messages. Whenever a site becomes a sender, it invokes the load-balancing algorithm. The sender sends a unicast loadrequest message to a site from its receiver list in an attempt to find a receiver, sets a timeout interval, and waits for a reply. If the polled node considers itself a receiver, it replies with a loadreply and increments its load by one in anticipation of the incoming task. On receiving the loadreply, the sender removes the task from its wait queue and sends a tasktransfer message; on receiving the tasktransfer, the receiver adds the new task to its ready queue. A site that receives a loadrequest responds only if it is a receiver; otherwise it does not reply. The next site from the receiver list is tried until the list is exhausted; when the receiver list becomes empty, sites from the OK list are polled. If none of the sites in the OK list is a receiver, the task is discarded. This implements the modified sender-initiated algorithm (see the sketch below).

Information policy: The information policy is demand driven: only when a site becomes a sender does it try to update its status information.
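The location policy's message exchange can be sketched as follows, assuming UDP datagrams and plain-text message names (loadrequest, loadreply, tasktransfer); the port number, timeout value, and wire format are illustrative assumptions, not the project's actual ones.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketTimeoutException;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    // Hypothetical sender side of the location policy: poll candidates
    // (receiver list first, then OK list) until one accepts the task;
    // if none accepts, the task is discarded, as in the modified algorithm.
    class SenderSide {
        static final int LB_PORT    = 6000;   // assumed load-balancing port
        static final int TIMEOUT_MS = 2000;   // assumed loadreply timeout

        static boolean transfer(String task, List<InetAddress> candidates) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(TIMEOUT_MS);
                byte[] req = "loadrequest".getBytes(StandardCharsets.UTF_8);
                for (InetAddress site : candidates) {
                    socket.send(new DatagramPacket(req, req.length, site, LB_PORT));
                    DatagramPacket reply = new DatagramPacket(new byte[256], 256);
                    try {
                        socket.receive(reply);                   // wait for loadreply
                    } catch (SocketTimeoutException timedOut) {
                        continue;                                // try the next site
                    }
                    String text = new String(reply.getData(), 0, reply.getLength(),
                                             StandardCharsets.UTF_8);
                    if (text.startsWith("loadreply")) {
                        byte[] xfer = ("tasktransfer " + task).getBytes(StandardCharsets.UTF_8);
                        socket.send(new DatagramPacket(xfer, xfer.length, site, LB_PORT));
                        return true;                             // task handed off
                    }
                }
            }
            return false;                                        // no receiver: discard the task
        }
    }

A receiving site would run a matching loop on the load-balancing port, answering a loadrequest with a loadreply (and incrementing its load) only while its own status is receiver.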

Implementation Modules

1) mydpe: This is the starting module of the system. An instance of newdpe is created in it. This module takes no inputs and does not return any value.
2) newdpe: This module initializes a new site in the system and integrates it with the existing sites. It initializes the ready queue, wait queue, directory, load balancing, service location, and addlist (receiver list, sender list, and OK list).
3) dirinit1: This module handles the configuration of the site. This includes getting the port number for each site from the user, creating the directory, and adding processes to the site.
4) addlist: After the configuration of a new site, a broadcast is sent to all the other sites informing them of its port number and IP address. The existing sites respond with their corresponding IP addresses and port numbers, which are added to the receiver list of the new site.
5) Lb1: This module takes care of load balancing in the system, using the modified version of the stable sender-initiated algorithm. It uses three queues: a send queue, a receive queue, and an OK queue. Initially all sites in the system are assumed to be receivers. When a site is overloaded, the receiver queue is checked first; the head of the queue gives the IP address and port number to which the request to transfer the task is sent. If that site has not changed its status by then, it has to execute the task. If the site from the receiver list is also loaded, the algorithm proceeds to check the OK list and then the sender list. If no site is ready to execute the task, the task is discarded.
6) Localsearch1: Two functions are implemented in this module. Localfilesearch returns true if the process to be executed is present at the local site. If the process is not found, a broadcast is sent to all the other sites in the system; the site holding the source file for the process appends the file to its reply, and the local site extracts the file from the packet and stores it in its directory (a sketch follows this list). The search-and-execute function compiles and executes the source code at the local site; after execution, the file is deleted from the local site.
7) Multicast: This module is primarily responsible for multicasting messages and collecting unicast replies.
8) Miscellaneous modules (getuserport, getuserinput, q2, q): getuserport is mainly concerned with reading port numbers as integers; q2 and q are queue implementations adapted to the system.
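The broadcast step of Localsearch1 can be sketched as follows, again with hypothetical port and message names; the real module also carries the source file itself in the reply, which is omitted here.

    import java.io.File;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Hypothetical sketch of Localfilesearch: check the local directory first,
    // and otherwise broadcast an slprequest so the site holding the code can reply.
    class LocalSearch {
        static final int SLP_PORT = 7000;   // assumed SLP request port

        // Returns true if the source file is already in the local directory.
        static boolean localFileSearch(String fileName) {
            return new File(fileName).exists();
        }

        // Broadcast an slprequest naming the file that must be fetched.
        static void sendSlpRequest(String fileName) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setBroadcast(true);
                byte[] req = ("slprequest " + fileName).getBytes(StandardCharsets.UTF_8);
                InetAddress bcast = InetAddress.getByName("255.255.255.255");
                socket.send(new DatagramPacket(req, req.length, bcast, SLP_PORT));
                // The slpreply carrying the file would be received on the reply
                // port and stored in the local directory before compilation.
            }
        }
    }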

Test Modules

a1, a2, a3, test1, test2, test3, test4, test5, test6, test7, test8, test10, test11, test12, test13, test14, test15, test16: These test modules are used as processes to be executed in the system.

Strengths of the System:

As seen from the load-balancing algorithm used, the system tries to minimize network traffic: we try to find a receiver and transfer the task, rather than trying to find the best receiver. This improves the response time of the transferred task while still maintaining the status of the system. Because the information policy is demand driven, a site polls other sites only when it becomes a sender, and then either transfers the task or records the current status of the polled site; this again reduces network traffic compared with broadcasting to all sites whenever a status changes. Another strength of the system is the dynamic configuration of each site with respect to the code available at it: to make the availability of the code dynamic, the user bringing up a site is presented with a set of test programs whose code can be made available at that site. The system does not provide a Graphical User Interface, since that would add overhead.

Drawbacks of the system:

1) Preemptive task transfer is not implemented.
2) Since a LAN was used, and since we transmit the file, compile and execute it, and delete the local copy at the user machine, we have not used serialization.
3) Loss of messages is not handled by the system.
4) No security measures are implemented for the system.
5) During load balancing, the receiver list is checked first for sites to which messages are sent. A better design would be to query the OK list and the sender list from their last elements, unlike the receiver list, which is queried from the head.
6) There should be a time delay between the integration of two new sites into the system.
7) The response time of a task may increase, since the system does not maintain up-to-date status information.

Packages used: java.io, java.lang, java.net, java.util, javax.swing.

Time Line:

We followed this schedule to implement our system:
24th Oct - Phase 1
27th Oct - Pseudo code of each module
28th Oct - Start coding
04th Nov - Service Location Protocol
08th Nov - Load Balancing Algorithm
09th Nov - Integrating SLP and the Load Balancing Algorithm
13th Nov - Design and implement test modules
21st Nov - Update the design and produce the next design document
01st Dec - Testing
07th Dec - Document the entire report
12th Dec - Submit the final report and give the demo