CHAPTER 3 LOAD BALANCING MECHANISM USING MOBILE AGENTS




3.1 INTRODUCTION

Load balancing is a mechanism for distributing load effectively among the servers in a distributed environment. The participating computers cooperate so that, in many respects, the system can be viewed as a single computer, and the load balancing operations are transparent to the user. Existing work reveals that effective load balancing requires comprehensive and current load information. In pursuit of this, existing methods utilize excessive network bandwidth for the collection and exchange of load information. Under limited bandwidth, collecting load information therefore contends with dispatching user requests, and the limited availability of bandwidth restricts other useful work. Also, in the existing system, when a mobile agent fails, the reliability of the load balancing mechanism is reduced, thereby increasing the possibility of system overload. A load balancing strategy is proposed in this work which streamlines the bandwidth usage for effective utilization of resources. This

proposed method uses a mobile agent based solution to the problem of load distribution in a dispatcher based client/server environment. The proposed system of Load Distribution by dynamically Fixing input to the server using Mobile agent (LDFM) addresses the problem of excessive utilization of network bandwidth for the collection and exchange of load information. The proposed work also provides a solution for the failure of mobile agents by monitoring their status and creating new agents, thus leading to improved reliability of the load balancing mechanism.

3.2 LOAD DISTRIBUTION BY DYNAMICALLY FIXING INPUT TO THE SERVER USING MOBILE AGENT (LDFM) SYSTEM

The proposed system consists of a set of clients and a network of servers. The LDFM framework considers two worlds, namely the client world and the server world. The client world is an aggregation of all the clients in the physical world, and the server world is an aggregation of the web servers or replicas in a cluster, as shown in Figure 3.1. The data structure for qitem is organized as

    class qitem {
        int load;
        int ipaddr[] = new int[4];
    }
    qitem[] mmu_queue = new qitem[L];

The data structure for ipdata is defined as

    class ipdata {
        char servr_id[] = new char[L];
        int ipaddr[] = new int[4];
        int rank;
    }
    ipdata[] dispatcher_table = new ipdata[L];

In the proposed system, the client world communicates with the server world through the dispatcher. The number of servers in the multi-server environment is configured as a fixed entity L. The system deploys two data structures, namely qitem and ipdata: qitem is used for collecting the load information from the servers, and ipdata for organizing, ranking and distributing the requests to the servers.

Figure 3.1 Client-Server Dispatcher based System

The dispatcher receives the HTTP requests from the client world and sends them to the server world according to the IP addresses available in the dispatcher table. The dispatcher table in the dispatcher is updated by the Mobile Management Unit (MMU). The purpose of the mmu_queue is to receive the server load information together with the IP address of the reporting server. The MMU uses this information to sort the loads of the servers in ascending order, and the sorted result is used to update the dispatcher table. The dispatcher table thus contains the IP addresses of the servers in ascending order of server load: the least loaded server is the first entry in the table and the most heavily loaded server is the last. The load information is proactively collected and used to dynamically update the table. The dispatcher table is shown in Table 3.1.

Table 3.1 Dispatcher table

    Server ID    IP address       Rank
    P            192.168.1.100    1
    Q            192.168.1.101    2
    R            192.168.1.102    3

The dispatcher is a front-end machine having a virtual IP address; it holds the dispatcher table and the Mobile Management Unit (MMU). The server to which the dispatcher sends each HTTP request for processing is identified from the dispatcher table.
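The ranking step performed by the MMU can be illustrated with a minimal Java sketch. The class and method names (ServerLoad, TableEntry, rankServers) are illustrative only and do not appear in this work; the sketch assumes the MMU simply sorts the reported loads in ascending order and assigns ranks 1 to L, as described above.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MmuRanking {
    // Mirrors qitem: a server's reported load and identity.
    static class ServerLoad {
        final String id;
        final String ipAddr;
        final int load;
        ServerLoad(String id, String ipAddr, int load) {
            this.id = id; this.ipAddr = ipAddr; this.load = load;
        }
    }

    // Mirrors ipdata: one row of the dispatcher table.
    static class TableEntry {
        final String serverId;
        final String ipAddr;
        final int rank;
        TableEntry(String serverId, String ipAddr, int rank) {
            this.serverId = serverId; this.ipAddr = ipAddr; this.rank = rank;
        }
    }

    // Sort the reported loads in ascending order and assign ranks 1..L:
    // the least loaded server becomes the first entry (rank 1).
    static List<TableEntry> rankServers(List<ServerLoad> loads) {
        List<ServerLoad> sorted = new ArrayList<>(loads);
        sorted.sort(Comparator.comparingInt(s -> s.load));
        List<TableEntry> table = new ArrayList<>();
        for (int i = 0; i < sorted.size(); i++) {
            ServerLoad s = sorted.get(i);
            table.add(new TableEntry(s.id, s.ipAddr, i + 1));
        }
        return table;
    }

    public static void main(String[] args) {
        // Hypothetical loads for the servers of Table 3.1.
        List<ServerLoad> loads = List.of(
            new ServerLoad("Q", "192.168.1.101", 7),
            new ServerLoad("P", "192.168.1.100", 2),
            new ServerLoad("R", "192.168.1.102", 9));
        for (TableEntry e : rankServers(loads)) {
            System.out.println(e.serverId + " " + e.ipAddr + " rank " + e.rank);
        }
    }
}
```

With the hypothetical loads shown, the lightly loaded server P is placed first with rank 1, matching the ordering of Table 3.1.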

3.2.1 Assigning Request Identification Number

The cyclic property of the mod function is used for assigning the Request Identification Number (RID). Let n be a positive integer and let m be any integer. Then there exist unique integers q and r such that m = qn + r, 0 <= r < n. Here q is called the quotient and r the remainder when m is divided by n, i.e. r = m mod n. When a range of values is assigned to m and the mod operation is carried out, the remainder values cycle within the range 0 <= r < n. For example, with n = 5 and m = 1, 2, 3, ..., 20, the possible values of m mod n are 1, 2, 3, 4 and 0.

When an HTTP request sent by a user arrives at the dispatcher, the dispatcher increments the count value by one. The RID is generated for each request using the mod function, and the HTTP request is associated with its RID. The mod function computes the RID from the count value and the number of servers L available in the distributed system for serving HTTP requests. The RID value is computed as RID = count mod (L+1). The assigned RID values are therefore cyclic in nature.

3.2.2 Processing the Request

The HTTP request received from the user is sent to the dispatcher. The dispatcher associates the request with the RID and sends it to a server, using the rank of the server to distribute the HTTP requests. The HTTP request and the associated RID are received at the server. When the request with its RID is received by the server, the incoming RID is compared with the previous RID in the queue. If the incoming RID is greater than the

previous RID, the current load of the server is communicated to the MMU. Initially the requests are distributed in a round robin fashion, as the ranks of all the servers are assigned the value 1, the highest rank. As the HTTP requests arrive at the server queue, they are processed sequentially on a first come, first served basis as long as the server is not overloaded, and the processing of requests proceeds normally. If the server is overloaded, requests are held up in the queue and their processing is delayed. In order to avoid this delay, it is necessary to detect the overload condition of the server. For this purpose the incoming RID is compared with the previous RID in the queue: under normal operating conditions of the server, the incoming RID is less than the previous RID, but under overload conditions this relation fails, and at that point the mobile agent communicates the server load information to the MMU.

Consider the normal arrival pattern of the HTTP requests at the servers, where the number of servers is assumed to be four, as shown in Figure 3.2. The RIDs have a periodicity equal to the number of servers plus one in the distributed environment. This periodicity triggers the mobile agent to send the server load information on a periodic basis: the mobile agent in the server transfers load information with a period equal to the number of servers in the distributed environment. However, when the servers become overloaded, the periodicity of information transfer by the mobile agent is altered depending upon the update of the dispatcher table. The HTTP request is transferred based on the rank of the server in the table.
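The cyclic RID assignment can be sketched in a few lines of Java. The method name rid is illustrative; the sketch assumes only the formula RID = count mod (L+1) given above.

```java
public class RidDemo {
    // RID = count mod (L + 1): with L servers, the RID cycles through
    // 1, 2, ..., L, 0, 1, 2, ... as the request count grows.
    static int rid(int count, int L) {
        return count % (L + 1);
    }

    public static void main(String[] args) {
        int L = 4; // four servers, as assumed in Figure 3.2
        StringBuilder cycle = new StringBuilder();
        for (int count = 1; count <= 10; count++) {
            cycle.append(rid(count, L)).append(' ');
        }
        System.out.println(cycle.toString().trim()); // prints 1 2 3 4 0 1 2 3 4 0
    }
}
```

The printed sequence shows the periodicity of L + 1 = 5 that triggers the periodic load reports by the mobile agents.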

    Server    RID sequence
    1         1 0 4 3 2 1 0 4 3
    2         2 1 0 4 3 2 1 0 4
    3         3 2 1 0 4 3 2 1 0
    4         4 3 2 1 0 4 3 2 1

Figure 3.2 RID Pattern

3.2.3 Ranking of Servers

The MMU initiates the mobile agent to the server. The mobile agent in the server monitors the load of the server to which it is attached, and communicates the load information to the MMU only when the RID of the incoming request is greater than the RID in the queue. This condition ensures that no two mobile agents communicate with the MMU simultaneously, which avoids unnecessary usage of network bandwidth for communicating load information between the mobile agents and the MMU. The MMU ranks the servers according to the load information received from the mobile agents.

The MMU is responsible for each mobile agent and ensures that it is active and has not crashed. The mobile agent periodically sends an I AM ALIVE (IAA) signal to the MMU. Based on the IAA signal, the MMU checks the status of the mobile agent and creates a new mobile agent when needed. The

MMU sets a timeout timer on receipt of each IAA. When the timeout occurs, a new mobile agent is initiated from the MMU for the server.

3.2.4 Update of Dispatcher Table

Each server in the server world has a mobile agent initiated from the MMU, so that polling of the servers by a single mobile agent is avoided. This reduces the latency in collecting the load information. At any point of time only one mobile agent will be communicating the load information to the MMU. Initially the dispatcher assigns the user requests with RIDs to the servers 1, 2, 3, ..., N in a round robin fashion. This process continues as long as the incoming RID is less than the previous RID. Due to the cyclic property of the mod function, this condition eventually fails, initiating the mobile agent. This arrangement enables staggered communication between the mobile agents and the MMU. The mobile agent collects the load of the server proactively and sends it to the MMU. After receiving the information from the mobile agents, the MMU computes and updates the dispatcher table with the IP addresses and ranks 1, 2, 3, ..., N of the servers, the lightest loaded first. The ranking of the servers in the dispatcher table may differ every time the load information is received, and the dispatcher table is updated dynamically. The table is used by the dispatcher for the allocation of a server for each new HTTP request.

3.2.5 Formal Presentation of the Algorithm

The algorithm is event based. The event can be one of the following:

1. Receiving HTTP request from client at dispatcher

2. Receiving request with RID at server from dispatcher
3. Receiving server load from mobile agent at dispatcher running MMU
4. Receiving I Am Alive (IAA) signal from mobile agent at dispatcher running MMU
5. Receiving server response at dispatcher

A set of instructions corresponding to each event is executed when the event occurs.

1. On receiving HTTP request from client at dispatcher
   count = count + 1
   RID = count mod (L+1)
   send request with RID to server

2. On receiving request with RID at server from dispatcher
   if (incoming RID > previous RID in the queue)
       send load of server using mobile agent to dispatcher running
       Mobile Management Unit (MMU)
   process the request

3. On receiving server load from mobile agent at dispatcher running MMU
   Accept the load of server
   Compute the rank of server
   Update the dispatcher table according to the rank of server

4. On receiving I Am Alive (IAA) signal from mobile agent at dispatcher running MMU
   Set timeout timer

   On timeout at MMU
       create mobile agent
       move mobile agent to corresponding server

5. On receiving server response at dispatcher
   Send server response to corresponding client

3.3 PERFORMANCE ANALYSIS

The performance of the proposed system was analyzed through simulation. The main objective of the simulation was to compare the performance of the proposed system with that of the existing system. Among the several possible parameters, the system throughput, i.e. the total number of requests processed per second, was considered. The simulation was performed with the number of servers (L) set to 30. The requests from the users have uniform execution time. The servers are heterogeneous, having varying CPU capability and memory capacity. The system throughput was measured while gradually increasing the number of user requests. The experiment was repeated a number of times and the average values are used for the evaluation. The system throughput of the servers (the total number of requests processed per second) is plotted against the number of requests from the client world. The results of the proposed system (LDFM) and the existing system (without LDFM) are shown in Figure 3.3. It was observed that the throughput of the proposed system is better than that of the existing system.
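The timeout-based agent recovery of event 4 in the algorithm above can be sketched as a per-server watchdog in Java. The class name AgentWatchdog, the timeout value and the method names are assumptions for illustration, since this work does not fix them: each IAA signal cancels and re-arms the timer, and an expiry stands in for creating and dispatching a replacement mobile agent.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class AgentWatchdog {
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;
    private final String serverId;
    private final long timeoutMs;
    volatile int agentsCreated = 0; // counts replacement agents

    AgentWatchdog(String serverId, long timeoutMs) {
        this.serverId = serverId;
        this.timeoutMs = timeoutMs;
    }

    // Called by the MMU whenever an IAA signal arrives from the agent:
    // cancel any pending expiry and re-arm the timeout timer.
    synchronized void onIaa() {
        if (pending != null) pending.cancel(false);
        pending = timer.schedule(this::recreateAgent, timeoutMs,
                                 TimeUnit.MILLISECONDS);
    }

    // Timeout fired: the agent is presumed failed, so the MMU would
    // create a new mobile agent and move it to the server.
    private synchronized void recreateAgent() {
        agentsCreated++;
        System.out.println("agent for server " + serverId + " recreated");
    }

    void shutdown() { timer.shutdownNow(); }

    public static void main(String[] args) throws InterruptedException {
        AgentWatchdog w = new AgentWatchdog("P", 100);
        w.onIaa();         // IAA received: timer armed
        Thread.sleep(50);
        w.onIaa();         // IAA again: timer re-armed, no recreation yet
        Thread.sleep(300); // no further IAA: timeout fires once
        w.shutdown();
    }
}
```

As long as IAA signals keep arriving within the timeout, no replacement is created; only a silent agent triggers recreation, matching the behavior described in Section 3.2.3.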

[Figure: system throughput versus number of clients, existing vs proposed]

Figure 3.3 Comparison of Throughput

3.3.1 Impact of Mobile Agent Failure

The results also show a performance degradation when mobile agent failure was simulated. When a mobile agent fails, a new mobile agent is created after the timeout. A performance degradation was nevertheless observed, because in the intervening period the failed mobile agent does not communicate the load information and the dispatcher table is not updated. Since the dispatcher table does not hold the current load information of the servers, performance degrades. The same phenomenon was observed in the other works, but because the failure condition was not restored there, the performance remained poor. It was observed that in LDFM the performance degradation was temporary, lasting only until the new agent was restored, whereas in the existing work the degradation was severe as there was no

inbuilt mechanism to restore the mobile agent. The results are shown in Figure 3.4.

[Figure: system throughput versus number of clients, existing vs proposed]

Figure 3.4 Throughput with Mobile Agent Failure

The communication overhead of LDFM and of the existing method were compared. It was observed that the communication overhead of LDFM remains constant, whereas in the existing method it increases with the number of servers. In LDFM only one mobile agent communicates with the MMU at any point of time, and the possibility of two or more communicating at the same instant is remote; this contributes to the low overhead. The increased overhead of the existing work can be attributed to the load collection and update mechanism used: when the servers become overloaded, all the overloaded servers try to communicate their load information at the same time, resulting in the high overhead. The result of the simulation is shown in Figure 3.5.

[Figure: network traffic versus number of servers, without LDFM vs LDFM]

Figure 3.5 Communication Overhead

3.3.2 Response Time

The response time of the existing method is compared with that of the proposed method, and the results are shown in Figure 3.6. It can be observed that the average improvement in the response time of the proposed method is 6%. The modest size of this improvement can be attributed to overheads common to both methods, such as the round trip time and the double IP address rewriting.

[Figure: response time versus load (%), without LDFM vs with LDFM]

Figure 3.6 Response Time

3.4 CONCLUSION

The improvement in performance is attributed to the reduced message complexity of the mobile agents. The reduced message complexity, compared with the other methods, has contributed to processing more user requests. The reduction in message complexity is achieved by using a mobile agent to communicate the load information to the MMU, and the mobile agent communicates only when the RID condition fails. The failure of the condition is brought about by the use of the mod function. This arrangement ensures that the mobile agents always communicate in a sequential manner. In the previous work of Hong et al (2006), the load information was broadcast to all the servers, and Pao & Chen (2006) used an advertisement technique to convey the load information from the backend servers to the dispatcher; in both cases the communication is not regulated. In the present experiment there is no redistribution of job requests from one queue to another. The load information is proactively collected and ranked, and the dispatcher table is updated; the sequence in which the dispatcher distributes user requests to the servers is altered whenever the dispatcher table is updated.

Chapter 4 deals with the challenges faced and the proposed methods to handle load balancing in a peer-to-peer environment.