A Dynamic Load Balancing Algorithm For Web Applications




Computing For Nation Development, February 25-26, 2010, Bharati Vidyapeeth's Institute of Computer Applications and Management, New Delhi

A Dynamic Load Balancing Algorithm For Web Applications

1 Sameena Naaz, 2 Mohamed Meftah Alrayes and 3 M. Afshar Alam
Department of Computer Science, Jamia Hamdard, Hamdard University, New Delhi, India
1 snaaz@jamiahamdard.ac.in, 2 moha872@yahoo.co.uk and 3 alam@jamiahamdard.ac.in

ABSTRACT

Rapid growth of Internet use has resulted in network traffic congestion. Network load balancing is one method of eliminating congestion as well as improving the scalability and availability of Internet server programs such as Web servers, proxy servers, DNS servers and FTP servers. It distributes Web traffic among these servers and also helps increase server performance by regulating the traffic to conform to the service rate. Many algorithms, such as round robin and random allocation, are used for network load balancing. This paper proposes a new dynamic load balancing algorithm concerned with distributing incoming requests between the servers in a fair and dynamic way. LBQ (Load Balancing Queue) is the parameter that decides how many jobs are stored in the network load balancer to be distributed in the next stage, and LBD (Load Distributed) is the parameter that decides how many jobs are distributed among the servers at every stage, so that no server carries more load than another. The performance of each server depends on its capability to serve the requests arriving from the clients.

KEYWORDS

Load balancing, load balancing queue, load distribution, server utilization.

1. INTRODUCTION

As Internet connectivity becomes a commodity, large enterprises increasingly want high levels of performance and reliability for their performance-sensitive traffic (such as voice, multimedia, online financial trading or electronic commerce). New applications are continually being deployed on the Internet, and some, such as peer-to-peer (P2P) file sharing and online gaming, are becoming popular. With the evolution of Internet traffic, both in the number and in the type of applications, a current challenge facing many network administrators is how to make their TCP/IP applications scalable and keep them available to users. In today's marketplace it is very important that web applications, telnet servers and batch file transfers run at full capacity. Accurate classification of Internet traffic is important in many areas, such as network design, network management and network security. One key challenge is adapting to the dynamic nature of Internet traffic: hardware improvements alone will not solve the problem, because transmission speeds are expected to grow faster [1]. Consequently, architectural advances are needed to scale performance to the required speeds. Historically, as traffic increases and applications become more and more complex, system administrators encounter a common bottleneck: a single server simply cannot handle the load. An obvious way to solve this is to use multiple hosts to serve the same content [2]. Communication services such as web server farms, database systems and grid computing clusters routinely employ multi-server systems to provide a range of services to their customers. There are situations where some sites are heavily loaded while others are lightly loaded or idle, which results in poor overall system performance [3].
Therefore an important issue in such systems is determining the server to which an incoming request should be routed in order to optimize a given performance criterion. Load balancing is used in such multi-server systems to keep server utilization as high as possible. Many load balancing mechanisms have been developed, and many approaches for classifying these methods have been introduced. Network load balancing forms the bridge between multiple units handling the same service and the incoming request streams. It offers ease of administration and improved load distribution as well as high availability and failure-recovery options. Load balancing of servers can be implemented in different ways; the algorithms used to distribute the load among the available servers include random allocation, round robin allocation and weighted round robin allocation [4].

2. DESCRIPTION OF ALGORITHM

A new algorithm for network load balancing is proposed in this paper. It is based on distributing the traffic among the servers in a fair way regardless of the network traffic and of how much the servers can serve per unit time. The proposed algorithm checks the traffic, aggregates it, and has the network load balancer distribute the requested jobs between the servers. It is divided into three parts.

A. Traffic Arrival
The processes are the jobs or services which the servers have to serve. Neither the frequency at which the traffic arrives nor the size of the traffic (i.e. the number of requests) is fixed. The incoming traffic is attached to the processes. It is assumed that all the traffic has the same attributes, and so all the processes have the same attributes as well.

B. Distribution of Traffic
All the jobs (i.e. the traffic) are passed to the network load balancer for distribution to the different servers, but not all the jobs are immediately assigned to the servers. In some situations, some jobs are stored in the network load balancer and distributed to the servers later. Two parameters play an important role in the distribution of jobs:

LBQ (Load Balancing Queue): the parameter that decides how many jobs are stored in the network load balancer to be distributed in the next stage. Its value is calculated as:

    LBQ = (LBQ + number of jobs) % number of servers

LBD (Load Distributed): the parameter that decides how many jobs are distributed among the servers at every stage. Its value is calculated as:

    LBD = (LBQ + number of jobs) / number of servers

C. Traffic Served
After the calculation of LBD and LBQ, the amount of traffic given by LBD is distributed among the servers. Each server serves the requested traffic according to the number of jobs it can serve per unit of time; after this time, the remaining jobs are sent back to the network load balancer and added to the LBQ. At the end of the serve, the number of stages is incremented by 1 and the serve count for each server is incremented by 1. The utilization is calculated as:

    Utilization of server = number of serves / number of stages

3. STEPS OF THE ALGORITHM

The different steps involved in the algorithm are:

Step 1: Enter the number of processes, the arrival time of each process and the number of jobs for each process.
Step 2: Specify the number of servers and the number of jobs they can serve per unit time. Initialize LBQ = 0, LBD = 0 and clock = 0.
Step 3: Repeat steps 4 to 6 while the clock is less than 1000.
Step 4: If clock = arrival time:
    (i) LBQ = (LBQ + number of jobs) % number of servers
    (ii) LBD = (LBQ + number of jobs) / number of servers
    (iii) Distribute the LBD values among the servers. After one unit of time the jobs remaining at the servers are sent back to the LBQ.
    Else go to step 5.
Step 5: Check the LBQ and distribute the available jobs to the servers.
Step 6: Serve count for each server = serve count for each server + 1. Number of stages = number of stages + 1.
Step 7: Utilization of server = number of serves / number of stages.

4. FLOWCHART OF PROPOSED ALGORITHM

A block diagram of the proposed algorithm is given in Fig. 1, and the flowchart for the dynamic load balancing algorithm is shown in Fig. 2.

Fig. 1: Block diagram of the proposed algorithm
Fig. 2: Flow chart for the proposed algorithm

5. EXPERIMENTAL EVALUATION

C#.NET 2005 was used to implement the proposed algorithm. The algorithm was checked for a number of inputs, and the results of five of them are shown here. Different values of the parameters were taken to see the effect of each parameter on the algorithm.

Experiment 1
In this experiment the utilization of each server was investigated by fixing the number of jobs that each server can serve per unit of time and changing the number of jobs in each process. The arrival time of each process, with a time difference of 10 units, is given in Table 1.

Table 1: Arrival time of processes

The number of jobs that can be served by each server per unit time is given in Table 2.

Table 2: Number of jobs served by each server

We noted that as the number of jobs increases, the utilization of each server also increases proportionally. Because server 3 can serve the maximum number of jobs per unit time, server 3 has the best utilization. Table 3 shows the results of the experiment as the utilization of the different servers with changing load; the same result is shown in graphical form in Fig. 3.

Table 3: Utilization of different servers as the number of jobs varies

Fig. 3: The relation between the utilization of each server and the number of jobs

Experiment 2
In this experiment we investigated the utilization of each server when the processes arrive at a constant time difference of one unit, as shown in Table 4. The number of jobs that can be served by each server per unit time is given in Table 5.

Table 4: Arrival time of processes
Table 5: Number of jobs served by each server
Fig. 4: The relation between the utilization of each server and the number of jobs

We again noted that the utilization increases in proportion to the number of jobs served, and because server 3 can serve the maximum number of jobs per unit time, server 3 has the best utilization.

Experiment 3
In this experiment we considered all the servers to be at par, so that each can serve an equal number of jobs per unit of time. We took this value to be 2 jobs per unit time, and took the arrival times of the processes as 10, 11 and 12 respectively. Again the utilization increases with the increase in the number of jobs; in this experiment, since all the servers can serve the same number of jobs per unit time, they all have the same utilization.

We noted that the utilization is again proportionally increasing with the increase in the number of jobs.

Fig. 5: The relation between the utilization of each server and the number of jobs

Experiment 4
In this experiment we investigated the utilization of each server by fixing the number of jobs for each process and changing the number of jobs that each server can serve per unit time. The arrival time and the number of jobs for each process were as follows:

Arrival time for process 1 = 10
Arrival time for process 2 = 11
Arrival time for process 3 = 12
Number of jobs in process 1 = 52
Number of jobs in process 2 = 209
Number of jobs in process 3 = 159

Table 6: Relationship between the number of jobs that a server can serve per unit time and the utilization

Fig. 6: The relation between the utilization of each server and the number of jobs each server can serve

Experiment 5
In this experiment we investigated the number of stages in which the load balancer is able to complete the total distribution and execution of the processes when the number of jobs for each process is fixed and the serving capacity of each server changes. The arrival times (10, 11 and 12) and the numbers of jobs (52, 209 and 159) were the same as in Experiment 4. We noted that the relation between the number of jobs each server can serve per unit time and the number of stages is inversely proportional.

Fig. 7: The relation between the number of jobs each server can serve per unit time and the number of stages
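The inverse relation in Experiment 5 follows from a simple capacity argument: with three identical servers each serving c jobs per unit time, draining the 52 + 209 + 159 = 420 jobs takes roughly ceil(420 / 3c) stages. This back-of-the-envelope model (not the authors' simulator, and ignoring the small LBQ remainder effects) reproduces the stage counts reported in Table 7:

```python
import math

TOTAL_JOBS = 52 + 209 + 159   # jobs of the three processes in Experiment 5
NUM_SERVERS = 3

def stages_needed(cap_per_server):
    """Stages to drain all jobs when every server can serve
    cap_per_server jobs per unit of time."""
    return math.ceil(TOTAL_JOBS / (NUM_SERVERS * cap_per_server))

for cap in range(2, 12):
    print(cap, stages_needed(cap))
# cap 2 -> 70, cap 3 -> 47, ..., cap 11 -> 13: the values in Table 7,
# falling inversely with the serving capacity.
```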

Table 7: Results obtained from Experiment 5

No.   Jobs served per unit time by each server   Stages
1     2                                          70
2     3                                          47
3     4                                          35
4     5                                          28
5     6                                          24
6     7                                          20
7     8                                          18
8     9                                          16
9     10                                         14
10    11                                         13

6. CONCLUSION

In this paper we have discussed a new load balancing algorithm. Preliminary results show that this algorithm has the potential to significantly improve the fairness of load balancing between the servers whenever traffic arrives. The capacity of a server is of major importance in serving traffic requests. Fairness is very important for increasing the performance of the system; it serves all the clients' requests in the shortest time and helps the system remain scalable.

7. REFERENCES

[1] K. G. Coffman and A. M. Odlyzko, "Internet growth: Is there a Moore's Law for data traffic?", AT&T Labs Research, revised version, June 4, 2001.
[2] Eitan Altman, Urtzi Ayesta and Balakrishna Prabhu, "Load Balancing in Processor Sharing Systems", Proceedings of the 3rd International Conference on Performance Evaluation Methodologies and Tools, October 20-24, 2008, Athens, Greece.
[3] M. Livny and M. Melman, "Load balancing in homogeneous broadcast distributed systems", in Proc. Conf. Performance, ACM, 1982, pp. 47-55.
[4] http://content.websitegear.com/article/load_balance_types.htm