A Review of an Algorithm for Dynamic Load Balancing in Distributed Networks with Multiple Supporting Nodes and Interrupt Service Payal Malekar 1, Prof. Jagruti S. Wankhede 2 1 Student, Information Technology, Jawaharlal Darda Institute of Engineering and Technology, Yavatmal, payalmalekar91@gmail.com 2 Assistant Professor, Information Technology, Jawaharlal Darda Institute of Engineering and Technology, Yavatmal, jswankhede86@gmail.com ABSTRACT Dynamic load balancing is a popular and recent technique that protects ISP networks from sudden congestion caused by load spikes, link failures, or interrupts. Dynamic load balancing protocols, however, require techniques for handling multiple paths. In a distributed network, the performance of the system depends on how the work is distributed across the participating nodes. Dynamic load balancing has many advantages over static load balancing, but dynamic schemes are inevitably more complex, while static load balancing involves considerable overhead of its own; the benefits of neither approach can be ignored. Load balancing reduces the mean and standard deviation of task response times, especially under heavy and/or unbalanced workloads. Performance is closely tied to the load index, here taken as queue length: reducing the average response time increases the performance of the network. Load balancing remains effective even when a large portion of the work is immobile, and even lightly loaded systems benefit from it. System instability is possible but can easily be avoided. In dynamic load balancing, tasks are distributed across a number of hosts. In one scheme, network performance can be increased by having one centralized node handle the uneven load; the centralized node runs an interrupt service routine to handle interrupt requests, which increases efficiency. Keywords: Load Balancing, Complexity, Migration, Priority, Host. 1. 
INTRODUCTION The rapid growth in the number of computer users has increased the number of resource-sharing applications and, ultimately, the amount of load across the Internet. One solution to this problem is to increase the capacity of the server that processes the requests; another is to distribute the load effectively across different servers. This distribution is termed load balancing. The idea behind load balancing is to migrate excess load from heavily loaded nodes to lightly loaded ones. The critical question is when to migrate load. The decision is typically based on the local load situation: for example, the load on each host in the network is compared with the status of the node to which the load is to be transmitted. But two nodes, each holding two tasks, may not be equally loaded, because in a distributed environment the nodes are heterogeneous in nature. Load can therefore be estimated from the processing capacity of a node, where processing capacity means not only processor speed but the overall configuration of the node. Static load balancing can be used as a preprocessor for computation in applications with a heavy but predictable load. Other applications, in which the load is unpredictable or changes over time, such as adaptive finite element methods, require dynamic load balancing, which adjusts the decomposition of the load as the computation proceeds. Load balancing algorithms vary in their complexity, where the complexity of an algorithm is measured by the amount of communication used to approximate the least loaded node. Static algorithms collect no information and select nodes probabilistically, while dynamic algorithms collect varying amounts of state information to make their decisions. The most
significant cost in a dynamic algorithm is the cost of transferring a task from one node to another; it is this cost that limits dynamic algorithms, which must nevertheless collect varying amounts of state information. Since the decision ultimately relies on the information collected, more information leads to a more efficient decision, but a complex balancing algorithm cannot keep up with rapidly changing information passed through the system. The solution is to base the decision only on the information passed between the two nodes involved, i.e. the sender and the receiver. Load balancing has the following advantages: the average response time is reduced even after accounting for job transfer overhead; the performance of each node and of the network increases; and small jobs are not subjected to starvation. There are two broad categories of load balancing algorithms. In the first, source-initiative algorithms, the hosts where jobs arrive take the initiative to transfer them; in the second, server-initiative algorithms, hosts that are able and willing to receive transferred jobs go out to find such jobs. A dynamic algorithm is complex, but its benefits far outweigh its complexity. 2. APPROACHES FOR LOAD BALANCING The following are approaches for load balancing. 2.1 PRIMARY APPROACH FOR DYNAMIC LOAD BALANCING A distributed system consists of various independent nodes, usually connected by a local area network. Static load balancing has many disadvantages because the set of jobs at a station is fixed: while jobs are executing, static balancing does nothing. In the primary dynamic approach, jobs are allocated to each node in the network, and the load at each node (queue length, etc.) is calculated dynamically [1]. Fig-1: Initial model of dynamic load balancing, migrating overload from a heavily loaded node to a lightly loaded node
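The dynamic load estimation of Section 2.1 and the sender-initiated transfer decision described in the introduction can be sketched as follows. This is a minimal illustration, not the algorithm of [1]: the queue-length-over-capacity load index, the fixed overload threshold, and all names are our assumptions for the sketch.

```python
# Sketch only: the load index and threshold below are illustrative
# assumptions; the surveyed papers do not prescribe a single metric.

def load_index(queue_length, cpu_speed):
    """Normalize queue length by processing capacity, so a fast node
    with the same queue counts as less loaded than a slow one."""
    return queue_length / cpu_speed

def pick_receiver(sender, nodes, threshold=1.0):
    """Sender-initiated policy: an overloaded node looks for the least
    loaded other node; returns None if migration is not worthwhile."""
    if load_index(*sender) <= threshold:
        return None                      # sender is not overloaded
    others = [n for n in nodes if n is not sender]
    best = min(others, key=lambda n: load_index(*n))
    if load_index(*best) < load_index(*sender):
        return best                      # migrate toward the lightest node
    return None
```

Each node is represented here as a `(queue_length, cpu_speed)` pair, reflecting the point above that capacity, not just queue length, determines how loaded a node really is.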
As shown in Fig-1, processes are allocated to the available nodes as they arrive, or they are stored in a queue; queued processes are allocated to the primary nodes one by one. Migration is performed from heavily loaded nodes to lightly loaded nodes, and process migration depends on the network bandwidth and the workload. Nodes are grouped into clusters to reduce traffic: a lightly loaded node is first sought in the same cluster; if no suitable node is found, nearby clusters are searched, and once a suitable node is found, the transfer takes place if the load-transfer protocol is satisfied [1]. 2.2 Centralized Approach for Load Balancing Due to congestion in the network, a heavily loaded node is often unable to find a receiver in its cluster, and nodes cannot search far beyond their cluster. It would be better if a heavily loaded node could find a temporary node in the same cluster to handle the overload. Therefore, in the centralized approach one centralized node is provided in each cluster. Whenever a primary node is overloaded, it searches for other lightly loaded primary nodes; if one is available, the load is transferred to that node [3]. If no such node is available, the centralized node is used to accommodate the overload of the primary node. This centralized node is not assigned any processes initially; it is given only the overload of the primary nodes. The centralized node has a better configuration and more features than the other nodes in the cluster, and the traffic between the centralized node and the primary nodes is kept to a minimum to avoid network delay and increase performance. Fig-2: Load balancing with a centralized node 2.3 Modified Approach for Dynamic Load Balancing In the centralized approach there is only a single node to process the load at high speed by switching, but this still has limitations. One approach to removing this limitation is to divide the centralized node into many
small nodes called supporting nodes (SNs). Here, too, the supporting nodes are not allotted any load initially [2]. As a result, the supporting nodes are often idle or improperly loaded, which wastes their processing power. By keeping the SNs busy we can use this free time, so in the next refinement each supporting node is given some load initially and maintains a priority queue [4]. Fig-3: Modified approach with interrupt service for load balancing In the above figure, load is transferred from the primary node PN to the supporting node SN when PN is overloaded. The migration of a process from the primary node to the supporting node is based on the priority of the process [4]. 3. ALGORITHM FOR MODIFIED APPROACH This algorithm involves two types of node, primary nodes and supporting nodes, whose functionality is as their names suggest: primary nodes are the main nodes, and supporting nodes handle the migrated overload. When a primary node is overloaded, it searches for another (lightly loaded) primary node within the cluster; if such a node is found, the overloaded primary node balances its load with the available primary node [2]. Algorithm: Ni: list of primary nodes SNj: list of supporting nodes Ps: supporting-node priority index Pk: list of processes in the process queue i, j, k ∈ N, j < i 1. At the start, assume each node has some load. 2. Some supporting nodes may have load. Ni Load, SNj Load Procedure: Main() I. Suppose a node Nt is heavily loaded with load ξ,
where 0 < t < i, ξ ∈ Pk
II. If Search_node() finds an Available_Node:
        Available_Node ← ξ
        Load(Nt) = Load(Nt) − ξ
    Else:
        Call Search_S_node()
        Available_S_node ← ξ
        Load(Nt) = Load(Nt) − ξ
        IN_S(Available_S_node, ξ)
Procedure: IN_S(SN_node, ξ)
I. Priority is assigned to SN_node as Ps(SN_node) = t
II. If t > Ps(RP):
        P_List ← RP
        RP ← ξ
    Else:
        P_List ← ξ
Procedure: Search_node()
I. For each Ni, except the node initiating the Search_node procedure, check for the node with minimum load.
II. If the desired node is available, return the index of the available node.
Procedure: Search_S_node()
I. For each SNj, except the node initiating the search, check for the supporting node with minimum load.
II. Return the index of the available supporting node.
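The pseudocode above can be made concrete as follows. This is a minimal Python sketch of the modified approach, assuming queue length as the load measure and a larger number as higher priority; the class and function names are ours, not from [1], and the `interrupt` method plays the role of the ISR described in this section.

```python
import heapq

# Sketch of the modified approach (Section 3). Names and the
# queue-length load measure are illustrative assumptions.

class PrimaryNode:
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold   # load above this marks the node overloaded
        self.queue = []              # pending processes

    @property
    def load(self):
        return len(self.queue)

    def overloaded(self):
        return self.load > self.threshold


class SupportingNode:
    def __init__(self, name):
        self.name = name
        self.running = None          # (priority, process) currently executing
        self.p_list = []             # priority queue (max-heap via negation)

    def interrupt(self, priority, process):
        """ISR: preempt the running process if the migrated one has
        higher priority; otherwise park the migrated process."""
        if self.running is None or priority > self.running[0]:
            if self.running is not None:
                heapq.heappush(self.p_list, (-self.running[0], self.running[1]))
            self.running = (priority, process)
        else:
            heapq.heappush(self.p_list, (-priority, process))


def search_node(primaries, initiator):
    """Search_node(): least loaded primary other than the initiator,
    or None if every candidate is itself overloaded."""
    candidates = [n for n in primaries if n is not initiator and not n.overloaded()]
    return min(candidates, key=lambda n: n.load, default=None)


def balance(initiator, primaries, supporters, priority, process):
    """Main(): migrate one process off an overloaded primary node."""
    target = search_node(primaries, initiator)
    if target is not None:
        target.queue.append(process)             # primary-to-primary transfer
    else:
        target = min(supporters, key=lambda s: len(s.p_list))
        target.interrupt(priority, process)      # fall back to a supporting node
    initiator.queue.remove(process)              # Load(Nt) = Load(Nt) - xi
    return target
```

Here `balance` corresponds to Main(): it first tries Search_node() over the primary nodes and only falls back to a supporting node, via the interrupt/ISR path, when every other primary node is itself overloaded.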
If a suitable primary node is not found, the primary node approaches the supporting nodes, and on finding a suitable supporting node it interrupts the SN to execute its process [1]. The SN executes an ISR (Interrupt Service Routine) to handle the interrupt. Each supporting node maintains a priority queue, and each process is assigned a priority. If the process from the PN has higher priority than the process currently running on the SN, the running process is stored in the priority queue at a suitable position and the PN's process runs on the SN; otherwise the process from the PN is itself stored in the priority queue [6]. 4. CONCLUSION A modified model has been formulated for the dynamic distribution of load across servers. It builds on the centralized model by dividing the centralized node into many supporting nodes. Switching between processes depends on their priorities. Since the partitioned nodes often remain idle, some load is allocated to them during idle time, and whenever a high-priority process arrives, migration takes place. Time is an important factor when calculating the migration cost. REFERENCES [1] Parveen Jain and Daya Gupta, "Algorithm for Dynamic Load Balancing in Distributed Systems with Multiple Supporting Nodes by Exploiting the Interrupt Service", International Journal of Recent Trends in Engineering, Vol. 1, No. 1, May 2009. [2] Srikanth Kandula, Dina Katabi, Shantanu Sinha, and Arthur Berger, "Dynamic Load Balancing Without Packet Reordering". [3] Hari Reddy, "Performance Evaluation of Static and Dynamic Load-Balancing Schemes for a Parallel Computational Fluid Dynamics (CFD) Software Application (FLUENT) Distributed Across Clusters of Heterogeneous Symmetric Multiprocessor Systems", High Performance Computing Solutions Development, Systems and Technology Group, IBM. [4] Cortes A., Ripoll A., Senar M., and Luque E., "Performance Comparison of Dynamic Load-Balancing Strategies for Distributed Computing", Proc. 32nd Hawaii Conf. System Sciences, Vol. 8, p. 8041, 1999.
[5] Cybenko G., "Dynamic Load Balancing for Distributed Memory Multiprocessors", Journal of Parallel and Distributed Computing, Vol. 7, pp. 279-301, 1989. [6] Dhakal S., Hayat M. M., Elyas M., Ghanem J., and Abdallah C. T., "Load Balancing in Distributed Computing Over Wireless LAN: Effects of Network Delay", IEEE WCNC 2005, Vol. 2, pp. 1755-1760, 2005.