Design of Decentralized Load Balancing Algorithm for Cloud Environment

Sarika Vasantrao Bodake 1, Dr. Radhakrishna Naik 2
ME CSE Student, CSE Department, MIT, Aurangabad, India 1
Professor, CSE Department, MIT, Aurangabad, India 2

Abstract: Cloud computing is the next generation of computation, and in the future people may well keep everything they need on the cloud. Cloud computing provides resources to clients on demand; these resources may be software resources or hardware resources. Cloud computing architectures are distributed and parallel and serve the needs of multiple clients in different scenarios. This distributed design deploys resources distributively to deliver services efficiently to users in different geographical locations. Clients in a distributed environment generate requests randomly at any processor, and the major drawback of this randomness is related to task assignment: unequal task assignment to the processors creates an imbalance, i.e., some of the processors are overloaded and some of them are underloaded. The objective of load balancing is to transfer load from overloaded processes to underloaded processes transparently. Load balancing is one of the central issues in cloud computing. To achieve high performance, minimum response time and a high resource utilization ratio, we need to transfer tasks between nodes in the cloud network; a load balancing technique is used to distribute tasks from overloaded nodes to underloaded or idle nodes. In the following sections we discuss cloud computing, load balancing techniques and the proposed work of our load balancing system. The proposed load balancing algorithm is simulated on the CloudAnalyst toolkit. Performance is analyzed on the parameters of overall response time, data transfer, average data center servicing time and total cost of usage. Results are compared with three existing load balancing algorithms, namely Round Robin, Equally Spread Current Execution Load, and Throttled. Results of the case studies performed show more data transfer with minimum response time.

Keywords: Cloud Computing; Load Balancing; Load Balancing Algorithm; IaaS.

I. CLOUD COMPUTING
There is no single precise definition of cloud computing; we can say that cloud computing is a collection of distributed servers that provide services on demand [8]. The services may be software or hardware resources, as the client needs. Basically, cloud computing has three major components [9]. The first is the client: the end user interacts with the client to avail the services of the cloud. The client may be a mobile device, a thin client or a thick client. The second component is the data center: a collection of servers hosting different applications, which may exist at a large distance from the clients. Nowadays a concept known as virtualization [6] [7] is used to install software that allows multiple instances of virtual server applications. The third component of the cloud is the distributed servers: the parts of a cloud that are present throughout the Internet hosting different applications. While using an application from the cloud, however, the user feels as if he were using the application from his own machine. Cloud computing provides three kinds [5] of services: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). SaaS provides software to clients that does not need to be installed on the client's machine. PaaS provides a platform on which to build an application, such as a database.
IaaS provides computational power to users to execute tasks from another node.

II. LOAD BALANCING
In a cloud system it is possible that some nodes are heavily loaded while others are lightly loaded [9]. This situation can result in poor performance. The goal of load balancing is to distribute the load among the nodes in the cloud environment. Load balancing is one of the central issues in cloud computing [6]. For better resource utilization, it is desirable for the load in the cloud system to be balanced [9] equally. Thus, a load balancing algorithm [1] tries to balance the total system load by transparently transferring workload from heavily loaded nodes to lightly loaded nodes, in an attempt to ensure good overall performance relative to some specific metric of system performance. When performance is considered from the user's point of view, the metric involved is usually the response time of the processes. However, when performance is considered from the resource point of view, the metric involved is total system throughput [3]. In contrast to response time [2], throughput is concerned with ensuring that all users are treated fairly and that all are making progress.
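To make the basic idea concrete, the following is a minimal, hypothetical Python sketch (not part of the proposed system) that classifies nodes as heavily or lightly loaded relative to the mean load and pairs them for transparent work transfer. The node names, the 10% tolerance and the load values are illustrative assumptions.

```python
# Minimal sketch: classify nodes by load relative to the cluster mean and
# pair overloaded nodes with underloaded ones. Names and values are illustrative.
from statistics import mean

def balance_plan(loads, tolerance=0.10):
    """loads: mapping of node name -> current load (e.g. number of queued tasks).
    Returns a list of suggested (sender, receiver) transfer pairs."""
    avg = mean(loads.values())
    overloaded = [n for n, l in loads.items() if l > avg * (1 + tolerance)]
    underloaded = [n for n, l in loads.items() if l < avg * (1 - tolerance)]
    # Pair the most heavily loaded senders with the most lightly loaded receivers.
    overloaded.sort(key=loads.get, reverse=True)
    underloaded.sort(key=loads.get)
    return list(zip(overloaded, underloaded))

if __name__ == "__main__":
    # Hypothetical snapshot of per-node load; vm1 and vm3 should shed work.
    print(balance_plan({"vm1": 12, "vm2": 3, "vm3": 7, "vm4": 1}))
    # -> [('vm1', 'vm4'), ('vm3', 'vm2')]
```

Depending on the metric of interest, the "load" value fed to such a plan could be queue length (throughput-oriented) or measured response time (user-oriented).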
To improve the performance of the system and achieve a high resource allocation ratio, we need a load balancing mechanism in the cloud. The characteristics of load balancing are [1] [5]:
- Distribute the load equally across all the nodes.
- Achieve high user satisfaction.
- Improve the performance of the system.
- Reduce response time.
- Increase the resource utilization ratio.
Let us take an example of the above characteristics. Suppose we have developed an application and deployed it on the cloud, and the application has become very popular, with thousands of people using it. Suppose many users are using this application at the same time on a single machine and we have not applied a load balancing approach. Then that particular server is very busy executing the users' tasks while the other servers are lightly loaded or idle, and users are not satisfied because of the slow response and poor performance of the system. If we apply load balancing to our application, we can distribute some users' tasks to other nodes and obtain higher performance and faster response times. In this way we can achieve the above characteristics of load balancing.

A. Taxonomy of Load-Balancing Algorithms

Fig. 1 Taxonomy of load-balancing algorithms (Static and Dynamic, with Centralized and Distributed sub-classes)

There are two main classes of load balancing algorithms [3] [4]: i) static load balancing and ii) dynamic load balancing. Static algorithms work statically and do not consider the present state of the nodes. Dynamic algorithms [4] work on the current state of the nodes and distribute load among them. Static algorithms use only information about the average behavior of the system, ignoring its present state. Dynamic algorithms, on the other hand, react to the system state, which changes dynamically. Static load balancing [4] algorithms are less complex because there is no need to maintain and process system state information. However, the potential of static algorithms is limited by the fact that they do not react to the present system state. The attraction of dynamic algorithms is that they respond to the system state and are therefore better able to avoid states with unnecessarily poor performance. For this reason, dynamic policies have considerably greater performance benefits than static policies. However, since dynamic algorithms [5] must collect and react to system state information, they are necessarily more complex than static algorithms.

III. RELATED WORK
Numerous researchers have proposed load balancing algorithms [2], [12], [13] for parallel and distributed systems, as well as for cloud computing environments [14]. For a dynamic load-balancing algorithm, it is unacceptable to exchange state information frequently because of the high communication overhead. To reduce the communication overhead, Martin et al. [23] studied the effects of communication latency, overhead, and bandwidth in a cluster architecture to observe the impact on application performance. Anand et al. [2] proposed an Estimated Load Information Scheduling Algorithm (ELISA), and Mitzenmacher [24] analyzed the usefulness of the extent to which past information can be used to estimate the state of the system. Arora et al. [21] proposed a decentralized load-balancing algorithm for a Grid environment. Although this work tries to incorporate the communication latency between two nodes during the triggering process in its model, it does not consider the actual cost of workload transfer.
Our approach takes the job migration cost into account for the load-balancing decision. In [15], [16], and [18], a sender processor collects status information about neighboring processors by communicating with them at every load-balancing instant. This can lead to frequent message transfers. For a large-scale cloud environment where the communication latency is very large, the status exchange at every load-balancing instant results in large communication overhead. In our approach, the problem of frequent exchange of information is mitigated by estimating the load based on the system state information received at sufficiently large intervals of time. We have proposed algorithms for a cloud environment that are based on the estimation approach carried out in the design of ELISA [2]. In ELISA, load balancing is carried out based on queue length: whenever there is a difference in queue length, jobs are migrated to the lightly loaded processor, ignoring the job migration cost. This cost becomes an important issue when the communication latency is very large, as in a Grid environment, and/or the job size is large. Each of our algorithms balances the load by considering the job migration cost, which is mainly influenced by the available bandwidth between the sender and receiver nodes.
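As an illustration of a bandwidth-aware migration decision of this kind, the sketch below compares an estimated transfer cost (job size divided by the sender-receiver bandwidth) against an estimated waiting-time saving. The benefit formula, the parameter names and the numbers are illustrative assumptions, not the paper's exact cost model.

```python
# Sketch of a bandwidth-aware migration decision (illustrative assumptions only).

def migration_cost_s(job_size_mb, bandwidth_mbps):
    """Estimated time (s) to move the job over the sender-receiver link."""
    return (job_size_mb * 8) / bandwidth_mbps

def should_migrate(sender_queue, receiver_queue, mean_service_s,
                   job_size_mb, bandwidth_mbps):
    """Migrate only if the expected waiting-time saving outweighs the transfer cost."""
    expected_saving_s = (sender_queue - receiver_queue) * mean_service_s / 2.0
    return expected_saving_s > migration_cost_s(job_size_mb, bandwidth_mbps)

if __name__ == "__main__":
    # A 400 MB job over a 100 Mbps link costs 32 s to move; a queue difference
    # of 6 jobs with a 12 s mean service time gives an expected saving of 36 s.
    print(should_migrate(sender_queue=8, receiver_queue=2, mean_service_s=12.0,
                         job_size_mb=400, bandwidth_mbps=100))   # True
```

On a low-bandwidth Grid-like link the same job could easily cost more to move than it saves, which is exactly the situation the migration-cost term is meant to catch.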
IV. PROBLEM DEFINITION
In today's competitive market, measuring application success by the user interface alone is no longer enough. Poor availability costs revenue, loyalty and brand image. Application leaders are shifting business-centric metrics to service level management (SLM) [8] to bring IT closer to the business. Our aim is to develop a scalable cloud solution [6] that is capable of delivering the needs of a stock broking firm without compromising on performance, scalability and cost.

A. Features
We demonstrate load balancing using the following features:
1. User-level load balancing on the stock application
2. Cloud setup and application deployment [8]
3. Obtaining cloud statistics and performance analysis of every node
4. Resource monitoring [5] of cloud nodes
5. Deploying an application WAR file on a cloud node, considering its processor and RAM usage, using the cloud controller

V. SYSTEM DESIGN AND IMPLEMENTATION
In this system a dynamic cloud computing environment is used, and an intermediate node monitors the load of each VM in the cloud pool. In this approach the user sends requests to the intermediate node, which is responsible for transferring the client's requests to the cloud. Here, the load is considered in terms of CPU load, the amount of memory used, delay and network load.

A. Architecture
The proposed load balancing system in cloud computing consists of the User, Server, Load Balancer, and Stock Application.

Fig. 2 System Architecture

User: End users interact with the server to manage information related to the cloud. Users assign tasks to the server on which the stock application is running.
Server: On a user's request, the distributed server sends the request to the load balancer to check whether any node is available. After getting a response from the load balancer, the server migrates the task coming from the user to a node on which the stock application is running.
Load Balancer: The load balancer monitors all nodes in the cloud environment on which the stock application is running. It calculates the free RAM, free CPU and response time of each node, selects the node whose RAM and CPU are least utilized and whose response time is lowest, and sends the migration link to the server.
Stock Application: After the proper node is selected for execution of the user-assigned task, that node sends the response to the user's query.

B. System Flow
Figure 3 shows the flow of the proposed system.

Fig. 3 System Flow

C. Dynamic Load Algorithm (DLA)
The DLA algorithm is used in this project. The algorithm uses six phases for load balancing, as follows:
1) Get Load Status of All the Nodes: Here we set up a scheduler which contains a Monitor to obtain and read the load status, and a Database to store the load status and the historical data of users' work requests to the server. Most current methods of node load status collection divide the system resources into several types: CPU utilization, memory, disk I/O, network bandwidth, etc. But with servers of different sizes, or servers providing different services, we cannot propose a unified set of these parameters.
2) Evaluate the Status of a Node: We set a threshold; when the resource utilization is beyond the threshold we consider the node to be overloaded, and if the resource utilization is under the threshold we know that
the node is in a light-load status, and the two statuses are represented accordingly.
3) Predict the Future Load Flow: Based on the statistics, the system load status may show seasonal change, which helps to predict the future load of a node.
4) Benefit Estimate: When the load status of a node N is marked as overloaded but the overload is caused by a transient spike, we cannot immediately decide whether migration should be performed.
5) Choose Receiver Node: We use the forward probability method to help us choose a receiver host; every candidate node's probability of receiving a job or VM mainly depends on the result of the load status evaluation.
6) Migration: Migrate load from the heavily loaded node to the lighter one.
The pseudo code is given in figure 4.

Pseudo Code for DLA
Input: Cloud, VM nodes, tasks and allocation of the server to the client, threshold value
Output: Solution(s), i.e., task completion
Begin
  Assign the task to one of the nodes i, where i = 1, 2, 3, ...
  Estimate the service time for the assigned task.
  Also determine the CPU and RAM utilization of the nodes.
  If the assigned node's CPU and RAM utilization is above the threshold or its response time is high then
    Start DLA:
      Calculate the utilization and service time of all nodes in the cloud.
      Compare all three components, i.e., CPU utilization, RAM utilization and service time.
      Select the node whose response time is fastest and whose utilization is below the threshold.
      Transfer the task to the selected node.
  Else
    Continue execution of the current task on the assigned node.
End

Fig. 4 Pseudo code for DLA

(A simplified, runnable sketch of this decision logic is given after the load-testing summary below.)

VI. PERFORMANCE EVALUATION AND COMPARISONS
For this project a load testing tool is needed to measure the performance of user requests. Since the project is based on a cloud environment, we need a cloud load testing tool, and several such tools are available online to measure the load on cloud nodes. For this project we use LoadStorm, an online cloud load testing tool. The summary of the load testing results is given below.

TABLE 1: SUMMARY OF THE RESULTS

          Requests   Response (avg, s)   Response (max, s)   RPS (avg)   Throughput (avg)   Total Transfer
HTML      288        0.6                 1.01                0.24        23.05 kb/s         0.03 GB
Other*    1531       0.22                0.72                1.28        10.31 kb/s         0.01 GB
Total     1819       0.28                1.01                1.52        33.36 kb/s         0.4 GB

*Other includes JavaScript, CSS, images, PDF, task migration, etc. (any content type except HTML and XML).

The results are shown in the figures below.

Fig. 5 All pages completion time
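As a companion to the pseudo code in Fig. 4, the following is a minimal, hypothetical Python sketch of the DLA threshold check and receiver selection described in Section V-C. The node fields, the 0.80 threshold and the example values are illustrative assumptions; the evaluation reported in this paper was carried out in the CloudAnalyst simulator, not with this code.

```python
# Simplified sketch of the DLA decision in Fig. 4 (illustrative only).
# Each node reports CPU and RAM utilization (0..1) and a recent response time in ms.

THRESHOLD = 0.80            # utilization beyond this marks a node as overloaded

def overloaded(node):
    return node["cpu"] > THRESHOLD or node["ram"] > THRESHOLD

def choose_receiver(nodes):
    """Phase 5: among nodes below the threshold, prefer the fastest responder."""
    candidates = [n for n in nodes if not overloaded(n)]
    return min(candidates, key=lambda n: n["resp_ms"]) if candidates else None

def dla(assigned, nodes):
    """Phases 2-6 collapsed: keep the task on its assigned node unless that node
    is overloaded and a lighter, faster node exists; otherwise migrate."""
    if not overloaded(assigned):
        return assigned                      # continue on the assigned node
    receiver = choose_receiver(nodes)
    return receiver or assigned              # migrate only if a receiver was found

if __name__ == "__main__":
    pool = [{"name": "vm1", "cpu": 0.95, "ram": 0.90, "resp_ms": 600},
            {"name": "vm2", "cpu": 0.40, "ram": 0.35, "resp_ms": 120},
            {"name": "vm3", "cpu": 0.55, "ram": 0.60, "resp_ms": 200}]
    print(dla(pool[0], pool)["name"])        # vm2: lightly loaded and fastest
```

A fuller implementation would also apply the benefit estimate of phase 4, ignoring overloads that a transient spike would resolve on its own.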
Some of the load balancing techniques have been tested and monitored on CloudAnalyst. Here the Round Robin, Equally Spread Current Execution Load, and Throttled algorithms are used. These algorithms were tested on the CloudAnalyst simulation tool and the results are given in the tables below.

TABLE 2: COMPARISON OF RESPONSE TIME

Sr. No.   Name of Algorithm                        Overall Response Time in ms
                                                   Avg.      Min      Max
1         DLA                                      280       33.36    1010
2         Round Robin                              292.79    39.33    607.82
3         Equally Spread Current Execution Load    292.84    37.83    608.77
4         Throttled                                292.79    38.51    597.84

Fig. 6 Comparison chart of response time (ms): Avg, Min and Max for each algorithm

The total data transferred between the server and the virtual users is given below. The data transfer is given in gigabytes (GB).

TABLE 3: COMPARISON OF DATA TRANSFER (GB)

Virtual Users   ESCEL   RR     Throttled   DLA
5               0.31    0.32   0.32        0.3
10              0.5     0.48   0.51        0.55
15              1.38    1.37   1.38        1.45
20              1.98    1.98   1.96        2

Fig. 7 Comparison of total data transfer (GB) against the number of virtual users

VII. CONCLUSION
Cloud computing has been widely adopted by business, although there are several existing issues such as server consolidation, load balancing, energy management and virtual machine migration that have not been comprehensively addressed. Central to these issues is load balancing, which is required to distribute the excess dynamic local workload evenly to all the nodes in the whole cloud to achieve high user satisfaction and a high resource utilization ratio. It also ensures that every computing resource is distributed efficiently and fairly. Existing load balancing techniques that have been studied mainly focus on reducing overhead, reducing service response time and improving performance, but none of the techniques has considered the execution time of a task at run time. Therefore, there is a need to develop load balancing techniques that can improve the performance of cloud computing along with maximum resource utilization.

REFERENCES
[1] Hongbin Liang, Lin X. Cai, Dijiang Huang, Xuemin (Sherman) Shen, and Daiyuan Peng, An SMDP-Based Service Model for Interdomain Resource Allocation in Mobile Cloud Networks, IEEE Transactions on Vehicular Technology, Vol. 61, No. 5, pp. 2222-2232, 2012.
[2] Zhenhuan Gong, Prakash Ramaswamy, Xiaohui Gu, Xiaosong Ma, SigLM: Signature-Driven Load Management for Cloud Computing Infrastructures, IEEE Transactions on Grid Technology, Vol. 60, No. 5, pp. 978-986, 2011.
[3] Jianying Luo, Lei Rao, and Xue Liu, Temporal Load Balancing with Service Delay Guarantees for Energy Cost Optimization in Internet Data Centers, IEEE Transactions on Parallel and Distributed Systems, pp. 1002-1011, 2013.
[4] Tim Dornemann, Ernst Juhnke, Bernd Freisleben, On-Demand Resource Provisioning for BPEL Workflows Using Amazon's Elastic Compute Cloud, 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, pp. 140-147, 2010.
[5] Ruchir Shah, Bhardwaj Veeravalli, and Manoj Misra, On the Design of Adaptive and Decentralized Load-Balancing Algorithms With Load Estimation for Computational Grid Environments, IEEE Transactions on Parallel and Distributed Systems, Vol. 18, No. 12, pp. 1675-1686, 2010.
[6] Daniel Warneke and Odej Kao, Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud, IEEE Transactions on Parallel and Distributed Systems, Vol. 22, No. 6, pp. 985-997, 2011.
[7] Mladen A. Vouk, Cloud Computing - Issues, Research and Implementations, IEEE Proceedings of the ITI 30th Int. Conf. on Information Technology Interfaces, pp. 31-40, 2011.
[8] Chun-Cheng Lin, Hui-Hsin Chin, and Der-Jiunn Deng, Dynamic Multiservice Load Balancing in Cloud-Based Multimedia Systems, IEEE Systems Journal, pp. 1-10, 2013.
[9] Yunhua Deng and Rynson W.H. Lau, On Delay Adjustment for Dynamic Load Balancing in Distributed Virtual Environments, IEEE Transactions on Visualization and Computer Graphics, Vol. 18, No. 4, pp. 529-537, 2012.
[10] Yi Lu, Qiaomin Xie, Gabriel Kliot, Alan Geller, James R. Larus, Albert Greenberg, Join-Idle-Queue: A Novel Load Balancing Algorithm for Dynamically Scalable Web Services, Elsevier Performance Evaluation, pp. 1056-1071, 2011.
[11] Jun Wang, Qiangju Xiao, Jiangling Yin, and Pengju Shang, DRAW: A New Data-gRouping-AWare Data Placement Scheme for Data Intensive Applications with Interest Locality, IEEE Transactions on Magnetics, Vol. 49, No. 6, pp. 2514-2520, 2013.
[12] Jiann-Liang Chen, Yanuarius Teofilus Larosa and Pei-Jia Yang, Optimal QoS Load Balancing Mechanism for Virtual Machines Scheduling in Eucalyptus Cloud Computing Platform, IEEE Proceedings of the 2nd Baltic Congress on Future Internet Communications, pp. 214-221, 2012.
[13] Brighten Godfrey, Karthik Lakshminarayanan, Sonesh Surana, Richard Karp, and Ion Stoica, Load Balancing in Dynamic Structured P2P Systems, IEEE INFOCOM, pp. 1-8, 2004.
[14] Qi Zhang, Lu Cheng, Raouf Boutaba, Cloud Computing: State-of-the-Art and Research Challenges, Springer, pp. 7-18, 2010.
[15] Shamsollah Ghanbari, Mohamed Othman, A Priority Based Job Scheduling Algorithm in Cloud Computing, Elsevier, International Conference on Advances in Science and Contemporary Engineering, pp. 778-785, 2012.
[16] Giuseppe Aceto, Alessio Botta, Walter de Donato, Antonio Pescape, Cloud Monitoring: A Survey, Elsevier Computer Networks, pp. 2093-2115, 2013.
[17] Marc Eduard Frincu, Scheduling Highly Available Applications on Cloud Environments, Elsevier Future Generation Computer Systems, pp. 1-16, 2012.
[18] L.D. Dhinesh Babu, P. Venkata Krishna, Honey Bee Behaviour Inspired Load Balancing of Tasks in Cloud Computing Environments, Elsevier Applied Soft Computing, pp. 1-12, 2013.
[19] Tin-Yu Wu, Wei-Tsong Lee, Yu-San Lin, Yih-Sin Lin, Hung-Lin Chan, Jhih-Siang Huang, Dynamic Load Balancing Mechanism Based on Cloud Storage, Proceedings of IEEE, pp. 102-106, 2012.
[20] John Harauz, Lori M. Kaufman, Bruce Potter, Data Security in the World of Cloud Computing, IEEE Security & Privacy, co-published by the IEEE Computer and Reliability Societies, pp. 61-64, 2009.
[21] Yashpalsinh Jadeja and Kirit Modi, Cloud Computing - Concepts, Architecture and Challenges, International Conference on Computing, Electronics and Electrical Technologies, co-published by the IEEE, pp. 877-880, 2012.
[22] Ramgovind S, Eloff MM, Smith E, The Management of Security in Cloud Computing, Information Security for South Africa (ISSA), co-published by the IEEE, pp. 1-7, 2010.
[23] Rajkumar Buyya, Chee Shin Yeo, Srikumar Venugopal, James Broberg, and Ivona Brandic, Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility, Future Generation Computer Systems, Volume 25, Number 6, Elsevier Science, pp. 599-616, 2009.
[24] Rajkumar Buyya, Rajiv Ranjan and Rodrigo N. Calheiros, Modeling and Simulation of Scalable Cloud Computing Environments and the CloudSim Toolkit: Challenges and Opportunities, Proceedings of the 7th High Performance Computing and Simulation Conference, IEEE Press, pp. 1-11, 2009.