Efficient Load Balancing in Cloud: A Practical Implementation




Shenzhen Key Laboratory of Transformation Optics and Spatial Modulation, Kuang-Chi Institute of Advanced Technology, Software Building, No. 9 Gaoxinzhong 1st Road, High-Tech Industrial Estate, Nanshan District, Shenzhen, Guangdong, P. R. China
{anirban.kundu, guanxiong.xu, ruopeng.liu}@kuang-chi.org

Abstract

In this paper, a load balancing strategy is developed for a heterogeneous Cloud environment consisting of several high-end servers and cluster computers. A set of Web servers is used to create a Web-based scenario, and load balancers control the propagation of user requests towards specific sub-networks of the proposed Cloud. Exhaustive experiments are presented to demonstrate the system's high performance. The goal of setting up the overall system network is to meet the high demand for services from client devices in a concurrent manner. Experimental results are reported for the proposed Cloud architecture in terms of space, CPU, disk, memory, and network usage. Test results for Windows clients are generated using the free tool "Jmeter" for a maximum of 2000 concurrent users; for Linux clients, test results are generated using the free tool "ab" for a maximum of 1000 concurrent users. A maximum of 2000 users is used so that the results can be presented concisely.

Keywords: Cloud, Load Balancing, Web Server, Manager Node, Virtual Machine

1. Introduction

A Web server refers either to the hardware (computer/machine) or to the software (application) responsible for executing and delivering Web content accessed through the Internet [1]. Users are connected to the outer world through the World Wide Web (WWW). A person can fetch data from the Web servers of particular organizations that are physically situated in different geographical locations around the world. Distributed computing [2] and network communication [3] are the major factors in handling such situations. A typical Web server is shown in Figure 1. The Web server of a large organization consists of a number of servers dedicated to specific activities. Web pages [4] and other related documents [5] are stored in Web servers as clusters [6]. A great deal of research has been conducted in the field of Web services [7], in which Web activities are combined with service perspectives [8]. In the case of data-oriented Web activities, users have to communicate with database servers situated behind the Web servers: a user submits a query to a specific Web search engine, and the search engine in turn communicates with the database servers to fetch the datasets matching that query. Web servers and database servers are therefore central components of Web technology [9] and Web services [10].

Figure 1. Typical structure of a Web server based cluster

In the case of activities involving huge volumes of data, communication between a number of servers is required at runtime. These servers typically deal with distinct phases of activity directed towards a common goal. In this scenario, the load balancing [11] concept comes into the picture to maintain stable communication for the entire duration of a user's request. Load balancing is used to distribute a concentrated workload across a number of computers and other resources of a single network and/or multiple networks [12].
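To make the idea concrete, the following short Python sketch (not part of the original system; the server names and the round-robin policy are illustrative assumptions) shows one simple way a balancer can spread incoming requests over a pool of servers:

from itertools import cycle

# Illustrative round-robin dispatch over a hypothetical pool of Web servers.
servers = ["web-01", "web-02", "web-03"]
rotation = cycle(servers)

def dispatch(request):
    # Each request is handed to the next server in turn, spreading the load evenly.
    return next(rotation)

for req in ["GET /a", "GET /b", "GET /c", "GET /d"]:
    print(req, "->", dispatch(req))   # web-01, web-02, web-03, web-01

Production balancers add health checks and weighting, but the basic dispatch decision takes this form.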

In a real-time scenario, if a computer or some other resource has a workload beyond its upper threshold, the excess work is dispersed among a set of similar devices, both to avoid a system crash due to overloading and to employ more devices so that the activity finishes with less time complexity [13]. Nowadays, service-oriented Web activities are popular in the Web industry; this model is known as the Cloud. Cloud [14] means the usage of computing hardware and software resources which are available in any geographical location and accessible over a network such as the Internet [15]. At the same time, the underlying infrastructure is not visible to the users: users are able to access data storage and/or other resources of the Cloud without any knowledge of its physical locations or logical behavior. The result of a user's query is presented on his/her desktop after the submitted task has been executed [16-17].

Figure 2. Typical structure of a database cluster [18]

A MySQL Cluster [18-19] consists of a set of computers, known as hosts, each running one or more processes. These processes are known as nodes. The nodes include MySQL servers that access Network Database (NDB) data, data nodes that store the data, one or more management servers, and other specialized data access programs. The relationship of these components in a MySQL Cluster is shown in Figure 2. All these programs work together to form a MySQL Cluster. When data is stored by the NDB storage engine, the tables and table data reside in the data nodes. Such tables are directly accessible from all other MySQL servers (SQL nodes) in the cluster. Thus, in a data storage application running on the cluster, if one application updates data through one MySQL server, all other MySQL servers see the update immediately, following specific scheduling strategies [20].

There are broadly three types of cluster nodes in a minimal MySQL Cluster configuration: the Management node, the Data node, and the SQL node. A brief overview of these nodes follows.

Management node - This type of node manages the other nodes within the particular MySQL Cluster. It provides configuration data, "start" and "stop" commands for specific nodes, backup facilities, and so on. Because it is devoted to managing the configuration of the other nodes, the management node should be started first, before any other type of node in the cluster.

Data node - This type of node stores the cluster data. The total number of data nodes depends on the number of replicas and the number of fragments. For example, four data nodes are required if there are two replicas in the system network and each replica holds two fragments. One replica is sufficient for data storage, but it does not provide redundancy; it is therefore recommended to have two (or more) replicas to provide redundancy and high availability.

SQL node - Cluster data is accessed through SQL nodes. In a MySQL Cluster, an SQL node is a typical MySQL server that handles queries. The term is used to designate any application which accesses MySQL Cluster data.
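The data-node sizing rule quoted above amounts to a one-line calculation; the following sketch simply restates it with the numbers used in the example (the helper name is ours, not a MySQL utility):

# Data nodes needed = replicas x fragments per replica (rule described above).
def required_data_nodes(replicas, fragments_per_replica):
    return replicas * fragments_per_replica

print(required_data_nodes(2, 2))   # 4 data nodes, as in the example above
print(required_data_nodes(1, 2))   # 2 data nodes, but a single replica gives no redundancy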
Therefore, in this paper, an efficient load balancing technique is presented in detail along with experimental results obtained in practical situations. The rest of the paper is organized as follows: Section 2 describes the proposed work; Section 3 presents the experimental results, including system performance and test results; Section 4 concludes the paper.

2. Proposed Work

Figure 3. Proposed Framework of Cloud using Load Balancing Technique

In the proposed Cloud, a heterogeneous and robust network environment has been established by setting up a basic system of databases and servers. It is ensured that the databases and the servers within the Cloud communicate without errors and without delays of any kind. At the same time, a fault-tolerant Cloud system offering "24 x 7" online server-based services has been achieved. A number of servers is therefore used to control each activity in the Cloud; typically, several servers control a particular Web-based activity. Similarly, database activities are controlled by a set of servers using a replication technique to avoid any type of system failure in a real-time scenario. If one server fails, another server from the same set handles the particular case, so the overall Cloud network remains available at all times.

A set of network managers controls the load balancing of the worker systems. Each worker is connected to a set of application servers which are responsible for executing the actual user queries. The goal of setting up a number of servers for each activity is to meet the huge demand for services arriving from client devices at precisely the same time. A Web server or a database server has a maximum limit, determined by its system configuration, on the number of user queries it can process at any instant. So, in order to serve a huge number of responses for a particular service, more servers are used to execute requests in minimal time, acting as a balancing factor for the user requests.

Manager nodes, each having a specific serial number, are connected to each other for controlling system failure. A particular manager node controls the overall distribution of user requests for the specific applications at any particular time. All other managers are then in a listening state; they do not react unless they receive a particular error signal from the active manager. Therefore, only one manager is responsible for taking decisions for all activities at any particular instant. When the active manager receives a request, it searches its mapping table to find the specific server(s) that will execute it. Initially, the user does not know which server is free or busy, so this type of generic structure is required to handle and stabilize the whole system network in a balanced way.
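The active/standby manager behaviour just described can be sketched as follows; this is a hedged illustration in Python, and all class, method, and server names are assumptions rather than the authors' implementation:

class ManagerNode:
    # Illustrative model of a manager node with a serial number and a mapping table.
    def __init__(self, serial, mapping_table):
        self.serial = serial
        self.mapping_table = mapping_table    # application -> candidate servers
        self.active = (serial == 1)           # assume the lowest serial starts as active

    def handle(self, application):
        if not self.active:
            return None                       # standby managers only listen
        servers = self.mapping_table.get(application, [])
        if not servers:
            raise LookupError("no servers mapped for " + application)
        return servers[0]                     # selection policy kept abstract here

    def on_error_signal(self, failed_serial):
        # A standby manager becomes active when the active manager signals failure.
        if not self.active and failed_serial == self.serial - 1:
            self.active = True

table = {"web": ["tomcat-01", "tomcat-02"], "db": ["mysql-01"]}
managers = [ManagerNode(1, table), ManagerNode(2, table)]
print(managers[0].handle("web"))    # active manager answers: tomcat-01
managers[1].on_error_signal(1)      # manager 2 takes over if manager 1 fails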

The manager node then assigns the server(s) for the particular request/query using specific load balancing strategies. In this approach, two levels of manager nodes are used. The first level performs general load balancing of user requests between HTTP servers and Tomcat servers. The second level performs load balancing of database-related queries between database servers and data nodes.

Figure 3 is a pictorial representation of the proposed Cloud showing the connections between Physical Machines, Data Storages, and the VM Network. The VM Network consists of Web servers, MySQL-based Data Managers, MySQL-based Data Servers, and MySQL-based Data Nodes. A virtual machine network has been utilized in the design to ease interactive operations, and the related virtual machines (servers) of a particular physical machine can be easily identified from this figure. The target of handling 1000 to 2000 user requests per second has been achieved using only 3 physical machines of the high-end cluster, which has 100 nodes (servers) in total. Therefore, if all 100 nodes were used with the proposed methodology, even higher efficiency could be achieved. The system network has been tuned as per requirements to obtain this benefit using the least number of machines.

Figure 4. Storage View of Proposed Cloud Structure with SCSI Adapter and SCSI Volume (Local ATA disk) connected by Data Center

Figure 4 shows the orientation of the Cloud structure focused mainly on data storage. The data center is the hub which controls information propagation between different nodes; here a node may be a server, a data storage unit, a SCSI device, or a virtual machine. If a storage system fails to transmit data through a specific path, the manager system redirects the data along another path based on a runtime decision. For simplicity, only three IP addresses are considered in this paper to show all types of activities.

Interactions between the separate modules handling users' requests over the Web in the proposed Cloud structure are shown in Figure 5. A load balancing factor is used to balance the load on the Web servers using a common Internet Protocol (IP) address in the Domain Name Server (DNS) of the proposed Cloud. Therefore, several Web servers (physical or logical machines) can be connected using the mapping between the external IP address and the available internal IP addresses of the Cloud.
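The two load-balancing levels introduced at the beginning of this section can be summarized in the following hedged sketch; choose_web_server and choose_mysql_server stand in for the first- and second-level balancers, and every name here is illustrative rather than taken from the authors' code:

def handle_request(request, choose_web_server, choose_mysql_server):
    # Level 1: route the HTTP request to a Web/Tomcat server.
    web_server = choose_web_server(request)
    # Level 2: route any database work to a MySQL server.
    if request.get("needs_db"):
        return {"web": web_server, "db": choose_mysql_server(request)}
    return {"web": web_server}

route = handle_request({"path": "/search", "needs_db": True},
                       lambda r: "tomcat-01",
                       lambda r: "mysql-02")
print(route)   # {'web': 'tomcat-01', 'db': 'mysql-02'}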

Web servers are further connected to Tomcat servers using different kinds of workers: (i) the load balance worker, (ii) the actual worker, and (iii) the status worker. The load balance worker uses specific methods to maintain a proper balance and a better distribution of work among the actual workers through specific hosts, ports, and load balance factors. Actual workers execute and control the information using a specific host. The status worker shows the overall activities, acting as a status manager of the proposed Cloud.

After a Tomcat server has been selected, another load balancer module selects a specific MySQL server from the available list based on the real-time load of that server. MySQL servers are connected to data nodes, which handle the data transactions that store data into, or fetch data from, the physical storage. The MySQL manager is responsible for monitoring the MySQL servers and MySQL data nodes for synchronization. MySQL manager nodes are connected to each other for controlling system failure: if one MySQL manager fails, the next manager automatically takes care of the situation, and in the meantime the crashed manager can be fixed and rebooted as required. MySQL data nodes save data into repositories, maintaining proper backup copies in different partitions of node groups as primary and/or backup replicas. The typical boot-up sequence of the MySQL cluster is as follows: (i) Manager Nodes; (ii) Data Nodes; (iii) MySQL servers.

Figure 5. Interactions between separate modules of proposed Cloud framework for handling users' queries

Figure 6. VM Resources of Physical Machine (10.10.10.150)

Figure 6, Figure 7, and Figure 8 are pictorial representations of the virtual resources of each of the considered physical machines in the cluster, identified by their specific IP addresses. Each physical machine is utilized as a set of servers of different kinds.
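Returning to the second-level balancer described above, which picks the MySQL server with the lowest real-time load, a minimal hedged sketch might look as follows; the load values and server names are assumptions, since the paper does not specify the load metric:

# Pick the MySQL server currently reporting the lowest load (illustrative).
def pick_mysql_server(servers, current_load):
    return min(servers, key=lambda s: current_load[s])

loads = {"mysql-01": 0.72, "mysql-02": 0.31}                # hypothetical runtime samples
print(pick_mysql_server(["mysql-01", "mysql-02"], loads))   # -> mysql-02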

Figure 7. VM Resources of Physical Machine (10.10.10.152)

Figure 8. VM Resources of Physical Machine (10.10.10.153)

The next section presents the experimental results obtained with the proposed Cloud framework.

3. Experimental Results

The experimental section deals with a variety of system parameters from different perspectives to exhibit the performance of the proposed system. One month of observation of the proposed Cloud system network is depicted with the help of distinct graphs dealing with space, memory, usage, time, virtual machines, and physical machines. "Jmeter" and "ab" are two well-known benchmarking tools freely available on the Internet; they are used here to measure the performance of the proposed Cloud.

3.1. System Performance

Figure 9 shows the summary for the proposed data stores in terms of GB. In total, three data stores have been created and utilized in the proposed framework, as shown in this figure. Further, Figure 10 and Figure 11 relate to CPU usage in the proposed Cloud system network; specific CPUs can be monitored in terms of '%' and/or 'MHz'.

Figure 9. Monthly Summary for proposed Data stores

Figure 10. Monthly Summary of CPU (%) Usage for a Physical Server Machine in Cloud

Figure 11. Monthly Summary of CPU (MHz) Usage for a Physical Server Machine in Cloud

Different viewpoints of disk usage are shown in Figure 12, Figure 13, and Figure 14. Disk (KBps) usage for a physical server machine in the Cloud is depicted in Figure 12. Different latencies are compared in Figure 13; the highest latency value is highlighted to indicate the maximum waiting time of a particular process under the queue-based scheduling system. Disk usage (Number-Top 10) is another vital piece of information, both for a user and for the service provider, as an indicator of performance during a specific period (refer to Figure 14).

Figure 12. Monthly Summary of Disk (KBps) Usage for a Physical Server Machine in Cloud

Figure 13. Monthly Summary of Disk (ms) Usage for a Physical Server Machine in Cloud

Figure 14. Monthly Summary of Disk (Number-Top 10) Usage for a Physical Server Machine in Cloud

Figure 15. Monthly Summary of Memory (%) Usage for a Physical Server Machine in Cloud

Figure 16. Monthly Summary of Memory (Balloon) Usage for a Physical Server Machine in Cloud

Figure 17. Monthly Summary of Memory (MBps) Usage for a Physical Server Machine in Cloud

Figure 18. Monthly Summary of Network (Mbps) Usage for a Physical Server Machine in Cloud

Figure 19. Space Utilization for Proposed Cloud Data stores

Figure 15 shows that the maximum memory (%) usage for a physical server over a month lies between 24% and 25%. Figure 16 and Figure 17 represent memory usage (balloon and MBps) for a physical server in the Cloud. Memory utilization in the proposed Cloud is negligible while handling different users' requests over the observed period.

Figure 18 summarizes the network usage for a physical server in the Cloud. A maximum of 42 Mbps of bandwidth is required in this case; therefore, high bandwidth is not required for this load balancing approach. Figure 19 represents the space utilization of the proposed Cloud data stores. Ample free space remains available after applying load balancing.

Figure 20. Virtual Machine operations within Cloud in one month

Figure 20 shows the measurements of the VM power-on count, VM power-off count, vMotion count, and storage vMotion count. Virtual machine operations within the proposed Cloud are evaluated using this type of measurement.

3.2. Test Results

Several test results are presented in this sub-section to demonstrate the advantages of the proposed Cloud in different aspects. In all the graphs, the quality of load balancing is illustrated using different parameters of the Cloud system network.

Figure 21. Summary Report of Jmeter: Time vs. Number of Samples

Figure 22. Summary Report of Jmeter: Throughput per Second vs. Number of Samples

Figure 21 gives the graphical view of time versus the number of samples using the "Jmeter" tool. Figure 22 shows the graph of throughput per second versus the number of samples used in the testing. Figure 23 is the performance graph of standard deviation versus the number of samples. Figure 24 and Figure 25 are parts of the aggregate reports produced by "Jmeter": Figure 24 plots the median against the number of samples, whereas in Figure 25 the 90% line is the main focus.

Figure 23. Summary Report of Jmeter: Standard Deviation vs. Number of Samples

Figure 24. Aggregate Report of Jmeter: Median vs. Number of Samples

Figure 25. Aggregate Report of Jmeter: 90% Line vs. Number of Samples

The "ab" benchmark tool is used in this paper to show the performance of the "Apache" Hypertext Transfer Protocol (HTTP) server. It is designed to give an impression of how the current Apache installation performs and shows how many requests per second the installation is capable of serving. The percentage of requests served using the "ab" tool is shown in Figure 26, which presents a graph of "Time" vs. "Concurrent Load".

Figure 26. Percentage of requests served using "ab" standard
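The paper does not reproduce its test plans, but runs of this kind are commonly launched from a controller host as sketched below; the file names, request counts, and target URL are assumptions, while the command-line options shown (JMeter's non-GUI mode and ab's request/concurrency flags) are the tools' standard ones:

import subprocess

# Apache JMeter in non-GUI mode: -n (no GUI), -t (test plan), -l (results file).
subprocess.run(["jmeter", "-n", "-t", "cloud_plan.jmx", "-l", "jmeter_results.jtl"], check=True)

# ApacheBench: -n total requests, -c concurrent clients (1000 concurrent users, as in the Linux tests).
subprocess.run(["ab", "-n", "20000", "-c", "1000", "http://10.10.10.150/index.html"], check=True)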

4. Conclusion

A load balancing strategy has been proposed for the heterogeneous environment of a Cloud-based network. Load balancers have been utilized to control distinct segments of the Cloud without any failure. The proposed strategy successfully controls the propagation of user-oriented requests towards specific sub-networks of the Cloud. High performance is achieved using a monitoring strategy that maintains a set of computers as controllers. Users' requests have been executed in a concurrent manner. Experimental results have been demonstrated using the proposed Cloud architecture in terms of space, CPU, disk, memory, and network.

5. Acknowledgment

The work is supported by the Introduction of Innovative R&D Team Program of Guangdong Province (No. 2011D024), the Shenzhen Innovative R&D Team Program (Peacock Plan) (No. KQE201106020031A), and the Shenzhen Science and Technology Plan (No. JC201005280651A).

References

[1] http://en.wikipedia.org/wiki/web_server
[2] A. AuYoung, B. Chun, A. Snoeren, A. Vahdat, Resource allocation in federated distributed computing infrastructures, In Proceedings of the 1st Workshop on Operating System and Architectural Support for the On-demand IT Infrastructure (OASIS 2004), Boston, USA, October 2004.
[3] Scott Pakin, The Design and Implementation of a Domain-Specific Language for Network Performance Testing, IEEE Transactions on Parallel and Distributed Systems, Vol. 18, No. 10, October 2007.
[4] https://en.wikipedia.org/wiki/web_page
[5] Apostolos Antonacopoulos, Web Document Analysis: Challenges and Opportunities, World Scientific, 2003.
[6] Bhuvan Urgaonkar, Prashant J. Shenoy, Sharc: Managing CPU and Network Bandwidth in Shared Clusters, IEEE Transactions on Parallel and Distributed Systems, Vol. 15, No. 1, January 2004.
[7] K. Keahey, I. Foster, T. Freeman, X. Zhang, Virtual workspaces: Achieving quality of service and quality of life in the Grid, Scientific Programming, 13(4):265-275, October 2005.
[8] D. Benslimane, S. Dustdar, A. Sheth, Services Mashups: The New Generation of Web Applications, IEEE Internet Computing, 10(5):13-15, 2008.
[9] Benjamin Eckart, Xubin He, Qishi Wu, Changsheng Xie, A Dynamic Performance-Based Flow Control Method for High-Speed Data Transfer, IEEE Transactions on Parallel and Distributed Systems, Vol. 21, No. 1, January 2010.
[10] Anirban Kundu, Ruma Dutta, Debajyoti Mukhopadhyay, Generation of SMACA and its Application in Web Services, 9th International Conference on Parallel Computing Technologies (PaCT 2007), Pereslavl-Zalessky, Russia, Lecture Notes in Computer Science, Springer-Verlag, Germany, September 3-7, 2007.
[11] N. Nehra, R. Patel, Distributed parallel resource co-allocation with load balancing in grid computing, Journal of Computer Science and Network Security, 2007.
[12] S. Chau, A. Wai, C. Fu, Load balancing between computing clusters, 4th Conference on Parallel and Distributed Computing Applications and Technologies, 2003.
[13] V. Kun-Ming, V. Yu, C. Chou, Y. Wang, Fuzzy-based dynamic load-balancing algorithm, Journal of Information, Technology and Society, 2004.
[14] Peter Wayner, Cloud versus cloud: A guided tour of Amazon, Google, AppNexus and GoGrid, InfoWorld, July 21, 2008.
[15] D. E. Irwin, J. S. Chase, L. E. Grit, A. R. Yumerefendi, D. Becker, K. Yocum, Sharing networked resources with brokered leases, In Proceedings of the 2006 USENIX Annual Technical Conference (USENIX 2006), Boston, USA, June 2006.
[16] A. Weiss, Computing in the Clouds, NetWorker, 11(4):16-25, December 2007.
[17] R. Buyya, C. S. Yeo, S. Venugopal, Market oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities, In Proceedings of the 10th IEEE International Conference on High Performance Computing and Communications, 2008.
[18] http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-overview.html
[19] http://wiki.kaavo.com/index.php/mysql_cluster
[20] Dave Dice, Ori Shalev, Nir Shavit, Transactional locking II, In Proceedings of the International Symposium on Distributed Computing, Springer-Verlag, 2006.