Towards a Load Balancing in a Three-level Cloud Computing Network

Shu-Ching Wang, Kuo-Qin Yan* (Corresponding author), Wen-Pin Liao and Shun-Sheng Wang
Chaoyang University of Technology, Taiwan, R.O.C.
scwang@cyut.edu.tw, kqyan@cyut.edu.tw, s9833901@cyut.edu.tw, sswang@cyut.edu.tw

Abstract—Network bandwidth and hardware technology are developing rapidly, resulting in the vigorous development of the Internet. Cloud computing, a new concept that uses low-power hosts to achieve high reliability, has become a significant issue: it is an Internet-based development in which dynamically scalable, often virtualized resources are provided as a service over the Internet. Cloud computing refers to a class of systems and applications that employ distributed resources to perform a function in a decentralized manner; it utilizes the computing resources (service nodes) on the network to facilitate the execution of complicated tasks that require large-scale computation. The selection of the nodes that execute a task must therefore be considered, and to exploit the effectiveness of the resources, nodes have to be properly selected according to the properties of the task. In this study, a two-phase scheduling algorithm for a three-level cloud computing network is advanced. The proposed scheduling algorithm combines the OLB (Opportunistic Load Balancing) and LBMM (Load Balance Min-Min) scheduling algorithms to achieve better execution efficiency while maintaining the load balance of the system.

Keywords—Distributed System; Cloud Computing; Scheduling; Load Balancing

I. INTRODUCTION

Today, network bandwidth and hardware technology advance continuously to keep pace with the vigorous development of the Internet. The new concept of cloud computing allows for more applications for Internet users [2,9,10]. Cloud computing currently uses many commodity nodes that cooperate to perform a specific service together.
In addition, Internet applications are continuously enhanced with multimedia, and devices in the network system develop rapidly [2,9]. Thus, applications associated with network integration have gradually attracted considerable attention. In a cloud computing environment, users can access operational capability faster through Internet applications [7], and the computer systems have the high stability needed to handle the service requests of many users. As the Internet infrastructure keeps growing, many application services can be provided on the Internet. In a distributed computing system, components allocated to different places or separate units are connected so that they may collectively be used to greater advantage [6]. In addition, cloud computing has greatly encouraged distributed system design and application to support user-oriented service applications [10]. Many applications of cloud computing increase user convenience, such as YouTube [10]; the Internet platform of cloud computing provides many applications for users, such as video, music, etc. Therefore, how to exploit the advantages of cloud computing so that each task obtains its required resources in the shortest time is an important topic. In this study, a two-phase load balancing algorithm that combines the OLB and LBMM scheduling algorithms is proposed. Moreover, an agent mechanism is used to collect the related node information, to achieve efficient resource utilization and enhance work efficiency.

The rest of this paper is organized as follows. The literature review is discussed in Section 2. Section 3 presents the proposed LBMM scheduling algorithm in detail. Section 4 presents the proposed method. Section 5 gives an example executed by the proposed scheduling algorithm. Finally, Section 6 concludes this paper.

978-1-4244-5539-3/10/$26.00 © 2010 IEEE

II. LITERATURE REVIEW

Cloud computing is a kind of distributed computing where massively scalable IT-related capabilities are provided to multiple external customers as a service using Internet technologies [13]. Cloud providers have to build a large, general-purpose computing infrastructure and virtualize that infrastructure for different customers and services in order to provide multiple application services. The ZEUS company develops software that lets a cloud provider easily and cost-effectively offer every customer a dedicated application delivery solution [14]. The ZXTM software is much more than a shared load balancing service: it offers a low-cost starting point, with a smooth and cost-effective upgrade path to scale as the service grows [12,14]. The network framework provided by ZEUS can be utilized to develop new cloud computing methods [14], and it supports the network topology of cloud computing used in our study [12,13]. According to the ZEUS network framework, and in consequence of the properties of the cloud computing structure, a three-level hierarchical topology is adopted in our framework. Based on complete information about each node in the cloud computing environment, the performance of the system can be managed and enhanced.

Several methods can collect the relevant information of a node, including broadcasting, centralized polling, and agents. The agent is a technology used extensively in recent years. It has inherent navigational autonomy and can ask to be sent to other nodes. In other words, an agent does not have to be installed on every node it visits; it can collect related information about each node participating in the cloud computing environment, such as CPU utilization, remaining CPU capability, remaining memory, and transmission rate. Therefore, when an agent is dispatched, it needs no central control or permanent connection, and the traffic required to maintain the system can be reduced [12]. In this study, the agent is used to gather the related information, reducing resource waste and cost.

Each scheduling algorithm has different characteristics. Opportunistic Load Balancing (OLB) attempts to keep every node busy and therefore does not consider the present workload of each computer. OLB assigns each task, in free order, to the currently available node. Its advantage is that it is quite simple and reaches load balance; its shortcoming is that it does not consider the expected execution time of each task, so the overall completion time (makespan) is very poor [2,8]. In other words, OLB dispatches unexecuted tasks to currently available nodes in random order, regardless of a node's current workload [1,4]. Minimum Execution Time (MET) assigns each job, in arbitrary order, to the node on which it is expected to execute fastest, regardless of the current load on that node.
MET tries to find good job-node pairings, but because it does not consider the current load on a node, it often causes load imbalance between the nodes and does not adapt well to heterogeneous computer systems [1,3]. Minimum Completion Time (MCT) assigns each job, in arbitrary order, to the node with the minimum expected completion time for the job [7]. The completion time is simply the expected time to compute (ETC) plus the node's current load; this is a much more successful heuristic because both execution times and node loads are considered [1,3]. The Min-Min scheduling algorithm establishes the minimum completion time for every unscheduled job, and then assigns the job with the overall minimum completion time to the node that offers it that time [9]. Min-Min uses the same mechanism as MCT; however, because it considers the minimum completion time of all jobs at each round, it can schedule the job that will increase the overall makespan the least, and therefore it balances the nodes better than MCT. In short, the spirit of Min-Min is to always allocate resources along the minimum completion times. Because the OLB scheduling algorithm is very simple, easy to implement, and keeps each computer busy, our research uses OLB to assign jobs and to divide tasks into subtasks in a three-level cloud computing network. In addition, to balance the working load of each computer in the system, the Min-Min scheduling algorithm is improved in this investigation so that the execution time of each node is efficiently reduced.

III. LBMM SCHEDULING ALGORITHM

In this study, the Load Balance Min-Min (LBMM) scheduling algorithm is proposed, taking the characteristics of the Min-Min scheduling algorithm as its foundation. Min-Min is designed to minimize the completion time of all works.
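As a concrete illustration of the Min-Min mechanism just described, here is a minimal Python sketch (the task and node data are hypothetical, not from the paper): each round it computes, for every unscheduled job, the node giving the minimum completion time (node load plus execution time), then commits the job with the smallest such time first.

```python
# Sketch of the Min-Min heuristic described above (illustrative only).
# etc[i][j] = expected time to compute task i on node j; load[j] = node
# j's current backlog. Completion time = node load + task execution time.

def min_min(etc, num_nodes):
    load = [0.0] * num_nodes          # current finish time of each node
    schedule = {}                     # task index -> chosen node
    unscheduled = set(range(len(etc)))
    while unscheduled:
        # For every unscheduled task, find its minimum completion time,
        # then keep the task whose minimum is smallest overall.
        best = None                   # (completion_time, task, node)
        for t in unscheduled:
            ct, node = min((load[j] + etc[t][j], j) for j in range(num_nodes))
            if best is None or ct < best[0]:
                best = (ct, t, node)
        ct, t, node = best
        schedule[t] = node            # commit the winning task
        load[node] = ct               # its node is now busy until ct
        unscheduled.remove(t)
    return schedule, max(load)        # assignment and resulting makespan

# Hypothetical example: 3 tasks on 2 nodes.
etc = [[3, 5],
       [2, 4],
       [6, 1]]
schedule, makespan = min_min(etc, 2)
```

Because the heaviest task (task 2) is cheapest on node 1, Min-Min isolates it there while the other two tasks share node 0, which is exactly the makespan-friendly behavior the paragraph above attributes to the heuristic.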
However, the biggest weakness of the Min-Min scheduling algorithm is that it only considers the completion time of each task at a node and does not consider the workload of each node. Therefore, some nodes may always be busy while others remain idle. The proposed LBMM improves the load imbalance of Min-Min and effectively reduces the execution time of each node. In addition, a multi-level hierarchical network topology can decrease the data store cost [5], although a higher level count increases the cost of network management. Therefore, in our study, a three-level hierarchical framework (as shown in Figure 1) is used. The third level consists of the service nodes that execute subtasks. The second level consists of the service managers that divide a task into logically independent subtasks. The first level is the request manager that assigns each task to a suitable service manager.

Figure 1. Three-level framework

To reach load balance and decrease the execution time of each node in the three-level cloud computing network, the OLB and LBMM scheduling algorithms are integrated in our study. The progression of the LBMM scheduling algorithm is shown in Figure 2. The concept of LBMM is that each service manager distributes its task as subtasks to be executed on suitable service nodes. LBMM considers the execution time of each subtask on each service node (N11, N12, ...), computed via the agent. According to the information gathered by the agent, each service manager chooses, for each subtask, the service node with the shortest execution time and records it in the Min-time array; the Min-time array is thus a set of minimal execution times on certain service nodes. The service manager then chooses from the Min-time array the entry with the overall minimum execution time; this means the corresponding subtask on that service node is performed first.
Therefore, that subtask is distributed to that service node. Once a subtask has been distributed to a service node for execution, it is deleted from the subtask queue. Meanwhile, the Min-time array is rearranged, and the chosen service node is put at the end of the Min-time array. In our study, an integrated scheduling algorithm is provided that combines the OLB and LBMM scheduling algorithms in a three-level cloud computing network. Through the properties of the proposed integrated scheduling algorithm, both the load balance and the execution time of each node are considered.

1: According to the requirements of subtask Xi, choose the service node with the minimal execution time from the C service nodes to form the Min-time service node set, where i = 1, ..., X, X is the total number of subtasks, and C is the total number of nodes in the executable service node set for the subtask.
2: Choose from the Min-time set the service node with the shortest execution time.
3: Assign the subtask to that service node.
4: Remove the executed subtask from the set of tasks that still need to be executed.
5: Rearrange the Min-time array, putting the chosen service node at the end.
6: Repeat steps 1 to 5 until all subtasks have been executed completely.

Figure 2. The progression of the LBMM scheduling algorithm

IV. THE PROPOSED METHOD

There are several heterogeneous nodes in a cloud computing system; each node has a different capability to execute tasks. Hence, considering only the remaining CPU of a node is not enough when choosing a node to execute a task, and selecting an efficient node to execute a task is very important in cloud computing. Each task may have different characteristics and thus need specific resources; for instance, organism sequence assembly probably has a large requirement for remaining memory [11].
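Steps 1 to 6 of Figure 2 can be sketched in Python. This is a simplification: the threshold handling described in this section is omitted, and "rearranging the Min-time array" (step 5) is modeled by deprioritizing the node that was just used; the data in the demo are illustrative, not the paper's.

```python
# Sketch of the Figure 2 progression. Simplifications: the Section IV
# threshold handling is omitted, and putting the chosen service node at
# the end of the Min-time array is modeled by deprioritizing it.

def lbmm(exec_time):
    """exec_time: {subtask: {node: time}}. Returns {subtask: node}."""
    assignment = {}
    pending = set(exec_time)
    recently_used = []                         # nodes pushed to the back
    while pending:
        # Step 1: for each pending subtask, its fastest available node.
        min_time = {}
        for s in pending:
            options = {n: t for n, t in exec_time[s].items()
                       if n not in recently_used} or exec_time[s]
            node = min(options, key=options.get)
            min_time[s] = (options[node], node)
        # Step 2: the subtask with the overall shortest entry wins.
        s = min(min_time, key=lambda k: min_time[k][0])
        _, node = min_time[s]
        assignment[s] = node                   # step 3: assign it
        pending.discard(s)                     # step 4: remove subtask
        if node in recently_used:
            recently_used.remove(node)
        recently_used.append(node)             # step 5: node goes last
    return assignment                          # step 6: repeat until done

# Illustrative data (two subtasks, two nodes): A2 wins round one on N12,
# so A1 falls back to N11 in round two.
plan = lbmm({"A1": {"N11": 18, "N12": 14},
             "A2": {"N11": 14, "N12": 11}})
```

The deprioritization is what spreads subtasks over nodes: even though N12 is fastest for both subtasks, only one of them lands there.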
To reach the best efficiency in executing each task, different decision variables are adopted according to the properties of the task; that is, the decision variables are set according to the resources the task requires. In this study, an agent mainly collects the related information of each node participating in the cloud computing system, such as remaining CPU capability, remaining memory, and transmission rate. After these data are collected, they are provided to the manager to assist it in maintaining the load balance of the system. The factors are defined as follows:

V1 = the remaining CPU capability;
V2 = the remaining memory; and
V3 = the transmission rate.

To let the manager select appropriate nodes effectively, all of the nodes in the system (both service managers and service nodes) are evaluated against a threshold derived from the resources demanded by the task. A service manager that passes the threshold of service manager is considered effective and becomes a candidate effective node for the manager. A service node that passes the threshold of service node is considered effective and becomes a candidate effective node for its service manager.

A. Threshold of service manager

The cloud computing environment is composed of heterogeneous nodes whose properties may greatly differ: the computing capability provided by the CPU, the available size of memory, and the transmission rate are all different. In addition, since cloud computing utilizes the resources of each node, the available resources of a node may vary when it is busy. From the perspective of task completion time, the available CPU capacity, the available size of memory, and the transmission rate are the three decisive factors for the duration of execution. Thus, in our study, these three factors were taken as the threshold for evaluating service managers.
An example of a specific threshold is as follows [11]:
1) remaining CPU capability ≥ 490 MB/s;
2) remaining memory ≥ 203698 KB;
3) transmission rate ≥ 7.09 MB/s.

B. Threshold of service node

When a service manager passes the threshold of service manager for a task's requirements, the service nodes managed by that service manager have the capability to execute the task. However, in a cloud computing environment the composition of nodes is dynamic; every node may enter a busy state at any time, which increases the execution time of the task and lowers performance. Thus, the threshold of service node is used to choose the better service nodes. The progression is divided into four steps:

1: Calculate the average execution time of each subtask.
2: If the required execution time of a subtask is less than or equal to the average execution time, carry out the subtask normally.
3: If the required execution time of a subtask is greater than the average execution time, set its execution time to ∞ (the execution time is too long to be considered). The other nodes that have already executed subtasks re-enter the system to participate in the execution of subtasks.
4: Repeat steps 1 to 3 until all subtasks have been executed completely.

In our research, tasks can be assigned quickly by the proposed integrated scheduling algorithm, and effective service nodes can be chosen by the thresholds in the three-level cloud computing network. The proposed two-phase scheduling algorithm integrates OLB and LBMM to assist in selecting effective service nodes. First, a queue is used to store the tasks that need to be carried out by the manager (N0); then the OLB scheduling algorithm, within the threshold of service manager, is used to assign each task to the service managers in the second layer (N1, N2, N3, N4, N5). However, each consigned task has different characteristics, so the restriction on node selection is also different. An agent is used to collect the related information of each node. According to the properties of each task, the threshold value of each node is evaluated and a service node is assigned. Moreover, to avoid some nodes' execution times being so long that they affect system performance, the threshold of service node is used to choose suitable service nodes to execute subtasks.

V. AN EXAMPLE EXECUTED BY USING THE PROPOSED SCHEDULING ALGORITHM

In this section, an example executed by the proposed two-phase scheduling algorithm in a three-level cloud computing network is given. The proposed scheduling algorithm combines the OLB and LBMM scheduling algorithms to achieve better execution efficiency while maintaining the load balance of the system. A queue is used to store the tasks that need to be carried out by the manager. In the first phase, the OLB scheduling algorithm is used by the manager to assign tasks to the service managers. In the second phase, the LBMM scheduling algorithm is used by each service manager to choose suitable service nodes to execute subtasks. The assumptions of the proposed scheduling algorithm are as follows:
1) The transmission time can be obtained.
2) The time each job needs to execute can be predicted [11].
3) Each task can be divided into several independent subtasks, and each subtask can be executed completely by the assigned service node.
4) The number of service nodes is greater than or equal to the number of subtasks.

An example of five tasks to be processed is given to illustrate the two-phase scheduling algorithm in a three-level cloud computing network.

1: Tasks A, B, C, D and E, which need to be carried out, are collected and deposited in the working queue by the manager node N0, as shown in Figure 3. An agent is used to collect the related information of each node, as shown in TABLE I.
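The service-manager threshold test amounts to requiring every measured resource to meet the task's demand. A minimal sketch (the function name is ours; the node values follow the paper's TABLE I, and the demand is the task-A threshold used in step 3 of the example):

```python
# Sketch of the service-manager threshold test: a node qualifies only if
# every measured resource (V1 CPU, V2 memory, V3 transmission rate)
# meets or exceeds the task's demand.

def passes_threshold(resources, demand):
    """True when every demanded resource is satisfied."""
    return all(resources[k] >= demand[k] for k in demand)

# Node values follow TABLE I; the demand is task A's threshold from the
# example (CPU >= 500 MB/s, memory >= 256 MB, rate >= 20 MB/s).
nodes = {
    "N1": {"cpu": 503, "mem": 456, "rate": 29.04},
    "N3": {"cpu": 490, "mem": 203, "rate": 7.09},
    "N4": {"cpu": 750, "mem": 230, "rate": 23.25},
}
task_a = {"cpu": 500, "mem": 256, "rate": 20}

eligible = sorted(n for n, r in nodes.items()
                  if passes_threshold(r, task_a))
# N3 fails on CPU and memory, N4 fails on memory; only N1 qualifies.
```

A node must clear all three requirements at once, which is why N4's high CPU does not compensate for its short memory.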
According to the properties of each task, each node is evaluated by the request manager using the threshold of service manager.

A B C D E
Figure 3. Working queue

TABLE I. RELATED INFORMATION OF EACH NODE

Node | Remaining CPU (MB/s) | Remaining memory (MB) | Transmission rate (MB/s)
N1 | 503 | 456 | 29.04
N2 | 250 | 350 | 12.46
N3 | 490 | 203 | 7.09
N4 | 750 | 230 | 23.25
N5 | 128 | 322 | 21.03

2: According to the threshold of service manager, the request manager dispatches each task to a suitable service manager using the OLB scheduling algorithm. Therefore, task A can be assigned to N1, N2, N3, N4, or N5, and task B can be assigned to any of N1, ..., N5 except the service manager node that has already been given task A, and so on.

3: Suppose the threshold of service manager for task A is:
1) remaining CPU capability ≥ 500 MB/s;
2) remaining memory ≥ 256 MB;
3) transmission rate ≥ 20 MB/s.
According to the information collected by the agent in TABLE I, task A will be assigned to node N1, the only node that satisfies all three requirements.

4: When a task is assigned to a service manager, the task is divided into several independent subtasks by the logic unit. For example, task A is divided into four subtasks.

5: The service manager computes the execution time of each subtask on the different service nodes using LBMM, as shown in TABLE II. If subtask A1 has its minimum execution time at service node N12, then (A1, N12) = 14 s is written into the Min-time array. The Min-time array represents the set of minimum execution times, as shown in equation (1).

TABLE II. EXECUTION TIME OF EACH SUBTASK WITHIN TASK A AT DIFFERENT SERVICE NODES BEFORE DISPATCHING (FIRST)

Subtask | N11 | N12 | N13 | N14 | Threshold (Avg)
A1 | 18 | 14 | 38 | 26 | 24
A2 | 14 | 11 | 24 | 18 | 17
A3 | 26 | 18 | 66 | 42 | 41
A4 | 19 | 20 | 24 | 36 | 24.75

Min-Time = {(A1, N12) = 14, (A2, N12) = 11, (A3, N12) = 18, (A4, N11) = 19}    (1)

6: The service manager calculates the threshold value (Avg) of each subtask and compares it with the minimum execution time of each subtask.
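The Min-time array of equation (1) above can be reproduced mechanically from the TABLE II numbers; a short sketch (variable names are ours):

```python
# Rebuild the Min-time array of equation (1) from the TABLE II execution
# times: for each subtask, record its fastest service node and that time.

table2 = {
    "A1": {"N11": 18, "N12": 14, "N13": 38, "N14": 26},
    "A2": {"N11": 14, "N12": 11, "N13": 24, "N14": 18},
    "A3": {"N11": 26, "N12": 18, "N13": 66, "N14": 42},
    "A4": {"N11": 19, "N12": 20, "N13": 24, "N14": 36},
}

min_time = {}
for subtask, times in table2.items():
    node = min(times, key=times.get)           # fastest node per subtask
    min_time[subtask] = (node, times[node])

# Step 7 of the example then dispatches the globally smallest entry.
first = min(min_time, key=lambda s: min_time[s][1])
```

Taking the minimum over this array gives subtask A2 on node N12 (11 s), which is exactly the pairing dispatched first in step 7.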
In this case, the minimum execution time of subtask A1 is 14, less than the threshold of service node (Avg = 24), so the subtask can be executed normally; the minimum execution time of subtask A2 is 11, less than the threshold (17), so it too can be executed normally, and so on.

7: The overall minimum execution time among the subtasks is found by the service manager from the Min-time array. Here the corresponding subtask is A2 and the corresponding service node is N12; therefore, subtask A2 is carried out by node N12. Subtask A2 is deleted from the set of subtasks that still need to be executed, and the execution times of the remaining subtasks are updated, as indicated in TABLE III.

TABLE III. EXECUTION TIME OF EACH SUBTASK WITHIN TASK A AT DIFFERENT SERVICE NODES BEFORE DISPATCHING (SECOND)

Subtask | N11 | N13 | N14 | Threshold (Avg)
A1 | 18 | 38 | 26 | 24
A3 | 26 | 66 | 42 | 41
A4 | 19 | 24 | 36 | 24.75

8: After step 7, subtask A2 has been deleted from the set of subtasks and service node N12 is ranked last. All executed nodes can re-enter the system once all other service nodes have been assigned work. Now the minimum execution time (Min-Time) of each subtask is again compared with its threshold value (Avg) by the service manager. The Min-time set is shown in equation (2).

Min-Time = {(A1, N11) = 18, (A3, N11) = 26, (A4, N11) = 19}    (2)

The minimum entry (A1, N11) is found; the corresponding subtask is A1 and the corresponding service node is N11. Subtask A1 is deleted from the set of subtasks that still need to be executed, and the execution times of the remaining subtasks are updated, as indicated in TABLE IV.

TABLE IV. EXECUTION TIME OF EACH SUBTASK WITHIN TASK A AT DIFFERENT SERVICE NODES BEFORE DISPATCHING (THIRD)

Subtask | N13 | N14 | Threshold (Avg)
A3 | 66 | 42 | 41
A4 | 24 | 36 | 24.75

9: After step 8, subtask A1 has been deleted from the set of subtasks and service node N11 is ranked last. Again the minimum execution time of each subtask is compared with the threshold value (Avg) by the service manager. However, the minimum execution time of subtask A3 exceeds the threshold of service node (42 > 41); hence, its minimum execution time is set to ∞, meaning this subtask cannot be carried out in the best time now. The Min-time set is shown in equation (3). The minimum entry (A4, N13) is found; subtask A4 is deleted from the set of subtasks that still need to be executed, and the execution times of the remaining subtasks are updated, as indicated in TABLE V.

Min-Time = {(A3, N14) = ∞, (A4, N13) = 24}    (3)

TABLE V. EXECUTION TIME OF EACH SUBTASK WITHIN TASK A AT DIFFERENT SERVICE NODES BEFORE DISPATCHING (FOURTH)

Subtask | N14 | Threshold (Avg)
A3 | 42 | 41

10: After step 9, subtask A4 has been deleted from the set of subtasks and service node N13 is ranked last. Because the minimum execution time of subtask A3 still exceeds the threshold of service node, all executed service nodes re-enter the system, as shown in TABLE VI. Finally, service node N12 will execute subtask A3.

TABLE VI.
EXECUTION TIME OF EACH SUBTASK WITHIN TASK A AT DIFFERENT SERVICE NODE BEFORE DISPATCHING (FIFTH) N 11 N N 13 N 14 Threshold (Avg) A 3 26 18 66 42 41 Min-Time = A = 18 (4) 3 To maintain the load balancing of the cloud computing system, the proposed two-phase load balancing algorithm can utilize more better executing efficiency and maintain the load balancing of system. VI. CONCLUSION In this study, the OLB scheduling algorithm is used to attempt each node keep busy, and the goal of load balance can be achieved. However, the proposed LBMM scheduling algorithm that modified from Min-Min scheduling algorithm can make the minimum execution time of each task on cloud computing environment. The goal of this study is to reach load balancing by OLB scheduling algorithm, which makes every node in working state. Besides, in our research, the LBMM scheduling algorithm is also utilized to make the minimum execution time on the node of each task and the minimum whole completion time is obtained. However, the load balancing of three-level cloud computing network is utilized, all calculating result could be integrated first by the secondlevel node before sending back to the management. Thus, the goal of loading balance and better resources manipulation could be achieved. Furthermore, in a generalized case, the cloud computing network is not only static, but also dynamic. On other hand, our proposed method will be extending to maintain and manage when the node is underlying three-level hierarchical cloud computing network in future work. ACKNOWLEDGMENT This work was supported in part by the Taiwan National Science Council under Grants NSC96-2221-E-324-021 and NSC97-2221-E-324 007-MY3. REFERENCES [1] R. Armstrong, D. Hensgen, and T. Kidd, The Relative Performance of Various Mapping Algorithms is Independent of Sizable Variances in Run-time Predictions, 7th IEEE Heterogeneous Computing Workshop (HCW '98), pp. 79-87, 1998. [2] F.M. Aymerich, G. Fenu and S. 
Surcis, An Approach to a Cloud Computing Network, the First International Conference on the Applications of Digital Information and Web Technologies, pp. 113-118, August 2008. [3] T.D. Brauna, et al., A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems, Journal of Parallel and Distributed Computing, Vol. 61, Issue 6, pp. 810-837, 2001. [4] R.F. Freund, et al., Scheduling Resources in Multi-user, Heterogeneous, Computing Environments with SmartNet, 7th IEEE Heterogeneous Computing Workshop (HCW'98), pp. 184-99, 1998. [5] L. Garces-Erice, et al., Hierarchical Peer-to-peer Systems, Proceedings of ACM/IFIP International Conference on Parallel and Distributed Computing (Euro-Par), 2003. [6] F. Halsall, Data Links, Computer Networks and Open Systems. 4th ed., Addison-Wesley Publishers, pp. 1-5, 1995. [7] G., Ritchie and J., Levine, A Fast, Effective Local Search for Scheduling Independent Jobs in Heterogeneous Computing Environments, Journal of Computer Applications, Vol. 25, Issue 5, pp. 1190-1192, 2005. [8] K. Shin, et al., Grapes: Topology-based Hierarchical Virtual Network for Peer-to-Peer Lookup Services, Parallel Processing Workshops Proceedings, pp. 159-164, 2002. [9] A. Vouk, Cloud Computing- Issues, Research and Implementations, Information Technology Interfaces, pp. 31-40, June 2008. [10] L.H. Wang, J. Tao and M. Kunze, Scientific Cloud Computing: Early Definition and Experience, the 10th IEEE International 1

Conference on High Performance Computing and Communications, pp. 825-830, 2008. [11] K.Q., Yan, et al., A Hybrid Load Balancing Policy Underlying Grid Computing Environment. Computer Standards & Interfaces, Vol. 29, Issue 2, pp 161-173, 2007. [] Load Balancing, Load Balancer, http://www.zeus.com/products /zxtmlb/index.html, January 2010. [13] What is Cloud Computing?, http://www.zeus.com/cloud_ computing/cloud.html, January 2010. [14] ZXTM for Cloud Hosting Providers, http://www.zeus.com/cloud_ computing/for_cloud_providers.html, January 2010. 113