Load Balance Scheduling Algorithm for Serving of Requests in Cloud Networks Using Software Defined Networks




Load Balance Scheduling Algorithm for Serving of Requests in Cloud Networks Using Software Defined Networks

Dr. Chinthagunta Mukundha, Associate Professor, Dept. of IT, Sreenidhi Institute of Science & Technology, Hyderabad, India.
P. Gayathri, Associate Professor, Dept. of CSE, KGRCET, Moinabad, India.
Dr. I. Surya Prabha, Professor, Department of IT, Institute of Aeronautical Engineering, Hyderabad, India.

Abstract

Cloud computing has become a popular paradigm for provisioning computing resources, offering services such as infrastructure, platform, software, communication and monitoring. Cloud networks generally operate on zones tied to particular locations: if a zone is near the requesting client, that zone serves the request. As a result, some zones are busy while others sit idle, and the resources provisioned in the cloud zones are not utilized properly. To avoid this problem, this paper proposes a load balance scheduling algorithm for cloud networks using Software Defined Networks (SDN). The algorithm distributes client requests evenly across all the zones available to a particular cloud network. The load balancing algorithm aims to optimize resource utilization by maximizing throughput, minimizing response time, and avoiding overload of any single resource. Software Defined Networks apply software functionality to hardware devices such as switches and routers. Using SDN, the algorithm is placed in the cloud routers, and each router decides the zone for a cloud client based on resource availability in the cloud network.

Keywords: communication, infrastructure, Software Defined Networks, zones, cloud computing, load balancing

Introduction

Given the overhead in cloud networks, load balancing techniques are of significant importance.
Load balancing directly affects application and service availability for cloud clients. A load balancing algorithm aims to take maximum advantage of resources by maximizing throughput, minimizing response time, and avoiding overload of any single resource. To smooth heavy traffic flux in cloud networks and reduce the risk on a single server, many data centers deploy dedicated hardware load balancers to support large numbers of users. These hardware systems are expensive to procure, can be technically challenging to deploy, and may require human intervention to work consistently. To avoid manual intervention in the operation of cloud networks, we turn to software defined networks. Software-defined networking provides an easy, convenient and flexible network flow control method with a nominal investment, reducing cost and increasing benefit for large numbers of cloud clients. It manages data flow through software implementations of switches and routers. When a data flow arrives at a switch or router, a flow table lookup is carried out; flow tables are widely used in Software Defined Networks. For each network flow, the headers and counters are revised when flow changes are required or actions are imposed. By recording header information in a database, an OpenFlow router can process a data flow according to its header records. Based on the SDN model with a centralized controller, an OpenFlow router can be configured with different rules to control network traffic by organizing the header records. This flow control capability makes it possible to define an algorithm that balances the network load. This paper presents a new cloud computing model that deals with load balance in cloud networks through a load balancing scheduling algorithm combined with software defined networks.

The SDN controller can manage all inbound and outbound traffic of each server in the cloud network through the routers. By deploying a dynamically extensible load balancing strategy in the SDN routers, the proposed model reduces packet delay compared with traditional communication networks and guarantees reliability, continuity and timeliness of service for large numbers of cloud customers. In comparison with existing load balancing algorithms, this algorithm addresses issues of high cost, low reliability and limited extensibility. The technique aims to maximize use of the cloud network resources available in the different zones, whereas in traditional cloud networks utilization is not maximized and zones may sit idle depending on client requests. Fig. 1 shows the architecture of the SDN layers with routers as network devices.
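The flow-table lookup described above can be illustrated with a minimal sketch. This is not a real OpenFlow API; the `FlowTable` class, its methods, and the header tuple layout are all illustrative assumptions used only to show how a router matches an incoming flow against recorded header records, revises the counters on a hit, and defers to the controller on a miss.

```python
# Illustrative sketch of an OpenFlow-style flow-table lookup (hypothetical
# names, not a real OpenFlow library): header records map to an action plus
# counters; a miss hands the packet to the centralized controller.

class FlowTable:
    def __init__(self):
        # Each entry maps a header tuple (src, dst, port) to an action
        # plus packet/byte counters.
        self.entries = {}

    def install(self, header, action):
        self.entries[header] = {"action": action, "packets": 0, "bytes": 0}

    def lookup(self, header, size):
        entry = self.entries.get(header)
        if entry is None:
            return "send-to-controller"   # table miss
        entry["packets"] += 1             # revise counters on a hit
        entry["bytes"] += size
        return entry["action"]

table = FlowTable()
table.install(("10.0.0.1", "10.0.0.2", 80), "forward:zone2")
print(table.lookup(("10.0.0.1", "10.0.0.2", 80), 1500))  # forward:zone2
print(table.lookup(("10.0.0.9", "10.0.0.2", 22), 64))    # send-to-controller
```

In a real OpenFlow switch the match would cover many more header fields and support wildcards; a plain dictionary keeps the sketch focused on the lookup/counter-update cycle.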

Figure 1: Architecture of SDN with routers as network devices

The cloud network offers three service models. Infrastructure as a Service offers computing resources such as processing, storage and network bandwidth. Platform as a Service offers a platform on which developers host their applications for testing. Software as a Service offers applications to end customers.

Related Work

The main aim of load balancing has always been to distribute every computing resource efficiently and fairly and to improve resource utilization across the cloud network. Researchers have proposed a variety of static, dynamic and mixed scheduling strategies. In most resource allocation algorithms, if a resource is free in the nearest zone it is allocated to the cloud client; otherwise the request is discarded, irrespective of resource availability in other zones. Some load balancing algorithms allocate a resource if one is available and otherwise place the request in a FIFO queue. In a naive strategy with x services in a software workflow and a time t to execute each service, a set of n virtual computing resources may be allocated, with network bandwidth allocated according to the data transfers required between services. This strategy considers only streamlined execution and serves as a performance baseline. A FIFO strategy assumes that every service can be deployed on every computing resource, so the scheduler may ask any resource to compute any task and redeployment of services is unnecessary; it is considered an optimal strategy. In this case the same bandwidth is allocated to all links in the infrastructure.

Because of data transfer time, the total cost is high when bandwidth is small; as bandwidth increases, both cost and execution time decrease, and optimization is used to approximate the optimal bandwidth. This strategy works only for identical resources, and bandwidth is not optimized between each pair of resources. An optimized strategy divides the execution of the workflow into multiple stages and, at each stage, allocates bandwidth and resources independently; in every stage, a minimization algorithm allocates the optimum number of resources for the services involved in the current stage. An algorithm is required to determine the number of stages. The workflow of services forms a directed acyclic graph (DAG), which is divided into execution stages. A dedicated virtual infrastructure executes these stages; in every stage the infrastructure is reconfigured to deploy the services involved, and resources are allocated based on the number of invocations required for each service. An autonomous agent-based algorithm performs dynamic load balancing using three agents: a load agent, a channel agent and a migration agent. The load and channel agents are static, while the migration agent represents an ant. The load agent manages the data policy, maintains all details of the datacenter, and calculates the load on every available virtual machine (VM) when new tasks are allocated in the datacenter. The load agent is supported by a VM load fitness table, which lists the properties of all virtual machines in the datacenter, such as id, memory, fitness value and load status. Once the load agent has applied the controlling policy, the channel agent applies the transfer, selection and location policies.
On receiving a request from the load agent, the channel agent initiates communication with the migration agents. A migration agent then moves to other datacenters and communicates with each datacenter's load agent to enquire about the status of the VMs present; on receiving the status, it reports back to its parent channel agent. This approach reduces service time and overcomes the problem of overloaded virtual machines. A hybrid scheduling algorithm based on the throttled and equally-spread-current-execution load balancing algorithms initializes all VM allocation statuses to AVAILABLE in a VM state list and uses the size of a hash map list to identify overloaded and underloaded VMs. When a data centre receives a new request to be processed, the datacenter manager queries the load balancer for the next allocation. If the hash map list is smaller than the state list, a VM is allocated; otherwise the load balancer waits until a virtual machine becomes free. When the request has been processed and the datacenter manager receives the cloudlet response, it notifies the load balancer of the VM de-allocation, and the load balancer updates the status of the VM in the VM state list and the hash map list. The algorithm was implemented in CloudAnalyst, and the results show a significant improvement in response time, datacenter processing time and the cost of processing requests compared with the existing load balancing algorithms. A nature-inspired pre-emptive task scheduling scheme for load balancing in cloud networks balances pre-emptive independent tasks on virtual machines based on honey bee foraging behaviour. Pre-emption is based on priority: when a virtual machine is overloaded, the task with the highest priority is moved to an underloaded virtual machine. On arrival at the underloaded virtual machine, the migrated task's priority and required execution time are compared with those of the task already running there. If the migrated task has higher priority and a shorter execution time, the running task is paused and pre-empted until the migrated task finishes executing, after which the pre-empted task returns to execution; otherwise the running task is allowed to finish. This approach works better in a static cloud environment where a high-speed connection is available. At present, many studies on load-based resource scheduling rely on dynamic migration of services to serve more client requests. The Sandpiper system carries out hotspot probing and dynamic monitoring of the system's memory, CPU and network bandwidth utilization, and provides both black-box and white-box resource monitoring methods. Its main focus is determining how to resolve hotspots by remapping resources through migration and how to define hotspot memory. The load-based Distributed Resource Scheduler (DRS) is a tool that balances and distributes computing volume using the available resources in a cloud environment. All of these systems can achieve load balance through dynamic migration, but frequent dynamic migration consumes a large number of resources and can degrade overall system performance. Load balancing in cloud computing helps guarantee that customer needs are met with quick response times, keeps the makespan of the system in a good state, ensures better throughput, conserves energy, and coordinates management of the resource pool across the cloud network.
When a data centre becomes overloaded with processing tasks, load balancing migrates tasks to underutilized data centres for processing. Load balancing helps boost the productivity of a cloud datacenter while maintaining the service level agreement (SLA). Among the benefits of load balancing in cloud computing:

- Resources are available on demand.
- Energy is saved when there are fewer requests.
- Processing time is reduced.
- Resources are utilized efficiently under high or low load.
- Better communication across the network is ensured.
- Cost is low.

Software-Defined Networking (SDN) is an emerging network architecture in which network control is decoupled from forwarding and is directly programmable. Per this definition, SDN has two defining characteristics: decoupling of the control and data planes, and programmability of the control plane. Neither of these signatures of SDN is entirely new in network architecture, as detailed in the following. First, several previous efforts have promoted network programmability. One example is active networking, which attempts to control a network in real time using software; SwitchWare is an active networking solution that allows packets flowing through a network to modify the operations of the network dynamically. Similarly, software routing suites on conventional PC hardware, such as Click, XORP, Quagga and BIRD, attempt to create extensible software routers by making network devices programmable; the behaviour of these devices can be modified by loading different routing software or modifying the existing software. Second, the spirit of decoupling the control and data planes has proliferated over the last decade: all decision making is accomplished by the centralized SDN controller. SDN also facilitates efficient coordination between cellular and Wi-Fi networks.
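The priority-based pre-emption rule of the honey-bee-inspired scheme reviewed above can be sketched in a few lines. This is not the original authors' code: the `Task` type and the convention that a larger number means higher priority are assumptions made for illustration.

```python
# Illustrative sketch (not the original authors' code) of the honey-bee
# pre-emption rule: a task migrated from an overloaded VM pre-empts the
# task running on the under-loaded VM only if it has higher priority AND
# a shorter execution time; otherwise it waits for the running task.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int      # assumed convention: larger value = higher priority
    exec_time: float   # required execution time

def should_preempt(migrated: Task, running: Task) -> bool:
    return (migrated.priority > running.priority
            and migrated.exec_time < running.exec_time)

running = Task("t_run", priority=2, exec_time=8.0)
print(should_preempt(Task("t_mig", 5, 3.0), running))  # True: pre-empt
print(should_preempt(Task("t_mig", 1, 3.0), running))  # False: wait
```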
Proposed System

This paper presents a technique for load balancing in the cloud using software defined networks. By applying software defined networks to cloud networks, we can organize client requests in the cloud effectively. Demand for scheduling mechanisms in cloud computing grows day by day with the cloud's popularity. Scheduling is the process of mapping tasks to available resources on the basis of task characteristics and requirements. It is essential to the efficacious working of the cloud, as many task parameters must be considered for proper scheduling, and the available resources should be utilized efficiently without affecting the cloud's service parameters. Scheduling in the cloud can be generalized into three stages:

1. Resource discovery and filtering: the SDN router discovers the resources available in the cloud; available resources are then filtered based on the client request.
2. Resource selection: a target resource is selected based on parameters of the client task and the resource.
3. Task submission: the task is submitted to the selected resource according to the client request.

Algorithm

1. Let n be the number of requests to access resources in the cloud network.
2. Let z be the number of zones available to the particular cloud network.
3. Each request is allocated to one zone based on zone availability.
4. If all zones are busy, the request is allocated to the zone holding the fewest requests.
5. After each allocation, the Request Allocation Table (RAT) in the router is updated with the new allocation details.
6. The updated Request Allocation Table is forwarded to every router in the network, and each router updates its own copy.
7. All routers operate according to standard network routing technology.

The algorithm above describes the overall working of the load balancing algorithm in software defined network devices, such as the routers and switches in the cloud network.
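Steps 3-5 of the algorithm can be sketched as follows. This is a minimal illustration, assuming the Request Allocation Table is a simple mapping from zones to queued request ids; in a real deployment the updated RAT would then be forwarded to the other routers (step 6), which is not modelled here.

```python
# Minimal sketch of the proposed zone-allocation rule. rat is an assumed
# in-memory Request Allocation Table: dict mapping zone -> queued requests.

def allocate(rat, request):
    free = [z for z, queue in rat.items() if not queue]
    if free:
        zone = free[0]                              # step 3: any free zone
    else:
        zone = min(rat, key=lambda z: len(rat[z]))  # step 4: least-loaded zone
    rat[zone].append(request)                       # step 5: update the RAT
    return zone

rat = {"zone1": ["Req1"], "zone2": [], "zone3": ["Req2", "Req3"], "zone4": ["Req4"]}
print(allocate(rat, "Req5"))   # zone2 (free)
print(allocate(rat, "Req6"))   # zone1 (all zones busy; fewest requests)
```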

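The zone timings analysed in the Simulation section (Table 1 below) can be checked directly: with zone-wise parallel allocation the completion time is the largest per-zone sum, while serving the same requests one after another takes the sum of all service times. The values here are taken from the paper's scenario.

```python
# Checking the simulation timings against Table 1: per-zone service times
# in msec for the two rows of requests.
table = {
    "zone1": [3, 6],   # Req1[3], Req5[6]
    "zone2": [2, 3],   # Req2[2], Req6[3]
    "zone3": [1, 7],   # Req3[1], Req7[7]
    "zone4": [4, 9],   # Req4[4], Req8[9]
}

per_zone = {z: sum(times) for z, times in table.items()}
print(per_zone)                # {'zone1': 9, 'zone2': 5, 'zone3': 8, 'zone4': 13}

makespan = max(per_zone.values())          # zones work in parallel
sequential = sum(per_zone.values())        # one request at a time
print(makespan, sequential)                # 13 35
```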
Figure 2: Resource allocation mechanism in cloud networks using software defined networks

Fig. 2 demonstrates the working of the load balancing scheduling algorithm in cloud networks using software defined networks. Cloud networks are equipped with hardware devices, such as routers and switches, that are software enabled; that is, the routers and switches can run software algorithms and are operated through software interfaces. The routers in the cloud are installed with an algorithm that organizes client requests and allocates cloud resources based on zone availability. When a client sends a request to access resources in the cloud network, the routers accept the request and allocate a resource available in the zones; on each allocation, every router updates its Request Allocation Table, so that all routers hold the same table. If any zone is free, the request is allocated to that zone; otherwise it is allocated, on a First In First Out (FIFO) basis, to the zone with the fewest requests in its queue. For example, Amazon is a cloud service provider offering all types of cloud services (infrastructure, platform and software) on a pay-per-use basis. Amazon operates different zones based on geographic location, such as US, Europe and Asia zones, so that services continue even if failures occur in some zones. In some zones cloud resource usage is high and in others it is low; that is, some zones have low resource utilization and others have high utilization. To distribute the load equally across all the zones available in the cloud, we propose the above load balance scheduling algorithm for cloud networks using software defined networks.

Simulation

Simulations were conducted to evaluate the proposed system in large-scale networks. The proposed system was simulated with sample requests and zones, each with default time stamps, and compared against a normal load balancing scheduling algorithm that does not consider zones in the cloud.

Table 1: Request Allocation Table

S. No | Zone 1  | Zone 2  | Zone 3  | Zone 4
1     | Req1[3] | Req2[2] | Req3[1] | Req4[4]
2     | Req5[6] | Req6[3] | Req7[7] | Req8[9]

In the Request Allocation Table above, the requests are distributed across all the zones available in the cloud, and each request carries a specified service time. In this scenario, completing all client requests takes 9 msec in zone1, 5 msec in zone2, 8 msec in zone3 and 13 msec in zone4, so all zones complete their request processing within a maximum of 13 msec. Under a normal load balance scheduling algorithm that allocates resources without zones, requests are submitted to the cloud one at a time, each after the previous one completes, which takes much longer: 35 msec in our scenario. Even though the cloud supports a multi-tenant architecture for serving multiple requests, at some point the burden on the cloud grows as the number of requests increases. To avoid this drawback, we introduce the new load balancing scheduling algorithm for cloud networks using software defined networks.

Conclusion

In this paper we present a new version of the load balancing scheduling algorithm for cloud networks using software defined networks with zone availability. This technique improves the performance of cloud networks in processing client requests and allocating cloud resources. The relevant terminology and concepts were examined thoroughly and explained.
This paper opens the way to further findings on scheduling techniques in a cloud environment. We have proposed an SDN load balancing scheme, based on variance analysis, that supports dynamic workloads from massive numbers of cloud clients by exploiting the separation of the SDN control and data layers. We have described the concrete algorithm and every part of the load balancing module, tested the method, and reported comparative results in an SDN environment. Based on OpenFlow technology, we use variance analysis to monitor the data traffic of each server port and dynamically redirect traffic.

References

[1] Y. Wu, G. Min, and L. T. Yang, Performance analysis of hybrid wireless networks under bursty and correlated traffic, IEEE Transactions on Vehicular Technology, vol. 62, no. 1, pp. 449-454, 2013.
[2] D. Zhang, H. Huang, J. Zhou, F. Xia, and Z. Chen, Detecting hot road mobility of vehicular Ad Hoc networks, Mobile Networks and Applications, vol. 18, no. 6, pp. 803-813, 2013.
[3] M. Dong, T. Kimata, K. Sugiura, and K. Zettsu, Quality-of-Experience (QoE) in emerging mobile social networks, IEICE Transactions on Information and Systems, vol. 97, no. 10, pp. 2606-2612, 2014.
[4] H. Kim and N. Feamster, Improving network management with software defined networking, IEEE Communications Magazine, vol. 51, no. 2, pp. 114-119, 2013.
[5] T. Dillon, C. Wu, and E. Chang, Cloud computing: issues and challenges, 24th IEEE International Conference on Advanced Information Networking and Applications, 2010.
[6] J. Lejeune, L. Arantes, and J. Sopena, Service level agreement for distributed mutual exclusion in cloud computing, 12th IEEE/ACM International Conference, May 2012, pp. 180-187.
[7] K. Lee, J. Lee, and Y. Yi, Mobile data offloading: how much can Wi-Fi deliver?, IEEE/ACM Transactions on Networking, vol. 21, no. 2, pp. 536-551, 2013.
[8] Cisco Visual Networking Index: global mobile data traffic forecast update, 2014-2019. Accessed 3 February 2015.
[9] J. G. Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C. K. Soong, and J. C. Zhang, What will 5G be?, IEEE Journal on Selected Areas in Communications, vol. 32, no. 6, pp. 1065-1082, 2014.
[10] A. Aijaz, H. Aghvami, and M. Amani, A survey on mobile data offloading: technical and business perspectives, IEEE Wireless Communications, vol. 20, no. 2, pp. 104-112, 2013.
[11] A. Dogan and F. Ozguner, Matching and scheduling algorithms for minimizing execution time and failure probability of applications in heterogeneous computing, IEEE Transactions on Parallel and Distributed Systems, pp. 308-323, 2002.
[12] V. C. Emeakaroha, I. Brandic, M. Maurer, and I. Breskovic, SLA-aware application deployment and resource allocation in clouds, 35th IEEE Annual Computer Software and Applications Conference Workshops, 2011, pp. 298-303.
[13] S. K. Garg, R. Buyya, and H. J. Siegel, Time and cost trade-off management for scheduling parallel applications on utility grids, Future Generation Computer Systems, vol. 26, no. 8, pp. 1344-1355, 2010.
[14] S. Pandey, L. Wu, S. M. Guru, and R. Buyya, A particle swarm optimization-based heuristic for scheduling workflow applications in cloud computing environments, in AINA '10: Proceedings of the 24th IEEE International Conference on Advanced Information Networking and Applications, pp. 400-407, Washington, DC, USA, 2010, IEEE Computer Society.