Deadline Based Task Scheduling in Cloud with Effective Provisioning Cost using LBMMC Algorithm




Ms. K. Sathya, M.E. (CSE), Jay Shriram Group of Institutions, Tirupur, Sathyakit09@gmail.com
Dr. S. Rajalakshmi, Associate Professor/CSE, Jay Shriram Group of Institutions, Tirupur, mrajislm@gmail.com

Abstract- In a cloud computing environment, a major issue is the cost-provisioning problem. To reduce the impact of the performance variation of cloud resources on the deadlines of workflows, an algorithm called EIPR was proposed that takes the behavior of cloud resources into consideration during the scheduling process and also applies replication of tasks to increase the chance of meeting application deadlines. Because executing a workflow in the Cloud incurs financial cost, the workflow may also be subject to a budget constraint. Although the budget constraint of a workflow may limit its capability to scale across multiple Cloud resources, the workflow's structure itself also imposes significant limitations. This paper proposes a new method called Location Based Minimum Migration in Cloud (LBMMC). The method monitors all the virtual machines in all cloud location centres and identifies each virtual machine's state: sleep, idle, or running (underloaded or overloaded). It also checks the number of tasks currently running and the number of tasks the machine can still accept. When a virtual machine is found to be overloaded, its tasks are live-migrated to one or more VMs in nearby location cloud centres. When a virtual machine is found to be underloaded (running a minimum of tasks), its tasks are live-migrated to one or more VMs that are already running jobs in nearby location cloud centres. A location-based VM selection policy (algorithm) is applied to carry out the selection process.
Index Terms- Cloud computing, EIPR, LBMMC, Migration

Fig.1 Architecture of Cloud Computing

I. INTRODUCTION
Cloud computing has been envisioned as the next-generation information technology (IT) architecture for enterprises, owing to its long list of advantages unprecedented in the history of IT: on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid resource elasticity, usage-based pricing, and transference of risk. As a disruptive technology with profound implications, cloud computing is transforming the very nature of how businesses use information technology. One fundamental aspect of this paradigm shift is that data are being centralized or outsourced to the cloud. From the users' perspective, including both individuals and IT enterprises, storing data remotely in the cloud in a flexible, on-demand manner brings appealing benefits: relief from the burden of storage management, universal data access with location independence, and avoidance of capital expenditure on hardware, software, and personnel maintenance.

A. Scheduling
The primary benefit of moving to Clouds is application scalability. Unlike Grids, the scalability of Cloud resources allows real-time provisioning of resources to meet application requirements. Cloud services such as compute, storage, and bandwidth resources are available at substantially lower costs. Usually, tasks are scheduled according to user requirements. New scheduling strategies need to be proposed to overcome the problems posed by the network properties between users and resources; such strategies may merge conventional scheduling concepts with network-aware strategies to provide better and more efficient job scheduling. Initially, scheduling algorithms were implemented in grids; owing to the reduced performance observed in grids, there is now a need to implement scheduling in the cloud.
This scalability enables workflow management systems to readily meet Quality-of-Service (QoS) requirements of applications, as opposed to the traditional approach that required advance reservation of resources in global multi-user Grid environments. Cloud applications often require very complex execution environments, which are difficult to create on grid resources. In addition, each grid site has a different configuration, resulting in extra effort each time an application needs to be ported to a new site. Virtual machines allow the application developer to create a fully customized, portable execution environment configured specifically for their application. The traditional way of scheduling in cloud computing tended to use the direct tasks of users as the overhead application base. The problem is that there may be no relationship between the overhead application base and the way different tasks cause overhead costs of resources in cloud systems: for a large number of simple tasks the cost increases, whereas for a small number of complex tasks the cost decreases.

B. Scientific Workflows
Research on the execution of scientific workflows in Clouds either tries to minimize the workflow execution time, ignoring deadlines and budgets, or focuses on minimizing cost while trying to meet the application deadline. Such work implements limited contingency strategies to correct delays caused by underestimation of task execution times or by fluctuations in the delivered performance of leased public Cloud resources. To mitigate the effects of resource performance variation on the soft deadlines of workflow applications, an algorithm is proposed that uses the idle time of provisioned resources and the budget surplus to replicate tasks. Typical Cloud environments do not deliver regular performance in terms of execution and data-transfer times. This is caused by technological and strategic factors and can lead to performance variations of up to 30 percent in execution times and 65 percent in data-transfer times. Fluctuations in the performance of Cloud resources delay task execution, which in turn delays those tasks' successors. If the delayed tasks are part of the critical path of the workflow, its completion time is delayed, and its deadline may be missed.
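The critical-path effect described above can be illustrated with a small sketch. The workflow shape, runtimes, and the 30 percent slowdown figure follow the text; the function and variable names are illustrative, not taken from the paper.

```python
# Sketch: how a runtime fluctuation on the critical path can break a deadline.

def finish_time(dag, runtime):
    """Longest-path completion time of a workflow DAG given per-task runtimes.
    dag maps each task to the list of its predecessor tasks."""
    memo = {}
    def ft(task):
        if task not in memo:
            start = max((ft(p) for p in dag[task]), default=0.0)
            memo[task] = start + runtime[task]
        return memo[task]
    return max(ft(t) for t in dag)

# Diamond-shaped workflow: t1 -> {t2, t3} -> t4
dag = {"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
runtime = {"t1": 10.0, "t2": 30.0, "t3": 5.0, "t4": 10.0}

deadline = 55.0
nominal = finish_time(dag, runtime)            # 10 + 30 + 10 = 50.0

# A 30% slowdown on t2 (on the critical path) propagates to t4 and pushes
# completion past the deadline; the same slowdown on t3 would not, because
# t3 has slack.
slowed = dict(runtime, t2=runtime["t2"] * 1.3)
delayed = finish_time(dag, slowed)             # 10 + 39 + 10 = 59.0
print(nominal <= deadline, delayed <= deadline)  # True False
```

The same computation on t3 would leave the completion time at 50.0, since t3 is not on the critical path.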
Workflow execution is also subject to delays if one or more of the virtual machines fail during task execution. However, typical Cloud infrastructures offer availability above 99.9 percent; therefore, performance degradation is a more serious concern than resource failures in such environments.

II. OBJECTIVE
The objective of this project is to allocate the minimum number of virtual machines needed to complete the user's tasks within the deadline, based on the migration of tasks. As few migrations as possible should be selected, to avoid additional processing cost for the migration process. Virtual machine tasks are migrated to reduce the user's provisioning cost by finishing the user's tasks within their deadlines.

III. EXISTING SYSTEM
Typical Cloud environments do not deliver regular performance in terms of execution and data-transfer times. This is caused by technological and strategic factors and can lead to performance variations of up to 30 percent in execution times and 65 percent in data-transfer times. Fluctuations in the performance of Cloud resources delay task execution, which in turn delays those tasks' successors. If the delayed tasks are part of the critical path of the workflow, its completion time is delayed, and its deadline may be missed. Workflow execution is also subject to delays if one or more of the virtual machines fail during task execution. However, typical Cloud infrastructures offer availability above 99.9 percent; therefore, performance degradation is a more serious concern than resource failures in such environments. The existing algorithm uses the idle time of provisioned resources to replicate workflow tasks, mitigating the effects of resource performance variation so that soft deadlines can be met. The amount of idle time available for replication can be increased if budget is available for replica execution, which allows provisioning more resources than the minimum necessary for meeting the deadline.
Therefore, the algorithm applies a deadline constraint and a variable budget for replication to achieve its goals. To reduce the impact of the performance variation of public Cloud resources on the deadlines of workflows, a new algorithm called EIPR was proposed; it takes the behavior of Cloud resources into consideration during the scheduling process and applies replication of tasks to increase the chance of meeting application deadlines.

A. EIPR Algorithm
The existing algorithm performs the following steps:
1. Combined Provisioning and Scheduling
2. Data-Transfer Aware Provisioning Adjust
3. Task Replication

Combined Provisioning and Scheduling
The first step of the EIPR algorithm consists of determining the number and type of VMs to be used for workflow execution, along with the start and finish time of each VM (provisioning), and determining the ordering and placement of tasks on the allocated resources (scheduling). The provisioning and scheduling problems are closely related, because the availability of VMs affects the scheduling, and the scheduling affects the finish times of the VMs. Therefore, more efficient scheduling and provisioning can be achieved if both problems are solved as one rather than independently.

Data-Transfer Aware Provisioning Adjust
In the second step of the EIPR algorithm, the provisioning decision made in Step 1 is adjusted to account for data-transfer times. For each non-entry task scheduled as the first task of a virtual machine, and for each non-exit task scheduled as the last task of a virtual machine, the algorithm accommodates the required communication time by setting the start time of the machine earlier than the start time of its first task, and/or setting the end time of the machine later than the finish time of its last task, depending on where the extra time is required. Finally, the beginning of the first allocation slot of each virtual machine is brought forward by the estimated deployment and boot time of virtual machines, which was accounted for during the scheduling process.

Task Replication
The task replication process works like a semi-active replication technique for fault tolerance, with the difference that here tasks are replicated across performance-independent hosts rather than failure-independent locations. As in replication for fault tolerance, replication in this approach is also managed by a single entity. The process operates as follows. Initially, the algorithm generates new VMs based on the available replication budget. To select the type of VM to be used for a new VM, the list of already provisioned VMs is sorted in descending order of the number of scheduled tasks. Starting from the beginning of the sorted list, the first VM whose cost is smaller than the available budget is chosen. The chosen VM is replicated, the provisioning information is updated, the VM is moved to the end of the list, the available budget is updated, and the search is restarted. The process repeats while the budget admits new instantiations. The newly provisioned VM and its corresponding time slot reproduce the same start and end times as the original VM. Next, a list of all possible idle slots is created. This list includes both paid slots, in which the VM is actually provisioned but no task is scheduled on it, and unpaid slots, which are the periods between the start time and the deadline during which the VM is not provisioned. Once paid and unpaid slots are identified, they are sorted in increasing order of size.
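The budget-driven replica selection loop just described (most-loaded VM first, move the pick to the end of the list, restart the search) can be sketched as follows. The VM records and costs here are illustrative assumptions; only the selection policy follows the text.

```python
# Sketch of the EIPR replica provisioning loop: repeatedly pick, from the
# VMs sorted by scheduled-task count, the first one whose cost fits the
# remaining budget, then rotate it to the end and restart the search.

def plan_replicas(vms, budget):
    """vms: list of (vm_id, scheduled_task_count, cost) tuples.
    Returns the ids of VMs chosen for replication, in selection order."""
    # Descending order of number of scheduled tasks.
    queue = sorted(vms, key=lambda v: v[1], reverse=True)
    replicas = []
    while True:
        # First VM from the front whose cost fits the remaining budget.
        pick = next((v for v in queue if v[2] <= budget), None)
        if pick is None:
            break                      # budget admits no more instantiations
        replicas.append(pick[0])
        budget -= pick[2]              # update the available budget
        queue.remove(pick)
        queue.append(pick)             # move the chosen VM to the end
    return replicas

vms = [("vm-a", 5, 4.0), ("vm-b", 3, 2.0), ("vm-c", 8, 6.0)]
print(plan_replicas(vms, budget=10.0))  # ['vm-c', 'vm-a']
```

With a budget of 10, the most-loaded VM (vm-c, cost 6) is replicated first, then vm-a (cost 4); the remaining budget of 0 admits no further instantiations.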
The first unpaid slot in the list is inserted after the last paid slot, so paid time slots take precedence over unpaid ones in slot selection.

Fig.2 Paid And Unpaid Idle Time Slots

B. Disadvantages
The disadvantages of the existing system are:
1. Workflow execution is subject to delays if one or more of the virtual machines fail during task execution.
2. It takes a larger number of virtual machines to complete the user's tasks within their deadlines.
3. More energy is consumed, and the cost on the provider's side is high, when more virtual machines are allocated.

IV. PROPOSED SYSTEM
The proposed method is called Location Based Minimum Migration in Cloud (LBMMC). This method monitors all the virtual machines in all cloud location centres. It identifies each virtual machine's state: sleep, idle, or running (underloaded or overloaded). It also checks the number of tasks currently running and the number of tasks the machine can still accept. When a virtual machine is found to be overloaded, its tasks are live-migrated to one or more VMs in nearby location cloud centres. When a virtual machine is found to be underloaded (running a minimum of tasks), its tasks are live-migrated to one or more VMs that are already running jobs in nearby location cloud centres. A location-based VM selection policy (algorithm) is applied to carry out the selection process. The main contributions are: migrating tasks away from virtual machines on which few tasks are running, since such machines consume extra energy and power; migrating tasks away from virtual machines on which too many tasks are running, since this burdens system performance; selecting as few migrations as possible, to avoid additional processing cost for the migration process; and migrating virtual machine tasks to reduce the user's provisioning cost by finishing the user's tasks within their deadlines.

A. Advantages
The advantages of the proposed system are:
1. The minimum number of virtual machines is allocated to complete the user's tasks within the deadline.
2. Migrations of tasks between virtual machines take minimum cost and energy.
3. The underloading and overloading of tasks are considered while allocating tasks to virtual machines.
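The monitoring-and-migration policy described above can be sketched as a classification step followed by a nearest-target selection. The load thresholds, the one-dimensional distance model, and all names are assumptions for illustration; the paper describes the policy only informally.

```python
# Minimal sketch of the LBMMC monitoring step: classify each VM by its load
# and propose a migration toward the nearest suitably loaded VM.

def classify(running_tasks, capacity):
    """Map a VM's task count to the states named in the text."""
    if running_tasks == 0:
        return "idle"
    if running_tasks >= capacity:
        return "overloaded"
    if running_tasks <= capacity // 4:   # assumed underload threshold
        return "underloaded"
    return "normal"

def plan_migrations(vms):
    """vms: dict id -> (running_tasks, capacity, location).
    Underloaded VMs hand their tasks to the nearest normally loaded VM so
    they can be put to sleep; overloaded VMs shed tasks the same way."""
    normal = [(vid, loc) for vid, (r, c, loc) in vms.items()
              if classify(r, c) == "normal"]
    plan = []
    for vid, (r, c, loc) in vms.items():
        state = classify(r, c)
        if state in ("underloaded", "overloaded") and normal:
            # Nearest location centre; a 1-D coordinate stands in for
            # geography here.
            target = min(normal, key=lambda n: abs(n[1] - loc))
            plan.append((vid, target[0], state))
    return plan

vms = {
    "vm-1": (1, 8, 0),    # underloaded -> drain so it can sleep
    "vm-2": (8, 8, 5),    # overloaded  -> shed tasks
    "vm-3": (4, 8, 4),    # normal      -> migration target
}
print(plan_migrations(vms))
```

Both migrations here target vm-3, the only normally loaded machine; with several candidates, the location term breaks the tie in favour of the nearest centre.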

B. Data Flow Diagram
Fig.3 Data Flow Diagram of LBMMC Algorithm

C. System Architecture
Fig.4 System Architecture of LBMMC Algorithm

V. MODULE DESCRIPTION
A. Resource Assumption and Task Analysis
To develop a cloud environment, the requirements for the resources and the number of resources have to be analyzed. In this module, the resource assumption takes place: the number of resources and their configuration (processor, RAM, storage, capacity, etc.) are specified. For task analysis, the task from the user is analyzed and forwarded to the allocation process.

VI. SCREENSHOTS
A. Home Screen to Get Input
Fig.5 Home Screen to Get Input

B. Users Resource Assumption
Fig.6 Users Resource Allocation

C. Users Task Input

Fig.7 Users Task Input

D. Calculating Security Demand and Resource Display
Fig.8 Calculating Security Demand and Resource Display

E. Users Resource Allocation
Fig.9 Users Resource Allocation

VII. CONCLUSION AND FUTURE WORK
To reduce the impact of the performance variation of public Cloud resources on the deadlines of workflows, the existing algorithm EIPR was proposed; it takes the behavior of Cloud resources into consideration during the scheduling process and also applies replication of tasks to increase the chance of meeting application deadlines. Under this algorithm, workflow execution remains subject to delays if one or more of the virtual machines fail during task execution; it takes a larger number of virtual machines to complete the user's tasks within their deadlines; and more energy is consumed, at high cost on the provider's side, when more virtual machines are allocated. To overcome this, the LBMMC algorithm was proposed. This algorithm involves A) Resource Assumption and Task Analysis, B) Resource Allocation for Jobs, C) Model Analysis for Migration, and D) Migration of Tasks. This paper presented resource assumption and task analysis: the number of resources and their configuration are specified, and the task from the user is analyzed and forwarded to the allocation process. In future work, the resources will be allocated to the user-given jobs. The provision of these computational resources is controlled by a provider, and resources are allocated elastically according to consumers' needs; to accommodate unforeseen demands on the infrastructure in a scalable and elastic way, the allocation and reallocation process in Cloud Computing must be dynamic. Trust levels are assigned to all resources, and the user submits each task with the security demands required to run it; based on the user-demanded security, resources are allocated to each user's jobs. Model analysis then takes place for the migration process, determining which tasks are to be migrated; a location-based VM selection policy is applied to carry out the selection, and the tasks are migrated based on the analysis of the previous step.

VIII. REFERENCES
[1]. Rodrigo N. Calheiros and Rajkumar Buyya, "Meeting Deadlines of Scientific Workflows in Clouds with Tasks Replication," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 7, July 2014.
[2]. R. Buyya, C.S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility," Future Gener. Comput. Syst., vol. 25, no. 6, pp. 599-616, June 2009.
[3]. Z. Shi and J.J. Dongarra, "Scheduling Workflow Applications on Processors with Different Capabilities," Future Gener. Comput. Syst., vol. 22, no. 6, pp. 665-675, May 2006.
[4]. M. Mao and M. Humphrey, "Auto-Scaling to Minimize Cost and Meet Application Deadlines in Cloud Workflows," in Proc. Int'l Conf. High Perform. Comput., Netw., Storage Anal. (SC), 2011, p. 49.
[5]. S. Abrishami, M. Naghibzadeh, and D. Epema, "Deadline-Constrained Workflow Scheduling Algorithms for IaaS Clouds," Future Gener. Comput. Syst., vol. 29, no. 1, pp. 158-169, Jan. 2013.

[6]. R. Abbott and H. Garcia-Molina, "Scheduling Real-Time Transactions," ACM SIGMOD Rec., vol. 17, no. 1, pp. 71-81, Mar. 1988.
[7]. K.R. Jackson, L. Ramakrishnan, K. Muriki, S. Canon, S. Cholia, J. Shalf, H.J. Wasserman, and N.J. Wright, "Performance Analysis of High Performance Computing Applications on the Amazon Web Services Cloud," in Proc. 2nd Int'l Conf. CloudCom, 2010, pp. 159-168.
[8]. J. Yu, R. Buyya, and K. Ramamohanarao, "Workflow Scheduling Algorithms for Grid Computing," in Metaheuristics for Scheduling in Distributed Computing Environments, F. Xhafa and A. Abraham, Eds. New York, NY, USA: Springer-Verlag, 2008.
[9]. A. Hirales-Carbajal, A. Tchernykh, R. Yahyapour, J.L. González-García, T. Röblitz, and J.M. Ramírez-Alcaraz, "Multiple Workflow Scheduling Strategies with User Run Time Estimates on a Grid," J. Grid Comput., vol. 10, no. 2, pp. 325-346, June 2012.
[10]. C. Lin and S. Lu, "SCPOR: An Elastic Workflow Scheduling Algorithm for Services Computing," in Proc. Int'l Conf. SOCA, 2011, pp. 1-8.
[11]. C.J. Reynolds, S. Winter, G.Z. Terstyanszky, T. Kiss, P. Greenwell, S. Acs, and P. Kacsuk, "Scientific Workflow Makespan Reduction through Cloud Augmented Desktop Grids," in Proc. 3rd Int'l Conf. CloudCom, 2011, pp. 18-23.
[12]. M. Xu, L. Cui, H. Wang, and Y. Bi, "A Multiple QoS Constrained Scheduling Strategy of Multiple Workflows for Cloud Computing," in Proc. Int'l Symp. ISPA, 2009, pp. 629-634.
[13]. K. Plankensteiner and R. Prodan, "Meeting Soft Deadlines in Scientific Workflows Using Resubmission Impact," IEEE Trans. Parallel Distrib. Syst., vol. 23, no. 5, pp. 890-901, May 2012.
[14]. R. Prodan and T. Fahringer, "Overhead Analysis of Scientific Workflows in Grid Environments," IEEE Trans. Parallel Distrib. Syst., vol. 19, no. 3, pp. 378-393, Mar. 2008.
[15]. W. Chen and E. Deelman, "WorkflowSim: A Toolkit for Simulating Scientific Workflows in Distributed Environments," in Proc. 8th Int'l Conf. E-Science, 2012, pp. 1-8.
[16]. P. Chevochot and I. Puaut, "Scheduling Fault-Tolerant Distributed Hard Real-Time Tasks Independently of the Replication Strategies," in Proc. 6th Int'l Conf. RTCSA, 1999, p. 356.
[17]. R.N. Calheiros, R. Ranjan, A. Beloglazov, C.A.F. De Rose, and R. Buyya, "CloudSim: A Toolkit for Modeling and Simulation of Cloud Computing Environments and Evaluation of Resource Provisioning Algorithms," Softw., Pract. Exper., vol. 41, no. 1, pp. 23-50, Jan. 2011.

AUTHORS BIOGRAPHY
K. Sathya received her B.E. degree from Avinashilingam Institute for Home Science and Higher Education for Women, Coimbatore, India, and is currently pursuing her M.E. degree at Jay Shriram Group of Institutions, Tiruppur, India. Her research interests include Cloud Computing, Advanced Databases, Computer Networks, and Operating Systems.

Dr. S. Rajalakshmi received her B.E. degree from Periyar University, Salem, India, her M.E. degree from Anna University, Chennai, India, and her Ph.D. in Data Mining from Anna University, India. Currently she is working as an Associate Professor at Jay Shriram Group of Institutions, Tirupur, India. Her research interests include Data Mining, Cloud Computing, and Big Data.