Performance Evaluation of Infrastructure as Service Clouds with SLA Constraints




Anuar Lezama Barquet (1), Andrei Tchernykh (1), and Ramin Yahyapour (2)

(1) Computer Science Department, CICESE Research Center, Ensenada, BC, Mexico
(2) GWDG, University of Göttingen, 37077 Göttingen, Germany

{alezama, chernykh}@cicese.edu.mx, ramin.yahyapour@gwdg.de

Abstract. In this paper, we present an experimental study of job scheduling algorithms in infrastructure as a service (IaaS) clouds. We analyze different system service levels, which are distinguished by the amount of computing power a customer is guaranteed to receive within a time frame and by the price for a processing time unit. We analyze different scenarios for this model. These scenarios combine a single service level with single and parallel machines. We apply our algorithms in the context of executing real workload traces available to the HPC community. In order to provide a performance comparison, we make a joint analysis of several metrics. A case study is given.

Keywords. Cloud computing, infrastructure as a service, quality of service, scheduling.

Evaluación del desempeño de servicios de infraestructura en nubes con restricciones de acuerdos de nivel de servicio (SLA)

Resumen. In this article, we present an experimental study of scheduling algorithms for infrastructure services in clouds. We analyze different service levels that are distinguished by the amount of computing power the user is guaranteed to receive within a time period and by the price per unit of processing. We analyze different scenarios for this model. These scenarios combine a single service level on a single machine and on parallel machines. We use our algorithms to execute samples of real workload traces available to the HPC community. In order to provide a performance comparison, we carry out a joint analysis of several metrics. We present a case study.

Palabras clave. Cloud computing, infrastructure as a service in the cloud, quality of service, scheduling.

1 Introduction

Infrastructure as a service (IaaS) clouds allow users to take advantage of computational power on demand. This kind of cloud focuses on managing virtual machines (VMs) created by users to execute their jobs on the cloud resources. However, in this new paradigm, there are issues that prevent its widespread adoption. The main concern is the necessity to provide Quality of Service (QoS) guarantees [1].

The use of Service Level Agreements (SLAs) is a fundamentally new approach to job scheduling. In this approach, schedulers are based on satisfying QoS constraints. The main idea is to provide different levels of service, each addressing a different set of customers for the same services in the same SLA, and to establish bilateral agreements between a service provider and a service consumer that guarantee the job delivery time depending on the selected level of service. Basically, SLAs contain information such as the latest finish time of the job, the time reserved for job execution, the number of CPUs required, and the price per time unit. The shifting emphasis of Grids and Clouds towards a service-oriented paradigm has made the SLA a very important concept, but at the same time it has led to the problem of meeting stringent SLAs.
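
As an illustration only (not part of the original paper), the SLA fields listed above could be kept in a small record; all names below are assumptions chosen for readability.

    from dataclasses import dataclass

    @dataclass
    class SLATerms:
        """Hypothetical container for the SLA fields named above."""
        latest_finish_time: float   # latest finish time of the job (absolute time)
        reserved_time: float        # time reserved for job execution
        num_cpus: int               # number of CPUs required
        price_per_time_unit: float  # price charged per processing time unit

    # Example: a job that must finish by t=500, reserving 120 time units on 4 CPUs
    terms = SLATerms(latest_finish_time=500.0, reserved_time=120.0,
                     num_cpus=4, price_per_time_unit=0.085)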

There has been a significant amount of research on various topics related to SLAs: admission control techniques [2]; incorporation of SLAs into the Grid/Cloud architecture [3]; specifications of SLAs [4, 5]; the use of SLAs for resource management; SLA-based scheduling [6]; SLA profits [7]; automatic negotiation protocols [8]; economic aspects associated with the use of SLAs for service provision [9]; etc. Little is known about the worst-case efficiency of SLA scheduling solutions. There are only very few theoretical results on SLA scheduling, and most of them address real-time scheduling with given deadlines.

Baruah and Haritsa [10] discuss the online scheduling of sequential independent jobs on real-time systems. They presented the algorithm ROBUST (Resistance to Overload By Using Slack Time), which guarantees a minimum slack factor for every task. The slack factor f of a task is defined as the ratio of its relative deadline to its execution time requirement. It is a quantitative indicator of the tightness of the task deadline. The algorithm provides an effective processor utilization (EPU) of (f-1)/f during the overload interval. They show that, given enough processors, online scheduling algorithms can be designed with performance guarantees arbitrarily close to that of optimal uniprocessor scheduling algorithms.

A more complete study is presented in [11] by Schwiegelshohn et al. The authors theoretically analyze the single machine (SM) and the parallel machine (PM) models subject to jobs with single (SSL) and multiple service levels (MSL). Their analysis is based on the competitive factor, which is measured as the ratio of the income the infrastructure provider obtains via the scheduling algorithm to the optimal income. They provide worst-case performance bounds of four greedy acceptance algorithms named SSL-SM, SSL-PM, MSL-SM, and MSL-PM, and of two restricted acceptance algorithms, MSL-SM-R and MSL-PM-R. All of them are based on an adaptation of the preemptive EDD (Earliest Due Date) algorithm for scheduling jobs with deadlines.

In this paper, we make use of the IaaS cloud model proposed in [11]. To show the practicability and competitiveness of the algorithms, we conduct a comprehensive study of their performance and derivatives using simulation. We take into account an important issue that is critical for practical adoption of the scheduling algorithms: we use workloads based on real production traces of heterogeneous HPC systems.

We study two greedy algorithms: SSL-SM and SSL-PM. SSL-SM accepts every new job for a single machine if this job and all previously accepted jobs can be completed in time. SSL-PM accepts jobs considering all available processors of parallel machines.

Key properties of SLAs should be observed to provide benefits for real installations. Since SLAs are often considered successors of the service-oriented real-time paradigm with deadlines, we start with a simple model with a single service level on a single computer, and extend it to a single SLA on multiple computers. One of the most basic models of an SLA provides the relative deadline as a function of the job execution time with a constant service level parameter of usage. This model does not match every real SLA, but the assumptions are nonetheless reasonable. It is still a valid basic abstraction of SLAs that can be formalized and automatically treated.

We address an online scheduling problem. The jobs arrive one by one, and after the arrival of a new job the decision maker must decide whether to reject this incoming job or to schedule it on one of the machines. The problem is online because the decision maker has to resolve it without information about the jobs that follow.
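
This online setting can be sketched as a loop that handles one arrival at a time and makes an irrevocable accept/reject decision with no knowledge of future jobs. The skeleton below is an illustration under our own naming, not the authors' implementation; admission_test stands in for the feasibility check of Section 3.1.

    def online_accept(job_stream, admission_test):
        """Process jobs one by one with no knowledge of future arrivals.

        admission_test(job, accepted) -> bool decides, at release time, whether
        the new job can be completed by its deadline without making any
        previously accepted job miss its own deadline.
        """
        accepted, rejected = [], []
        for job in job_stream:          # jobs arrive ordered by release date
            if admission_test(job, accepted):
                accepted.append(job)    # irrevocable: the job must now meet its deadline
            else:
                rejected.append(job)    # rejected jobs generate no income
        return accepted, rejected
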
For this problem, we measure the performance of the algorithms by a set of metrics which includes the competitive factor and the number of accepted jobs.

2 Scheduling Model

2.1 Formal Definition

In this work, we consider the following model. A user submits jobs to a service provider, which has to guarantee some service level (SL). Let S = [S_1, S_2, ..., S_i, ..., S_k] be the set of service levels offered by the provider. For a given service level S_i, the user is charged a cost u_i per unit of execution time depending on the urgency of the submitted job; u_max = max_i {u_i} denotes the maximum cost. The urgency of the job is expressed by the slack factor f_i. The total number of jobs submitted to the system is n_r. Each job J_j from the released job set J_r = [J_1, J_2, ..., J_{n_r}] is described by a tuple (r_j, p_j, S_j, d_j): its release date r_j, its execution time p_j, and its service level S_j. The deadline of each job d_j is calculated at the release of the job as d_j = r_j + f_i p_j. The maximum deadline is denoted by d_max = max_j {d_j}. The processing time p_j of the job becomes known at time r_j. Once the job is released, the provider has to decide, before any other job arrives, whether the job is accepted or not. In order to accept the job J_j, the provider should ensure that some machine in the system is capable of completing J_j before its deadline. In the case of acceptance, further jobs must not cause J_j to miss its deadline. Once a job is accepted, the scheduler uses a heuristic to schedule it. Finally, the set of accepted jobs J = [J_1, J_2, ..., J_n] is a subset of J_r, where n is the number of jobs successfully accepted and executed.
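
A minimal sketch of this model in Python (names are ours, not the paper's): a service level carries a slack factor and a price per unit, and a job derives its deadline at release time as d_j = r_j + f_i p_j.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class ServiceLevel:
        slack_factor: float        # f_i >= 1: tightness of the deadline
        price_per_unit: float      # u_i: cost charged per unit of execution time

    @dataclass
    class Job:
        release: float             # r_j
        proc_time: float           # p_j, known at release time
        level: ServiceLevel        # S_j
        deadline: float = field(init=False)

        def __post_init__(self):
            # d_j = r_j + f_i * p_j, computed when the job is released
            self.deadline = self.release + self.level.slack_factor * self.proc_time

    # Example: a job released at t=100 with p=30 under a service level with f=5
    gold = ServiceLevel(slack_factor=5.0, price_per_unit=0.085)
    job = Job(release=100.0, proc_time=30.0, level=gold)   # job.deadline == 250.0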

2.2 Metrics

We use several metrics to evaluate the performance of our scheduling algorithms and SLAs. In contrast to traditional scheduling problems, classic scheduling metrics such as C_max become irrelevant for evaluating the performance of systems scheduled through SLAs.

One of the objective functions represents the goal of the infrastructure provider, who wants to maximize his total income. A job J_j with service level S_i generates income u_i p_j in the case of acceptance and zero otherwise. The competitive factor

c_v = \frac{\sum_{j=1}^{n} u_i p_j}{V_A^*}

is defined as the ratio of the total income generated by an algorithm to the optimal income V_A^*. Due to the maximization of income, a larger competitive factor is better than a smaller one. Note that in the evaluation of our experiments we use an upper bound \hat{V}_A of the optimal income instead of the optimal income itself, as we are, in general, not able to determine the optimal income:

V_A^* \le \hat{V}_A = \min\left( u_{max} \sum_{j=1}^{n_r} p_j, \; u_{max} \cdot d_{max} \cdot m \right).

The first bound is the sum of the processing times of all released jobs multiplied by the maximum price per unit of execution over all available SLAs. The second bound is the maximum deadline of all released jobs multiplied by the maximum price per unit of execution and by the number of machines in the system. Due to our admission control policy, the system does not execute jobs whose deadlines cannot be met; therefore, this second bound is also an upper bound on the amount of time in which the system can execute work. In our experiments, we analyze the SSL-SM and SSL-PM algorithms; since only one SL is used, we do not take u_max into account to calculate the competitive factor.

We also calculate the number of rejected jobs and use it as a measure of the capacity of the system to respond to the incoming flow of jobs. Finally, we calculate the mean waiting time of the jobs within the system as

MWT = \frac{1}{n} \sum_{j=1}^{n} (c_j - p_j),

where c_j is the completion time of the job.
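
Building on the sketch of Section 2.1, the income, the bound \hat{V}_A, and the competitive factor can be computed directly from the definitions above; the helper below is an assumption-laden illustration, not the authors' evaluation code.

    def competitive_factor(accepted, released, num_machines):
        """Ratio of earned income to the upper bound of the optimal income.

        `accepted` and `released` are iterables of Job objects as defined above.
        """
        income = sum(j.level.price_per_unit * j.proc_time for j in accepted)

        u_max = max(j.level.price_per_unit for j in released)
        total_work = sum(j.proc_time for j in released)   # sum of all released p_j
        d_max = max(j.deadline for j in released)          # latest deadline

        v_hat = min(u_max * total_work, u_max * d_max * num_machines)
        return income / v_hat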

3 Experimental Setup

3.1 Algorithms

In our experiments, we use the SSL-SM and SSL-PM algorithms based on the EDD (Earliest Due Date) rule, which gives priority to jobs according to their deadlines. The jobs that have been admitted but not yet completed are kept in a queue ordered by non-decreasing deadlines. For their execution, jobs are taken from the head of the queue. When a new job is released, it is placed in the queue according to its deadline. EDD is an optimal algorithm for minimizing maximum lateness in a single-machine system. In our case, it corresponds to minimizing the number of rejected jobs.

Gupta and Palis [12] showed that, for the problem of allocating jobs in a hard real-time scheduling model in which a job must be completed if it was admitted for execution, there cannot exist an algorithm with a competitive ratio greater than (1 - 1/f) + \epsilon with m machines, where \epsilon is arbitrarily small. They proposed an algorithm that achieves a competitive ratio of at least (1 - 1/f) and demonstrated that it is an optimal scheduler for hard real-time scheduling with m machines. The admittance test they also proposed consists in verifying that all the already accepted jobs whose deadlines are greater than that of the incoming job will still be completed before their deadlines.

3.2 Workload

In order to evaluate the performance of SLA scheduling, we performed a series of experiments using traces of HPC jobs obtained from the Parallel Workloads Archive (PWA) [13] and the Grid Workloads Archive (GWA) [14]. These traces are logs from real parallel computer systems, and they give us good insight into how our proposed schemes will perform with real users. The predominance of jobs with low parallelism in real logs is well known. Even though some jobs in the traces require multiple processors, we consider that in our model the machines have enough capacity to process them, so we can abstract from their parallelism. Since we assume that IaaS clouds are a promising alternative to computational centers, we can expect that the workload submitted to clouds will have characteristics similar to those of workloads submitted to actual parallel and grid systems.

In our log, we considered nine traces: DAS2 (University of Amsterdam), DAS2 (Delft University), DAS2 (Utrecht University), DAS2 (Leiden University), KTH, DAS2 (Vrije University), HPC2N, CTC, and LANL. Details of the log characteristics can be found in the PWA [13] and GWA [14]. To obtain valid statistical values, 30 experiments within a one-week period were simulated for each SLA. We calculated job deadlines based on the real processing times of the jobs.
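
In the spirit of the admittance test described above, a single-machine feasibility check for preemptive EDD can be sketched as follows (our own illustration under the stated model, not the authors' code): at the release of a new job, the pending jobs plus the candidate are scanned in deadline order, and the candidate is accepted only if the accumulated remaining work always fits before each deadline.

    def edd_admission_test(new_job, pending, now):
        """Single-machine feasibility check for preemptive EDD scheduling.

        `pending` is a list of (deadline, remaining_time) pairs for accepted but
        unfinished jobs; `new_job` is a (deadline, remaining_time) pair; `now` is
        the current (release) time.  Returns True if every job, including the
        candidate, can still finish by its deadline under EDD.
        """
        candidates = sorted(pending + [new_job])      # non-decreasing deadlines
        work = 0.0
        for deadline, remaining in candidates:
            work += remaining
            if now + work > deadline:                 # this job would miss its deadline
                return False
        return True

    # Example: at t=0, one pending job (deadline 10, 4 left); a new job (deadline 6, 3)
    print(edd_admission_test((6.0, 3.0), [(10.0, 4.0)], now=0.0))  # True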

4 Experimental Results

4.1 Single Machine Model

For the first set of experiments, with a single-machine system scheme, we performed experiments for 12 values of the slack factor: 1, 2, 5, 10, 15, 20, 25, 50, 100, 200, 500, and 1000. Although we do not expect a real SLA to provide very large slack factors, large values are important for studying the expected system performance when the slack factor tends to infinity. Figures 1-5 show simulation results of the SSL-SM algorithm. They present the percentage of rejected jobs, the total processing time of accepted jobs, the mean waiting time, the mean number of interruptions per job, and the mean competitive factor.

Fig. 1. Percentage of rejected jobs for the SSL-SM algorithm

Fig. 2. Total processing time for the SSL-SM algorithm

Fig. 3. Mean waiting time of jobs for the SSL-SM algorithm

Figure 1 shows the percentage of rejected jobs for the SSL-SM algorithm. We see that the number of rejected jobs decreases as the slack factor increases. Large values of the slack factor increase the flexibility to accept new jobs by delaying the execution of already accepted ones. When the slack factor is equal to 1, the system cannot accept new jobs until the job in execution is completed. We observe that the percentage of rejected jobs with a slack factor of 1 is a bit lower than that with values of the slack factor from 2 to 25. However, this does not mean that this slack factor allows the system to execute more computational work, as we see in Figure 2.

Figure 2 shows the total processing time of accepted jobs for the given slack factors. We see that the processing time increases as the slack factor increases, meaning that the scheduler is able to exploit the increased flexibility of the jobs. Figure 3 shows the mean waiting time versus the slack factor. It demonstrates that an increase of the total processing time causes an increase of the waiting time.

Fig. 4. Mean number of interruptions per job for the SSL-SM algorithm

We also evaluate the mean number of interruptions per job; these results are shown in Figure 4. We see that for small slack factors the number of interruptions is greater than for larger slack factors. Mean values are below 1 interruption per job. Moreover, if the slack factor is more than 10, the number of interruptions per job is stable and varies between 0.2 and 0.3. This fact is important: keeping the number of interruptions low limits the system overhead.

Fig. 5. Mean competitive factor of the SSL-SM algorithm

Figure 5 shows the mean competitive factor. It represents the infrastructure provider's objective of maximizing his total income. Note that a larger competitive factor is better than a smaller one. When the slack factor is equal to 1, the competitive factor is 0.85. As the slack factor increases, we obtain better competitive factors, and the mean competitive factor reaches its maximum value of 0.94 at moderate slack factors. Past this point, the competitive factor decreases as the slack factor grows; we consider that at this point the deadlines of the jobs are much larger than their processing times. For still larger slack factors, the competitive factor increases again because the maximum deadline gets close to the sum of processing times. When the deadlines of all jobs tend to infinity, the competitive factor is optimal, as expected.
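
This non-monotone behaviour can be traced back to the min(...) in the bound \hat{V}_A of Section 2.2: for small slack factors the term u_max * d_max * m is the smaller one and grows with f, while for very large slack factors the fixed term u_max * sum of p_j takes over. The snippet below illustrates the switch with purely hypothetical numbers (one machine, 10,000 time units of released work, largest job of 100 time units released at t = 0); it is not data from our experiments.

    total_work = 10_000.0          # hypothetical sum of processing times of released jobs
    p_big, m, u = 100.0, 1, 1.0    # hypothetical largest job, machines, price per unit

    for f in (1, 5, 20, 100, 500, 1000):
        d_max = f * p_big          # deadline of the largest job (released at t=0)
        bound = min(u * total_work, u * d_max * m)
        term = "d_max*m term" if u * d_max * m < u * total_work else "sum(p) term"
        print(f"f={f:>4}: upper bound = {bound:>9.0f}  (binding: {term})")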

In a real cloud scenario, the slack factor can be dynamically adjusted in response to changes in the configuration and/or the workload. To this end, the historical workload within a given time interval can be analyzed to determine an appropriate slack factor. The time interval for this adjustment should be set according to the dynamic characteristics of the workload and the IaaS configuration.
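
One possible realization of such an adjustment, given here only as a hypothetical sketch that the paper does not prescribe, is to replay the admission test over the jobs of the last interval for each candidate slack factor and pick the tightest one that still meets an acceptance target.

    def choose_slack_factor(recent_jobs, candidate_factors, admission_replay,
                            target_acceptance=0.9):
        """Pick the smallest slack factor whose replayed acceptance rate meets the target.

        `recent_jobs` is the historical workload of the last adjustment interval;
        `admission_replay(jobs, f)` re-runs the admission test with slack factor f
        and returns the fraction of accepted jobs.  All names are illustrative.
        """
        for f in sorted(candidate_factors):
            if admission_replay(recent_jobs, f) >= target_acceptance:
                return f
        return max(candidate_factors)   # fall back to the loosest offered level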

4.2 Multiple Machine Model

In this section, we present the results of SSL-PM algorithm simulations on two and three machines. We also plot the SSL-SM results to analyze how the system performance changes when the number of machines varies. Figures 6-11 show the percentage of rejected jobs, the total processing time of accepted jobs, the mean waiting time, the mean number of interruptions per job, the efficiency, and the mean competitive factor.

Fig. 6. Percentage of rejected jobs for the SSL-PM algorithm

Figure 6 presents the percentage of rejected jobs. It can be seen that an increase of the number of machines has a limited effect on the acceptance of jobs when the slack factor is small. However, larger values of the slack factor have a greater impact on the number of accepted jobs.

Fig. 7. Total processing time for the SSL-PM algorithm

Figure 7 shows the total processing time of accepted jobs. The processing time increases as more machines are added to the system. However, doubling and tripling the processing capacity do not cause the same increase in the processing time. This effect can be clearly seen when the slack factor is large. We conclude that an increase in the processing capacity is more effective with smaller slack factors.

Fig. 8. Mean waiting time for the SSL-PM algorithm

Figure 8 shows the mean waiting time as the slack factor varies. We see that an increase of the total processing time, as a result of larger slack factors, also causes an increase of the waiting time. Additionally, adding more machines to the system makes the increase of the mean waiting time less significant.

Fig. 9. Mean number of interruptions for the SSL-PM algorithm

Figure 9 shows the mean number of interruptions per job. We see that an increase of the number of machines increases the number of interruptions. This increase is not considerable, and it stabilizes as the slack factor is increased. The number of interruptions is maximal at small slack factors for all three models.

Fig. 10. Execution efficiency for the SSL-PM algorithm

Figure 10 shows the execution efficiency. This metric indicates the relative amount of useful work which the system executes during the interval between the release time of the first job and the completion of the last job. We see that the decrease of efficiency, at least for moderate slack factors, mainly depends on the number of machines.

Fig. 11. Competitive factor for the SSL-PM algorithm

Figure 11 presents the competitive factor as the slack factor varies. We see that for the two- and three-machine configurations the maximum competitive factor is obtained at moderate slack factors; as we already mentioned, the same holds for the single-machine configuration. We can also observe that when the slack factor is increased further, the competitive factor decreases. This happens until the slack factor becomes large enough to create a significant difference between job deadlines and their processing times; this is clearly seen at large slack factors for the single-machine configuration as well as for two and three machines. In the case of the two- and three-machine configurations, for very large slack factors the competitive factor almost reaches the optimal value.
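
Under our reading of the efficiency metric (the paper gives no explicit formula, so the expression below is an assumption), the execution efficiency can be computed as the accepted work divided by the capacity available between the first release and the last completion.

    def execution_efficiency(accepted, num_machines):
        """Useful work relative to available capacity in the observation window.

        `accepted` is an iterable of Job objects (see the Section 2.1 sketch) that
        were executed; the completion time of each job is assumed to be stored on
        the object as `completion`.
        """
        first_release = min(j.release for j in accepted)
        last_completion = max(j.completion for j in accepted)
        useful_work = sum(j.proc_time for j in accepted)
        return useful_work / (num_machines * (last_completion - first_release))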

4.3 Execution Costs

In the IaaS scenario, cloud providers offer computer resources to customers on a pay-as-you-go basis. The price per time unit depends on the services selected by the customer. This charge depends not only on the price the user is willing to accept, but also on the cost of the infrastructure maintenance. In order to estimate this charge, we propose a tariff function that depends on the slack factor. We first take into account that the provider needs to recover the maintenance cost from the execution of jobs. We assume that the provider pays a flat rate for the use/maintenance of the resources. The total maintenance cost of job processing co_t can be calculated using the expression

co_t = \frac{\sum_{j=1}^{n_r} p_j}{m} \cdot u_u \cdot m.

The cost per time unit u_co can be calculated as

u_co = \frac{co_t}{\sum_{j=1}^{n} p_j},

where \sum_{j=1}^{n_r} p_j is the sum of the processing times of all released jobs, u_u is the price per unit of maintenance, m is the number of machines, and \sum_{j=1}^{n} p_j is the sum of the processing times of the jobs executed by the algorithm. We consider that u_u is equal to 8.5 cents per hour, which is the price that Amazon EC2 charges for a small processing unit [15].

Fig. 12. Execution cost per hour

Figure 12 shows the execution cost per hour as the slack factor varies. As can be seen, the cost of processing jobs with a small slack factor is larger than that of executing jobs with a looser slack factor. Moreover, the costs are larger if fewer machines are used. The reason is that a system with fewer machines and a small slack factor rejects most of the jobs within a given interval, so the execution is costly. Therefore, configurations that execute more jobs have lower costs per execution time unit. A profit is generated if the price charged per time unit is set above this cost.
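
A direct transcription of these two expressions, with helper names of our own choosing, might look as follows.

    def cost_per_time_unit(released_work, accepted_work, num_machines, u_u=0.085):
        """Maintenance-recovery cost per accepted processing time unit.

        released_work: sum of processing times of all released jobs (over J_r)
        accepted_work: sum of processing times executed by the algorithm (over J)
        u_u: flat use/maintenance rate per machine-hour (8.5 cents, as in the paper)
        """
        co_t = (released_work / num_machines) * u_u * num_machines  # total maintenance cost
        return co_t / accepted_work                                 # u_co

    # Example with hypothetical workload figures: 10000 released hours,
    # 7000 accepted hours, 2 machines -> roughly 0.12 USD per accepted hour
    print(cost_per_time_unit(10_000.0, 7_000.0, num_machines=2))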

5 Conclusions and Future Work

The use of Service Level Agreements (SLAs) is a fundamentally new approach to job scheduling. According to this approach, scheduling is based on the satisfaction of QoS constraints. The main idea is to provide different levels of service, each addressing a different set of customers. While a large number of service levels leads to high flexibility for customers, it also produces a significant management overhead. Hence, a suitable tradeoff must be found and adjusted dynamically, if necessary. While theoretical worst-case IaaS scheduling models are beginning to emerge, fast statistical techniques applied to real data are effective, as has been shown empirically.

In this paper, we presented an experimental study of two greedy acceptance algorithms, namely SSL-SM and SSL-PM, with known worst-case performance bounds. They are based on an adaptation of the preemptive EDD algorithm for job scheduling with different service levels on different numbers of machines. Our study results in several contributions. Firstly, we identified several service levels to make scheduling decisions with respect to job acceptance; secondly, we considered and analyzed two test cases, on a single machine and on parallel machines; thirdly, we estimated the cost function for different service levels; finally, we showed that the slack factor can be dynamically adjusted in response to changes in the configuration and/or the workload. To this end, the past workload within a given time interval can be analyzed to determine an appropriate slack factor. The time interval for this adaptation depends on the dynamics of the workload characteristics and the IaaS configuration.

Though our model of IaaS is simplified, it is still a valid basic abstraction of SLAs that can be formalized and treated automatically. In this paper, we explored only a few scenarios of using SLAs. IaaS clouds are usually large-scale and vary significantly. It is not possible to satisfy all QoS constraints from the service provider perspective if a single service level is used. Hence, a balance between the number of service levels and the number of resources needs to be found and adjusted dynamically. A system can have several specific service levels (e.g., Bronze, Silver, Gold) and algorithms to keep the system within the QoS specified in the SLA. However, further study of algorithms for multiple service classes and of resource allocation algorithms is required to assess their actual efficiency and effectiveness. This will be the subject of future work aimed at a better understanding of service levels in IaaS clouds.
Moreover, other scenarios of the problem with different types of SLAs and with workloads combining jobs with and without SLAs still need to be addressed. Also, as future work, we will consider the elasticity of slack factors in order to increase profit while providing better QoS to users.

References

1. Garg, S.K., Gopalaiyengar, S.K., & Buyya, R. (2011). SLA-based Resource Provisioning for Heterogeneous Workloads in a Virtualized Cloud Datacenter. 11th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP'11), Melbourne, Australia, 371-384.
2. Wu, L., Garg, S.K., & Buyya, R. (2011). SLA-based admission control for a Software-as-a-Service provider in Cloud computing environments. 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2011), Newport Beach, CA, USA, 195-204.
3. Patel, P., Ranabahu, A., & Sheth, A. (2009). Service Level Agreement in Cloud Computing (Technical Report). Ohio Center of Excellence in Knowledge-enabled Computing.
4. Andrieux, A., Czajkowski, K., Dan, A., Keahey, K., Ludwig, H., Nakata, T., Pruyne, J., Rofrano, J., Tuecke, S., & Xu, M. (2004). Web Services Agreement Specification (WS-Agreement), GFD-R-P.107. Global Grid Forum.
5. Review and summary of cloud service level agreements. (n.d.). Retrieved from http://www.ibm.com/developerworks/cloud/library/cl-rev2sla.html.
6. Wu, L., Garg, S.K., & Buyya, R. (2011). SLA-Based Resource Allocation for Software as a Service Provider (SaaS) in Cloud Computing Environments. 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2011), Newport Beach, CA, USA, 195-204.
7. Freitas, A.L., Parlavantzas, N., & Pazat, J.L. (2011). Cost Reduction Through SLA-driven Self-Management. Ninth IEEE European Conference on Web Services (ECOWS), Lugano, Switzerland, 117-124.
8. Silaghi, G.C., Şerban, L.D., & Litan, C.M. (2010). A Framework for Building Intelligent SLA Negotiation Strategies under Time Constraints. Economics of Grids, Clouds, Systems, and Services, Lecture Notes in Computer Science, 6296, 48-61.
9. Macías, M., Smith, G., Rana, O., Guitart, J., & Torres, J. (2010). Enforcing Service Level Agreements Using an Economically Enhanced Resource Manager. Economic Models and Algorithms for Distributed Systems, Autonomic Systems, 109-127.
10. Baruah, S.K. & Haritsa, J.R. (1997). Scheduling for overload in real-time systems. IEEE Transactions on Computers, 46(9), 1034-1039.
11. Schwiegelshohn, U. & Tchernykh, A. (2012). Online Scheduling for Cloud Computing and Different Service Levels. IEEE 26th International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Shanghai, China, 1067-1074.
12. Gupta, B.D. & Palis, M.A. (2001). Online real-time preemptive scheduling of jobs with deadlines on multiple machines. Journal of Scheduling, 4(6), 297-312.
13. Feitelson, D. (2008). Parallel Workloads Archive.
14. Iosup, A., Li, H., Jan, M., Anoep, S., Dumitrescu, C., Wolters, L., & Epema, D.H. (2008). The Grid Workloads Archive. Future Generation Computer Systems, 24(7), 672-686.
15. Amazon Services. (2013). Amazon EC2 Pricing. Retrieved from http://aws.amazon.com/ec2/pricing/.

Anuar Lezama Barquet obtained a degree in Electric and Electronic Engineering from the National Autonomous University of Mexico (UNAM). He received his M.S. in Computer Science from the CICESE Research Center in 2012. His interests include parallel computing, scheduling, and cloud computing.

Andrei Tchernykh is a researcher at the Computer Science Department, CICESE Research Center, Ensenada, Baja California, Mexico. From 1975 to 1990 he was with the Institute of Precise Mechanics and Computer Technology of the Russian Academy of Sciences (Moscow, Russia). He received his Ph.D. in Computer Science in 1986. At CICESE, he is the coordinator of the Parallel Computing Laboratory. He is a member of the National System of Researchers of Mexico (SNI), Level II. He leads a number of national and international research projects. He has served as a program committee member of several professional conferences and as a general co-chair of international conferences on parallel computing systems. His main research interests include scheduling, load balancing, adaptive resource allocation, scalable energy-aware algorithms, green grid and cloud computing, eco-friendly P2P scheduling, multi-objective optimization, scheduling in real-time systems, computational intelligence, heuristics, metaheuristics, and incomplete information processing.

Ramin Yahyapour is executive director of the GWDG, University of Göttingen. He has done research in Clouds, Grids, and Service-oriented Infrastructures for several years. His research interests are in resource management. He is a steering group member and on the Board of Directors of the Open Grid Forum. He has participated in several national and European research projects. He is also a scientific coordinator of the FP7 IP SLA@SOI and was a steering group member in the CoreGRID Network of Excellence.

Article received on 22/2/2013; accepted on 1/8/2013.