An Energy Aware Framework for Virtual Machine Placement in Cloud Federated Data Centres


Corentin Dupont* (CREATE-NET, Trento, Italy, cdupont@create-net.org), Giovanni Giuliani (HP Italy Innovation Centre, Milan, Italy, giuliani@hp.com), Fabien Hermenier (OASIS Team, INRIA CNRS I3S, University of Sophia-Antipolis, fabien.hermenier@inria.fr), Thomas Schulze (University of Mannheim, Mannheim, Germany, schulze@informatik.uni-mannheim.de), Andrey Somov (CREATE-NET, Trento, Italy, asomov@create-net.org)

ABSTRACT

Data centres are powerful ICT facilities which constantly evolve in size, complexity, and power consumption. At the same time, users' and operators' requirements become more and more complex. However, existing data centre frameworks do not typically take energy consumption into account as a key parameter of the data centre's configuration. To lower the power consumption while fulfilling performance requirements, we propose a flexible and energy-aware framework for the (re)allocation of virtual machines in a data centre. The framework, being independent from the data centre management system, computes and enacts the best possible placement of virtual machines based on constraints expressed through service level agreements. The framework's flexibility is achieved by decoupling the expressed constraints from the algorithms using the Constraint Programming (CP) paradigm and programming language, basing ourselves on a cluster management library called Entropy. Finally, the experimental and simulation results demonstrate the effectiveness of this approach in achieving the pursued energy optimization goals.

Categories and Subject Descriptors: D.4.7 [Organization and Design]: Distributed systems

General Terms: Algorithms, Design, Performance, Experimentation.

Keywords: Constraint Programming, Cloud Computing, Data Centre, Resource Management, Energy Efficiency, Virtualization, Service Level Agreement.

1. INTRODUCTION

Data centres are ICT facilities aimed at information processing and at hosting computing/telecommunication equipment for scientific and/or business customers.
Until recently, data centre operation management has been entirely focused on improving metrics like performance, reliability, and service availability. However, due to the rise of service demands, data centres evolve in complexity and size. This, together with the continuous increase of energy cost, has prompted the ICT community to add energy efficiency as a new key metric for improving data centre facilities. This trend was further boosted by the acknowledgement that the ICT sector's carbon emissions are increasing faster than in any other domain [1]. Therefore researchers and IT companies have been solicited to find energy-aware strategies for the operation of data centres [2]. To tackle this problem, a number of energy-aware approaches have recently been proposed in the literature and in research projects, e.g. workload consolidation [3][5], optimal placement of workload [6], scheduling of applications [1][7], detection of more power-efficient servers [8], and the reduction of power consumption by cooling systems [4]. It should be noted, however, that most of the energy-aware approaches and resource management algorithms for data centres consider only specific research problems and integrate typical constraints without taking some important factors into account:

- Data centres have complex and quickly changing configurations;
- Data centres are not homogeneous in terms of performance, management capabilities, and energy efficiency;
- Data centres must comply with a number of users' and operators' requirements.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. e-Energy 2012, May, Madrid, Spain. Copyright 2012 ACM.
Due to the growing number of constraints and their complexity, we need to separate them from the resource management algorithm(s) to secure a two-fold objective:

- Being able to add or modify a constraint without changing the algorithms;
- Being able to test and activate a new algorithm without having to re-implement every constraint within it.

In this paper we propose and discuss a flexible energy-aware framework to address the problem of energy-aware allocation/consolidation of Virtual Machines (VMs) in a cloud

* The authors are listed in alphabetic order.

data centre. The core element of the framework is the optimizer, which is able to deal with (1) Service Level Agreement (SLA) requirements, (2) different data centres interconnected in a federation, each with their own characteristics, as well as (3) two different end objectives, namely minimizing energy consumption or CO2 emissions. This framework is developed and tested within the FIT4Green project [23], funded by the Commission of the European Union, whose main goal is reducing the direct energy consumption of the ICT resources of a data centre by 20%. In practice, it relies on the Constraint Programming (CP) paradigm and the Entropy open source library [13] to compute the energy-aware placement of VMs. This approach enables the adoption of new constraints in a flexible manner (see Figure 1) without redesigning the underlying algorithm.

Figure 1. Constraint programming: the reuse of constraints in the algorithms and vice versa.

The CP paradigm provides a domain-specific language to express the constraints. In our case it is designed specifically for expressing data centre constraints. By using this language we can achieve the important goal of separating two different realms: (1) the realm of the data centre domain-specific knowledge, expressed in the constraints, and (2) the realm of the optimization knowledge, expressed in the algorithms. The optimizer aims at computing a configuration, i.e. an assignment of the VMs to the nodes, that minimizes the overall energy consumption of a federation of data centres while satisfying the different SLAs. In practice, the optimizer uses a power objective model to estimate the energy consumption of a configuration and extends Entropy, a flexible consolidation manager based on Constraint Programming, to compute the optimized configurations. This paper is organized as follows: Section 2 introduces related work on the subject. Section 3 presents the proposed software architecture, which includes the power objective model, the search heuristics, and the translation of the SLAs into constraints.
The experimental results obtained in a cloud data centre testbed within Hewlett Packard premises, as well as a complementary scalability evaluation, are discussed in Section 4. Finally, we conclude the paper and discuss our future work in Section 5.

2. RELATED WORK

In this section we briefly review recent middleware and frameworks for the translation of SLAs into constraints, and heuristic-based approaches. Besides, we overview some work based on the CP paradigm aimed at power saving in data centres.

2.1 SLA CONSTRAINTS

SLAs are the textual contracts signed by a customer and a service provider that guarantee a certain Quality of Service (QoS). In the context of data centre services these encompass, among other terms, hardware-related descriptions, performance-related metrics and availability guarantees, as well as prices and penalties. Besides the textual document written in natural language, SLAs are nowadays more and more dealt with in a machine-readable manner. Reasons for this can be seen in the increase of complexity as well as, driven by the development of agent-based technologies, a much higher automation of the bargaining process. However, one downside of this evolution is the rise of a set of highly heterogeneous technologies and standards. XML schemas, RDF and other ontological languages have been defined to be used in the context of all parts of the SLA life-cycle. For monitoring and billing, several middleware systems (e.g. Globus Toolkit, Unicore) and resource managers (e.g. PBSPro, Torque, Maui) have been developed over the last decades. Furthermore, frameworks capturing the whole SLA life-cycle (e.g. BrEIN, SLA@SOI, NextGrid SLA Framework) were created, all implementing SLA capabilities in their own way. In order to be, in the broadest sense, platform independent with our approach, we therefore have to find a solution coping with this heterogeneity.

2.2 SEARCH HEURISTICS

The problem of consolidating and rearranging the allocation of virtual machines in a datacenter in an energy-efficient manner is described in [15].
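As a concrete illustration of the power-aware placement strategy discussed next — each VM is assigned to the server that minimizes the resulting overall power — the following sketch uses an invented two-family server catalogue and a simple linear power model; it is in the spirit of [15], not that work's actual implementation:

```python
# Power-aware first-fit in the spirit of [15]: hypothetical servers,
# capacities, and a linear power model (idle + per-VM cost).

SERVERS = {                      # idle power (W), power per VM (W), capacity
    "old":       {"idle": 200.0, "per_vm": 30.0, "cap": 4},
    "efficient": {"idle": 120.0, "per_vm": 20.0, "cap": 4},
}

def placement_power(load):
    """Total power: idle cost counts only for servers hosting >= 1 VM."""
    return sum(SERVERS[s]["idle"] + n * SERVERS[s]["per_vm"]
               for s, n in load.items() if n > 0)

def power_aware_ffd(num_vms):
    """Assign each VM to the server minimizing the resulting total power."""
    load = {s: 0 for s in SERVERS}
    for _ in range(num_vms):
        candidates = [s for s in SERVERS if load[s] < SERVERS[s]["cap"]]
        best = min(candidates,
                   key=lambda s: placement_power({**load, s: load[s] + 1}))
        load[best] += 1
    return load

print(power_aware_ffd(4))   # all 4 VMs packed on the efficient server
```

Note that the greedy, per-VM choice is exactly the local-optimum behaviour criticized below: on heterogeneous hardware it need not reach the global optimum.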
The problem is known to be NP-hard [16], with a large solution space. Even if one suppresses all constraints from the problem, the size of the solution space is equal to the number of VMs to the power of the number of involved servers, which is a huge number leading to a combinatorial explosion. For example, if the number of servers is 10^3 and the number of VMs is 10^4, then the solution space without considering any constraints is (10^4)^(10^3). In the heuristic proposed in [15], for each VM to be moved we find the appropriate server that minimizes the current overall power consumption of the data centre. This is similar to the First Fit Decreasing (FFD) algorithm which has been used in previous works [17][18][19], with the addition of power-awareness when choosing the server. While these types of heuristics are fast, in many situations they cannot lead to the optimal solution, unless the data centre is homogeneous. The heuristics search for a solution by finding a local optimum for each VM, which is known to not always lead to a global optimum for the datacenter.

2.3 CP-BASED FRAMEWORKS

The framework presented in [20] addresses the Service Consolidation Problem (SCP) in a data centre using the CP approach. The rule-based constraints are assessed by the Comet programming language. This framework, however, focuses on the experimental evaluation of the time necessary to find a feasible solution using CP and Integer Linear Programming (ILP) approaches. The obtained results show that the CP paradigm is more effective for finding a solution for a large number of constraints and instances with respect to time. In [13], CP is applied to solve the bin repacking scheduling problem. The main idea of this work is to schedule the transitions of VMs considering both placement constraints and resource requirements. In contrast to [13], we allow a user/operator to derive the constraints automatically, starting from existing SLA requirements. Furthermore, the objective of saving energy is

stated explicitly in the model used by our framework, by using a runtime simulation and evaluation of the energy consumption of every component of a data centre. CP-based approaches were also proposed to solve the data migration [21] and load rebalancing [22] problems. However, all the listed works model the specific constraints directly. The usage of constraint programming technology for SLA negotiation and validation has recently been investigated in a variety of approaches. The concurrent constraint pi-calculus [11], for instance, provides mechanisms to negotiate and validate contracts by extending the nominal process calculi. Another approach was introduced by [10], extending the soft concurrent constraint language in order to facilitate SLA negotiation. However, the focus of all this research is the negotiation process.

3. FRAMEWORK DESIGN

In this section, we first describe the global design of the framework. We then present the translation of SLAs into constraints, followed by the description of the power objective model, and discuss the heuristics we use to increase the scalability of our framework and the quality of the computed configurations. The optimizer based on the CP engine has, in our case, several inputs:

- The complete current data centre configuration.
- A number of constraints, described in Section 3.2.
- An objective function, in our case called the Power Objective, described in Section 3.3.
- A number of search heuristics, described in Section 3.4.

Figure 2. Framework architecture.

In the following, we outline the various input elements.

3.1 FRAMEWORK OVERVIEW

The framework extends the Entropy consolidation manager to compute an energy-efficient reconfiguration plan, while Entropy itself is not energy-aware. It relies on the VM Repacking Scheduling Problem (VRSP), an abstract reconfiguration algorithm modeling the current memory and CPU demand of the VMs, the servers' state, and the future placement of the VMs. The VRSP can then be specialized to fit the datacenters' and the VMs' specificities. The flexibility of Entropy comes from its usage of CP [24] to compute the new configuration and the reconfiguration plan. CP allows modeling and solving combinatorial problems where the problem is modeled by stating constraints (logical relations) that must be satisfied by its solution. Given sufficient time, the CP solving algorithm is guaranteed to determine a globally optimal solution, if one exists. The solving algorithm is independent of the constraints composing the problem and of the order in which they are provided. This enables the framework to handle the placement constraints and the power model independently from each other. In practice, Entropy embeds the CP solver Choco [25]. Figure 2 depicts the composition mechanism of the framework. Each call to the framework leads to 1) the generation of one Entropy VRSP based on the current configuration, 2) the translation and the injection of the external constraints, 3) the insertion of the power model, and 4) the insertion of the heuristics to guide the solver efficiently to a solution providing an optimized energy usage. A timeout can be provided to Entropy to make it stop solving after a given time. When no timeout is specified, Entropy computes and returns the reconfiguration plan that leads to the best solution according to the power model and the placement constraints. Otherwise, it returns the best solution computed so far.

3.2 SLA CONSTRAINTS

FIT4Green is implemented as a plug-in extending the existing management framework. Thus it does not cope with SLA creation, bargaining, execution and validation per se. However, in order not to violate the SLAs during the optimization process, information needs to be injected into the model. Thus, in our approach the aforementioned problem of the technological diversity of different management frameworks needed to be dealt with, too. The solution was to define an XML schema on a low, technical level of abstraction. This in turn is used by the DC operator to supply the needed constraints in a both human- and machine-readable format.

3.2.1 SLA Schema Creation

As a starting point (see Figure 3) for the definition of the schema we have used the findings of [9], where the authors have analyzed nearly fifty SLAs and extracted common metrics.

Figure 3. The development of constraints from natural language SLAs.

In a first step we deconstructed those high-level Service Level Objectives (SLOs) concerning their impact on low-level technical metrics. As a result we have identified four main categories: Hardware-, QoS-related-, Availability-, and Additional metrics. Within the first category, all hardware-related metrics like CPU frequency or RAM space are captured.
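The composition mechanism above — one problem instance per call, external constraints injected, an objective and heuristics added, and an optional timeout after which the best solution found so far is returned — can be mimicked with a deliberately naive stand-in for the CP engine. All names and the exhaustive search are ours for illustration; Entropy's real API differs:

```python
import itertools
import time

def optimize(config, constraints, objective, heuristic, timeout=None):
    """Toy stand-in for the CP engine: enumerate placements of VMs on
    servers (servers tried in heuristic order), keep the best admissible
    one, and stop early if the optional timeout expires."""
    vms, servers = config["vms"], config["servers"]
    deadline = None if timeout is None else time.monotonic() + timeout
    best, best_cost = None, float("inf")
    for combo in itertools.product(sorted(servers, key=heuristic),
                                   repeat=len(vms)):
        placement = dict(zip(vms, combo))
        if all(c(placement) for c in constraints):
            cost = objective(placement)
            if cost < best_cost:
                best, best_cost = placement, cost
        if deadline is not None and time.monotonic() > deadline:
            break                     # return the best solution so far
    return best, best_cost

config = {"vms": ["vm1", "vm2"], "servers": {"s1": 150.0, "s2": 100.0}}
power = lambda p: sum(config["servers"][s] for s in set(p.values()))
no_colocation = lambda p: len(set(p.values())) == len(p)   # injected constraint
best, cost = optimize(config, [no_colocation], power,
                      heuristic=config["servers"].get)
print(best, cost)
```

The point of the sketch is the separation of concerns: the search loop never inspects what the constraints or the objective mean, so either can be swapped without touching the algorithm.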

The second category (QoS) encapsulates factors like the maximum number of VMs that share a single CPU core, or the bandwidth. In modern data centres these metrics are often defined by the Capacity Planning Team (CPT), who gain their knowledge from past experience. Guaranteeing a certain service execution time, for instance, needs extensive knowledge about the process itself and its interplay with hardware resources. However, if past experience has shown that the CPU is the bottleneck, the CPT can decide to restrict the number of VMs per core. [14] has pointed out that automatic transformations of SLOs into technical SLAs are also possible in specific situations, eliminating the needed involvement of the CPT. This technique is in general also applicable in combination with our approach. The availability of a service can in theory be used to shut down services for specific time periods. However, in practice it heavily depends on the nature of the contract whether a service provider really wants to make extensive use of these metrics. If service availability, for instance, is set to 99.9%, the provider might not want to shut down the service for 0.1% of the time on purpose, as this might scare away his customers. Nevertheless, in a different scenario where a service can be shut down during weekends, the provider will certainly make use of it. Therefore, the third category was added to the XML schema. Last, the category of additional metrics contains, for example, guarantees concerning access possibilities (e.g. VPN) or the guarantee of a dedicated server. Dedicated server in this context means that only one VM can be hosted on a server, and is not related to a set of VMs. Whether a server supports a special access possibility is captured within the FIT4Green meta-model and is therefore handled as an attribute of a server. To conclude, the created technical XML schema can deal with all commonly known SLA metrics. However, as an addition we have created a second, low-level placement schema with which the data centre operator can easily add low-level constraints on the fly, without writing a single line of code.
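As an illustration of the kind of low-level constraint an operator could add on the fly, the fragment below is hypothetical: the element and attribute names are invented for exposition and do not reproduce the actual FIT4Green schema.

```xml
<!-- Illustrative placement constraints; names invented, not the real schema. -->
<placementConstraints>
  <!-- restrict a VM to a set of servers (fence-style) -->
  <fence vm="vm-billing-01">
    <server>srv-blade-03</server>
    <server>srv-blade-04</server>
  </fence>
  <!-- dedicated server: at most one VM (capacity-style) -->
  <capacity server="srv-blade-07" maxVMs="1"/>
</placementConstraints>
```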
This second schema is based on the built-in placement constraints of Entropy [13].

3.2.2 Constraint Programming and SLAs

In order to be used within Entropy, the technical constraints provided by the DC operator need to be translated. In general, two approaches on different levels of abstraction can be used for this purpose:

- the higher-level placement constraints with a pre-selection process, and
- the low-level "posting" method, which directly injects the rules and constraints into the Choco solver for consideration.

The first technique was used for most of the hardware-related metrics. In a pre-selection process, a set of servers is extracted that satisfies the hardware requirements; this set is then used in combination with the placement constraint "fence", which allows an allocation to be performed only on this set of servers. For the other metrics contained in the technical SLA, the low-level "posting" method is used in combination with the Entropy model. It is more powerful and thus suitable for the creation of more complex constraints. Table 1 presents the different CP approaches and their correlation with the technical constraints. Besides, the number of lines of code needed for the implementation of each constraint is provided. Here, the number in brackets represents the Lines of Code (LoC) needed, on the one hand, to transform the model used in FIT4Green into the one used in Entropy and, on the other hand, for the pre-selection process. Those generic methods are used for a variety of constraints and are therefore listed separately.

Table 1. CP Approach for Technical Constraints.

| Category | Constraint | Approach | LoC |
| Hardware | HDD | Choco + ext. Entropy | 121+(25) |
| Hardware | CPUCores | Entropy ("fence") | 0+(25) |
| Hardware | CPUFreq | Entropy ("fence") | 0+(25) |
| Hardware | RAM | Choco + ext. Entropy | 123+(25) |
| Hardware | GPUCores | Entropy ("fence") | 0+(25) |
| Hardware | GPUFreq | Entropy ("fence") | 0+(47) |
| Hardware | RAIDLevel | Entropy ("fence") | 0+(47) |
| QoS | MaxCPULoad | Choco + ext. Entropy | 90+(25) |
| QoS | MaxVLoadPerCore | Choco + ext. Entropy | 109+(25) |
| QoS | MaxVCPUPerCore | Choco + ext. Entropy | 124+(25) |
| QoS | Bandwidth | Entropy ("fence") | 0+(49) |
| QoS | MaxVMperServer | Entropy ("capacity") | 0+(25) |
| Availability | PlannedOutages | Choco + ext. Entropy | Future Work |
| Additional Metrics | Availability | Choco + ext. Entropy | Future Work |
| Additional Metrics | Dedicated Server | Entropy ("capacity") | 0+(25) |
| Additional Metrics | Access | Entropy ("fence") | 0+(25) |

3.3 POWER OBJECTIVE MODEL

As a basis for our model we use a component called the Power Calculator, which is also developed within the FIT4Green project and is described in [15]. When provided with a description of the datacenter's physical and dynamic elements, this component is able to simulate the power consumption of every part of the data center at a very fine level of granularity, in real time. While it is perfectly possible to call the Power Calculator component during the search for a reconfiguration plan, this has proven to be inefficient for our purpose. This is due to the complexity of the problem (NP-hard), as stated above. Here, we need to avoid calling the Power Calculator each time we are testing the placement of a VM on a server, because this is very time consuming. As a result, the CP engine must use a static version of the Power Calculator. This means that the necessary values are retrieved and stored in a vector before, and not during, the search; the engine therefore has all parameters directly at hand. In order to benefit from the fine granularity provided by the Power Calculator and at the same time gain from the advantages of CP programming, we have used the following approach in our work. In a first step we have grouped all servers s_i into families S_k that share similar characteristics, where i ∈ I is the index of the server in the data centres, and k ∈ K is the index of the family. The VMs v_j are also grouped into families V_l that share similar characteristics, where j ∈ J is the index of the VM in the data centres, and l ∈ L is the index of the family. Note that such an assumption is possible since it is common for a data centre to have families of similar equipment, and because VMs often share similar run-time characteristics as well.
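The pre-selection plus fence combination described above can be sketched as follows; the server catalogue and function names are illustrative, and Entropy's actual fence constraint operates on its own model rather than on plain dictionaries:

```python
# Sketch of pre-selection for hardware metrics: servers satisfying the
# SLA's hardware terms are collected, then a fence-style constraint
# restricts the VM to that set. Catalogue and names are hypothetical.

SERVERS = {
    "s1": {"cpu_ghz": 2.3, "ram_gb": 16},
    "s2": {"cpu_ghz": 1.8, "ram_gb": 32},
    "s3": {"cpu_ghz": 2.7, "ram_gb": 8},
}

def pre_select(requirements):
    """Return the set of servers meeting every hardware requirement."""
    return {name for name, hw in SERVERS.items()
            if all(hw[k] >= v for k, v in requirements.items())}

def fence(vm, allowed):
    """Placement constraint: vm may only be allocated inside `allowed`."""
    return lambda placement: placement.get(vm) in allowed

allowed = pre_select({"cpu_ghz": 2.0, "ram_gb": 12})
constraint = fence("vm1", allowed)
print(allowed)                    # only s1 satisfies both requirements
print(constraint({"vm1": "s1"}))  # admissible placement
print(constraint({"vm1": "s2"}))  # rejected placement
```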
Furthermore, we have defined a vector H_i = <h_i1, ..., h_ij, ..., h_in> for each server s_i that denotes the set of VMs assigned to that server, where h_ij = 1 if the node s_i is hosting the VM v_j, and 0 otherwise. The whole array H therefore represents how the VMs are assigned to servers in the different data centres. Now, the power consumed by server i depends on its physical components as well as on the set of VMs present:

  P_i = f(s_i, H_i · v)    (1)

Here, f is the power consumption function provided by the Power Calculator, the dot is the vectorial product, and v is the vector of all VMs. Thus, H_i · v is the vector representing all the VMs located on server i. Next, we extend the function by a factor representing the fact that, if there are no VMs on a server, it can be switched off, meaning that it no longer consumes any energy. For this purpose let X_i be a variable whose value is 1 if there is at least one VM on the server, and 0 otherwise:

  X_i = 1 if ∃ j ∈ J : h_ij = 1, and 0 otherwise    (2)

Then:

  P_i = X_i * f(s_i, H_i · v)    (3)

In the next step, the static version of the Power Calculator is included. Here, the function f is split into two parts:

- The calculation of the idle power of a server in the family S_k (i.e. power without any VM running), called P_idle(S_k).
- The calculation of the power consumed by a VM in the family V_l when running on a server in the family S_k, called P_VM(S_k, V_l).

The idle power as well as the power per VM for each server can be computed before the search. Let α denote the vector of the idle powers of the families of servers, and β the array of the power consumption per VM in each family:

  α_k = P_idle(S_k)    (4)

  β_kl = P_VM(S_k, V_l)    (5)

Then we can obtain the static version of the server's power by using the following equation:

  P_i = X_i * α_k + Σ_{j ∈ J, l : v_j ∈ V_l} h_ij * β_kl,  for s_i ∈ S_k    (6)

When P_0 is the power of the data centre before the execution of the plan, as computed by the Power Calculator, the power after the plan and the power saved are calculated as:

  P_1 = Σ_{i ∈ I} P_i    (7)

  P_save = P_0 − P_1    (8)

As a last step, to obtain the global energy figure of our solution, we need to integrate the cost of the network movements. For this purpose we first need to know which VMs are moving.
This is done by subtracting the two matrices H_0 (initial state of the data centre) and H_1 (final state of the data centre) and analyzing the resulting matrix. We obtain a vector of the moves M = <(S_from, S_to)_VM1, ..., (S_from, S_to)_VMk, ..., (S_from, S_to)_VMn>, where S_from and S_to are the source and destination servers of the VM, respectively, and k ∈ [1..n] is the index of the VM. We can retrieve the energy cost of a move from the Power Calculator, providing the characteristics of the source server, the destination server and the VM. This cost includes the energy spent by moving the VM through the network, but can also include the overhead incurred in terms of CPU load and RAM I/O:

  Emove_k = EnergyCost(s_i, s_j, v_k)    (9)

Here, i and j are the indexes of the source and destination servers of the VM v_k, respectively. We obtain the energy cost of the plan by summing the cost of every movement:

  Emove = Σ_k Emove_k    (10)

If we know the end time of a VM, we can compute its remaining life time (LT). This information can be combined with the cost of the network and equation (8) to get the total energy saving that we can expect by moving a VM:

  E_k = (P0_k − P1_k) * LT_k − Emove_k    (11)

The global energy saved by the plan, at federation level, is therefore:

  E_total = Σ_k E_k    (12)

In practice, these energy formulas are written in the Choco modeling language within the Power Objective component of our framework.

3.4 HEURISTICS

As mentioned previously, computing a solution for the VRSP using the Optimizer may be time consuming for large infrastructures, as selecting a satisfying server for each running VM while maximizing the infrastructure's energy efficiency is an NP-hard problem. A CP solver such as Choco provides a customizable branching heuristic to guide the solver to a solution. A branching heuristic indicates an order in which to instantiate the variables and a value to try for each variable. For a given problem, the branching heuristic helps the solver by indicating the variables that are critical to computing a solution and the values that are supposed to be the best.
A branching heuristic is thus highly coupled with the exact objective of the problem, as it relies on the variables' semantics, an information that is initially outside the CP solver's concern. For the Optimizer, the branching heuristic helps by instantiating the variables denoting the VM placement, in priority-descending order, to values that point to energy-efficient servers. In practice, VMs are sorted in increasing order of their energy efficiency, and the solver tries to place each VM on the server providing the best energy gain. The energy gain is provided by the variable E_k described in the last section. As this metric includes both the energy cost of the VM on its destination server and the energy cost related to its migration, this approach also tends to reduce the number of migrations to a minimum, providing a fast reconfiguration process.
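A small numeric sketch of the static model of Section 3.3, with invented α and β values, shows how equations (6)–(8) evaluate a consolidation:

```python
# Hypothetical idle powers alpha[k] (W) per server family and per-VM
# powers beta[k][l] (W) per (server family, VM family), as in Section 3.3.
alpha = {"blade": 150.0, "rack": 220.0}
beta = {"blade": {"small": 20.0, "large": 45.0},
        "rack":  {"small": 25.0, "large": 55.0}}

def server_power(family, hosted):
    """Equation (6): X_i * alpha_k + sum of beta_kl over hosted VM families."""
    x = 1 if hosted else 0            # X_i: an empty server is switched off
    return x * alpha[family] + sum(beta[family][l] for l in hosted)

def federation_power(servers):
    """Equation (7): P_1 as the sum over every server of the federation."""
    return sum(server_power(fam, vms) for fam, vms in servers)

p0 = federation_power([("blade", ["small", "large"]), ("rack", ["small"])])
p1 = federation_power([("blade", ["small", "large", "small"]), ("rack", [])])
print(p0, p1, p0 - p1)   # equation (8): P_save from emptying the rack server
```

Because α and β are plain pre-computed numbers, the solver evaluates such a configuration with additions only, instead of calling the Power Calculator during the search.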

4. FRAMEWORK EVALUATION

In this section we first evaluate the energy saving due to our approach on a cloud testbed hosting a workload inspired by a corporation. We then evaluate the scalability of the Optimizer.

4.1 Experiments on the Cloud Testbed

In order to validate the proposed approach in an environment as close as possible to a cloud data centre, a trial has been performed at the Hewlett Packard (HP) Italy Innovation Center facilities, inside the Cloud Computing Initiative lab environment. The facility is used to offer hands-on experience on a cloud demo infrastructure and to set up Proof of Concept (PoC) configurations. Two different workloads have been set up for an Infrastructure-as-a-Service private cloud: the first one simulates a typical week's load pattern, while the second, more challenging one focuses on a single work day.

4.1.1 Lab trial resources

Inside the HP Italy Innovation Center, two racks, each with an HP C7000 blade enclosure, have been used to simulate two separate data centres. The first one (DC1) has 4 BL460c blades dedicated to hosting Virtual Machines using the VMware ESX v4.0 native hypervisor, and 3 additional blades for Cluster and Cloud Control and for the scheduler of the workload tasks (VM creation and load generation). The second one (DC2) hosts 3 BL460c blades for Virtual Machines, again using the VMware ESX v4.0 native hypervisor, and 2 other blades for Cluster Control and the Data Collector of the Power and Monitoring System. The racks are connected to a LAN and use a SAN device to store all data, including VM images. The Virtual Connect modules inside the blade enclosures offer a fast internal 1Gb network.

Figure 4. The logical view of the main hardware resources.

The characteristics of the processors in the two racks/enclosures are listed in Table 2.

4.1.2 Workload

The system has been tested using two synthetic workloads built to replay with high accuracy the load patterns recorded in a real-case PoC.
Figure 5 represents the pattern of the total number of active virtual machines during a full week of work inside a small-medium size private cloud used by a corporation in Italy, during a Proof of Concept performed with the HP Innovation Center in Italy. The first synthetic test reproduces the 7 days compressed in time into 24 hours, while the second one focuses only on a single work day, reproduced in 12 hours. The weekly load pattern is depicted schematically (number of active VMs on the Y axis); the red box identifies the single work day used for the second test.

Table 2. Characteristics of the Racks/Enclosures.

| | Enclosure 1 | Enclosure 2 |
| Processor model | Intel Xeon E5520 | Intel Xeon E5540 |
| CPU frequency | 2.27GHz | 2.53GHz |
| CPUs & cores | Dual CPU, quad core | Dual CPU, quad core |
| RAM | 24GB | 24GB |

Figure 5. Schematic view of the weekly load patterns.

The first workload also considers week-ends with low load, and is an interesting approximation of a real case; the second one is more challenging and has therefore been used more extensively in the federated trial. The workload execution is performed through an open source scheduler application (Task Scheduler, running inside a VM on a blade server in DC1), while system and power monitoring is performed through the open source Collectd (inside another VM on a blade server in DC2). The explicit SLAs configured in the trial and mapped into constraints are related to the number of virtual CPUs per core ratio (2 in the trial) and to the description of the topology of the nodes in the federation. The parameters related to the policies applied to the data centres are the same on each cluster, i.e. always guarantee at least 3 free VM slots, where a VM slot is the necessary amount of free resources to accommodate a new VM, and keep at most 6 free VM slots.

4.1.3 Single Site Trial

The trial for the Single Site scenario has been performed using only the first rack (DC1) and both workloads.
The Task Scheduler allocates VMs, through Cloud and Cluster primitives, only on the nodes in DC1; data collection for power and system monitoring runs on a blade server in DC2. Table 3 shows the results in terms of the overall energy consumed by the node controllers (the servers with native hypervisors where VMs are allocated). Due to the lab-grade configuration, the number of cloud control servers (cloud and cluster controller, monitoring and scheduler) with respect to node controllers is far too high compared to a real cloud environment; therefore cloud control servers have been omitted from the computation to allow a clearer interpretation of the results. For test 1, the energy data refer to the average consumption per day of the 4 node controllers inside DC1.

Table 3. Single Site Trial.

| Scenario | Average day, week workload | Single work day workload |
| Without FIT4Green | – | – |
| With FIT4Green, no migration | 4867 (saving 19.2%) | 5938 (saving 10.3%) |
| With FIT4Green, using migration | 4592 (saving 23.8%) | 5444 (saving 17.7%) |

The trial shows an energy saving of approximately 24% for the average week workload, and almost 18% for the single work day workload. As expected, the second workload is more challenging than the first one; moreover, the effect of the VM migration capability on the optimization strategy is very important, especially in the most critical case. After the runs, the monitored system data are analyzed to double-check that the specified SLAs have not been violated.

4.1.4 Federated Sites Trial

The trial for the federated case has been performed using one data centre hosting a cluster of 4 nodes, and a second data centre hosting another independent cluster of 3 nodes. The workload for the first data centre is the same as in the single work day test of the single site case, while the workload for the second data centre is scaled by a factor of 3/4 (to cope with the smaller amount of computing resources) and has its peak shifted in time by approximately 1/24 of the time scale (1 hour in the 24-hours scenario), to simulate a slight work-time difference of the users of the second data centre.
Results have been collected in different configurations:

- Without FIT4Green, with independent allocation of the workload on the two DCs (clusters); each workload item has been statically pre-assigned to one cluster.
- With FIT4Green, with independent allocation of the workload on the two DCs (clusters).
- With FIT4Green, with dynamic allocation of the workload on the two DCs (clusters); when a workload item needs to be started, FIT4Green is queried to decide on which cluster to run it.
- With FIT4Green, with dynamic allocation of the workload on the two DCs (clusters) and optimized policies; in this case the buffer of free slots of each cluster has been reduced, capitalizing on the availability of additional resources in the other cluster: practically, the minimum VM slot number has been reduced to 2 and the maximum to 5 on each cluster, because the VM allocation can be satisfied by any one of the clusters.

Table 4 presents, for the different configurations, the numerical results in terms of the global energy consumed by each datacenter's node controllers (cluster nodes) and the total for the federation. In the case of FIT4Green Static Allocation each data centre is considered separately; in the dynamic case the allocation is decided based on the energy saving optimizations. The ability to use the federation as a single pool of resources at allocation time allows the saving to grow from 16.7% to 18.5%. Moreover, the tuning of the policies, reducing the amount of resources (min. VM slots) kept free to cope with load peaks at cluster level, allows the saving to grow up to 21.7%.

Table 4. Federated Sites Trial.

| Configuration | Data Centre 1 | Data Centre 2 | Energy for Federation |
| Without FIT4Green | – | – | – |
| With FIT4Green, Static Allocation | – | – | saving 16.7% |
| With FIT4Green, Dynamic Allocation | – | – | saving 18.5% |
| With FIT4Green, Optimized Policies | – | – | saving 21.7% |

4.1.5 Energy vs. Emissions Optimization

In the previous tests the two data centres were assumed to have exactly the same characteristics in terms of energy and emissions efficiency (as in reality, since they are co-hosted on the same site).
The goal is to evaluate the effectiveness of the optimizer when dealing with a federation of data centres that are heterogeneous with respect to energy and emissions efficiency. To simulate a scenario with data centres with different energy and emissions features, the last test has been run in two additional work modes, by modifying the Power Usage Effectiveness (PUE) and Carbon Usage Effectiveness (CUE) attributes of the data centre configuration in the meta-model:
- DC1 with PUE=2.1 and DC2 with PUE=1.8 (more efficient), optimizing for the total energy of the federation;
- DC1 with CUE=0.772 and DC2 with CUE=0.797, optimizing for the total emissions of the federation; the two CUE values simulate DC1 getting energy from Enel at 443 g CO2/kWh and DC2 getting energy from A2A at 368 g CO2/kWh.

Figure 6 reports the final test results for the various configurations in visual format, while Table 5 contains the corresponding numerical values.

Figure 6. Graphical Representation of the Trial Results.
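A minimal sketch of how these attributes enter the two objectives, using hypothetical helpers rather than the framework's actual model: total facility energy is ICT energy multiplied by PUE, and emissions scale with the carbon intensity of the grid feeding the site.

```python
# Hypothetical helpers (not the framework's meta-model classes) showing how
# PUE and grid carbon intensity drive the two optimization objectives.

def facility_energy_kwh(ict_kwh, pue):
    # PUE = total facility energy / ICT energy
    return ict_kwh * pue

def emissions_g(ict_kwh, grid_g_per_kwh):
    # CO2 attributable to a given amount of ICT energy
    return ict_kwh * grid_g_per_kwh

# The same 100 kWh of ICT load costs less facility energy on the DC with
# the better PUE (1.8 vs 2.1)...
print(round(facility_energy_kwh(100, 2.1), 1))  # 210.0
print(round(facility_energy_kwh(100, 1.8), 1))  # 180.0
# ...but fewer grams of CO2 on the DC fed by the cleaner grid:
print(emissions_g(100, 368))  # 36800
print(emissions_g(100, 443))  # 44300
```

This is why the two objectives can pull the placement in opposite directions: the energy objective favours the low-PUE site, while the emissions objective favours the low-carbon-intensity site.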

It is worth noticing that when FIT4Green optimizes for energy, it saves an additional 0.6% on top of the ICT energy optimization, because it capitalizes on the difference in energy efficiency of the two data centres by relatively loading DC2, which has the better PUE value, more. When optimizing for emissions, DC1 is relatively more loaded because it has the better emissions efficiency (lower CUE value), and the total improvement is 0.6% better than in the ICT energy optimization case.

Table 5. Energy vs. Emissions.

  Configuration                                       Total Energy    Total Emissions
  Without FIT4Green                                   (baseline)      (baseline)
  FIT4Green optimize ICT Energy, ignore PUE and CUE   saving 18.16%   saving 18.10%
  FIT4Green optimize Total Energy, considering PUE    saving 18.78%   saving 17.99%
  FIT4Green optimize Emissions, considering CUE       saving 17.68%   saving 18.72%

4.2 Scalability Evaluation

In order to show the correctness of our approach with a high number of servers and VMs, we ran experiments in simulation, as a complement to the experiments carried out on HP premises. Indeed, while the experiments have been done on real equipment for a low number of servers, large-scale experiments can, from a practical point of view, only be done through simulation. The simulation has been run on a DELL Latitude E6410 laptop with an Intel i7 dual-core processor at 2.67 GHz and 4 GB of RAM. For the simulation we varied the number of servers, with each server having 1 CPU with 4 cores at 1 GHz, 8 GB of RAM and 4 virtual machine instances already activated on it. Each VM has 1 virtual CPU used at 70%. The memory used by each VM is set to 100 MB. For each simulation run, we measured the time taken by the search to find a first solution valid for all the VMs given the constraints. We repeated the experiment 3 times: with one datacenter and no placement constraints, with one datacenter with an overbooking factor constraint set to 2, and with two federated datacenters. Table 6 details the placement constraints activated in each configuration.
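The simulated inventory just described can be reproduced with a small generator; the dictionary layout below is an illustrative assumption, not the framework's actual data model:

```python
# Sketch of the simulated inventory: each server has 1 CPU with 4 cores at
# 1 GHz and 8 GB of RAM, and starts with 4 VMs, each using 1 vCPU at 70%
# and 100 MB of RAM (field names are illustrative).

def make_datacentre(n_servers):
    return [
        {
            "id": s,
            "cores": 4, "core_mhz": 1000, "ram_mb": 8192,
            "vms": [{"vcpus": 1, "cpu_load": 0.70, "ram_mb": 100}
                    for _ in range(4)],
        }
        for s in range(n_servers)
    ]

dc = make_datacentre(700)
print(len(dc))                         # 700 servers
print(sum(len(s["vms"]) for s in dc))  # 2800 VMs, as in the largest run
```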
With one datacenter, no placement constraint is activated: the VMs are free to move within the datacenter; they just need to respect the default constraints, which enforce that a valid configuration is found with respect to the consumption of each VM in terms of CPU, RAM and HDD and the available resources on the servers. The overbooking factor set to 2 corresponds to a constraint called MaxVCPUPerCore, which enforces that no more than 2 virtual CPUs are attributed to one core. The configuration with 2 federated datacenters is translated into «Fence» constraints disallowing the VMs to migrate from one datacenter to another, which is usually not feasible in practice.

Table 6. Constraints activated in each configuration.

  #   Configuration                            Placement constraints activated
  1   1 datacenter                             none
  2   1 datacenter with overbooking factor=2   MaxVCPUPerCore constraint set on each server
  3   2 federated datacenters                  Fence constraint set on each VM

Table 7. Solving duration of the Optimizer to compute the first solution (in ms), for increasing numbers of servers, in the three configurations of Table 6.

Figure 7. Graphical representation of the Optimizer solving duration to compute the first solution.

First of all, the results presented in Table 7 show that for 700 servers and 2800 VMs the search completes in 6.7 minutes in the worst case. If the servers are split into two datacenters, the time drops to nearly 1 minute. Interestingly, adding new constraints does not increase the search time as one could expect: globally, the search times with the placement constraints activated are lower than those without. This is because adding new constraints, while it introduces a small overhead for processing each constraint, also greatly reduces the problem search space. This shows that the engine prunes incorrect sub-trees of the search tree using the new constraints. For example, Table 7 shows that the time for 400 servers split into 2 DCs (14757 ms) is nearly equal to the time for 200 servers in a single DC (14214 ms). These times show that the engine is effectively separating the problem in two, and that the two parts are then computed in parallel. The small time difference may be due to the overhead of parallel computing and the slightly increased time for the preparation of the problem. The result would be the same if the VMs were separated into two clusters in the same data centre, which is also a common practice. The addition of the overbooking constraint also reduces the time, for the same reasons.

5. CONCLUSION

In this paper we have presented an approach for energy-aware resource allocation in datacenters using constraint programming. We addressed the problem of extensibility and flexibility by decoupling constraints and algorithms. Using this feature and easy-to-extend XML schemas, we were able to implement 16 frequently used SLA parameters in the form of constraints. The results of the tests executed in a cloud environment have shown that the presented approach is capable of saving a significant amount of both energy and CO2 emissions in a real-world scenario: on average 18% within our test case. Furthermore, our scalability experiment showed that splitting the problem into several parts to enable parallel computation is very efficient in reducing the total computation time to find a solution. Indeed, we were able to find the first allocation solution for 2800 VMs on 700 servers split into two clusters in approximately 1 minute.

6. FUTURE WORK

Encouraged by the results of our tests we will continue our research in this area.
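The decomposition effect observed in the scalability experiment, where Fence constraints split the search into independent per-datacenter sub-problems solved in parallel, can be illustrated with a toy first-fit stand-in for the solver (an illustration only, not the Entropy engine):

```python
# Toy illustration of why Fence constraints decompose the search: a VM
# fenced to DC1 can never interact with servers in DC2, so each data
# centre's placement can be computed independently and in parallel.

from concurrent.futures import ThreadPoolExecutor

def solve_partition(part):
    """Stand-in for one solver run: round-robin the partition's VMs
    onto the partition's own servers."""
    return {vm: part["servers"][i % len(part["servers"])]
            for i, vm in enumerate(part["vms"])}

partitions = [
    {"dc": "DC1", "servers": ["s1", "s2"], "vms": ["vm1", "vm2", "vm3"]},
    {"dc": "DC2", "servers": ["s3"], "vms": ["vm4"]},
]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(solve_partition, partitions))

merged = {vm: srv for r in results for vm, srv in r.items()}
print(merged["vm3"])  # s1: wraps around inside DC1 only
print(merged["vm4"])  # s3: never leaves DC2
```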
One enhancement will address research in the area of SLAs. Even though current SLA metrics do not directly relate to energy saving or environmental metrics, they play a major role in energy saving strategies. As mentioned in [12], a key to lowering the energy consumption of data centres, without replacing hardware or infrastructural components, is to tweak SLAs in a way that guarantees the QoS needed by the customer but at the same time widens the data centre operator's range of flexibility to apply certain energy saving strategies. To apply this approach, the fixed structures of current SLAs either need to be enhanced with the possibility to express preferences in a fuzzy manner, or a dynamic, preferably autonomous, re-negotiation process has to be used, for instance by means of software agents. In the context of FIT4Green we neither replace a complete data centre management framework nor postulate agent-based SLA negotiation; therefore, the first approach is more appropriate. In the context of our framework this results in extending Entropy to use so-called soft constraints. In addition, Klingert et al. [12] mention the need for new green metrics. In its current state the Entropy library provides only a limited model of the data centre infrastructure and the VMs; therefore, we will additionally explore the needs of new green metrics from a technical perspective. Besides the consideration of GreenSLAs, we plan to investigate new heuristics and algorithms to improve, first, the efficiency of the optimizer and, second, the quality of the proposed solutions. We also plan to extend the concepts developed in this paper to other components involved in delivering an Internet service, such as the network.

ACKNOWLEDGMENTS

This research has been partly (Corentin Dupont, Giovanni Giuliani, Thomas Schulze, Andrey Somov) carried out within the European Project FIT4Green (FP7-ICT).
Details on the project, its goals and results can be found on the FIT4Green project website [23]. The authors would also like to thank Marco Di Girolamo (HP Italy Innovation Centre, Milan) for his valuable comments and fruitful discussions.

REFERENCES

[1] Berral, J. L., Goiri, I., Nou, R., Julia, F., Guitart, J., Gavalda, R., and Torres, J. Towards energy-aware scheduling in data centers using machine learning. In Proceedings of the 1st International Conference on Energy-Efficient Computing and Networking (Passau, Germany, April 13-15, 2010). e-Energy '10. ACM, New York, NY.
[2] The Green Grid Consortium.
[3] Banerjee, A., Mukherjee, T., Varsamopoulos, G., Gupta, S. K. S. Cooling-aware and thermal-aware workload placement for green HPC data centers. In Proceedings of the International Green Computing Conference (Chicago, IL, USA, August 15-18, 2010).
[4] Pakbaznia, E. and Pedram, M. Minimizing data center cooling and server power costs. In Proceedings of the 14th ACM/IEEE International Symposium on Low Power Electronics and Design (San Francisco, CA, USA, August 19-21). ISLPED '09. ACM, New York, NY.
[5] Meisner, D., Gold, B. T., and Wenisch, T. F. PowerNap: Eliminating server idle power. In Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (Washington, DC, USA, March 7-11, 2009). ASPLOS '09. ACM, New York, NY.
[6] Carroll, R., Balasubramaniam, S., Donnelly, W., and Botvich, D. Dynamic optimization solution for green service migration in data centres. In Proceedings of the IEEE International Conference on Communications (Kyoto, Japan, June 5-9, 2011). ICC '11, pp. 1-6.
[7] Garg, S. K., Yeo, C. S., Anandasivam, A., and Buyya, R. Environment-conscious scheduling of HPC applications on distributed cloud-oriented data centers. Journal of Parallel and Distributed Computing, 71 (2010).
[8] Barbagallo, D., Di Nitto, E., Dubois, D. J., and Mirandola, R. A bio-inspired algorithm for energy optimization in a self-organizing data center. In Proceedings of the First Workshop on Self-Organizing Architectures (Cambridge, UK). SOAR '09. Springer-Verlag, Berlin, Heidelberg.

[9] Paschke, A., Schnappinger-Gerull, E. A categorization scheme for SLA metrics. In Proceedings of Service Oriented Electronic Commerce, Vol. 80.
[10] Bistarelli, S., Santini, F. A nonmonotonic soft concurrent constraint language for SLA negotiation. In Proceedings of CILC '08.
[11] Buscemi, M., Montanari, U. CC-Pi: A constraint-based language for specifying service level agreements. In Proceedings of the 16th European Conference on Programming (Braga, Portugal). ESOP '07. Springer-Verlag, Berlin, Heidelberg.
[12] Klingert, S., Schulze, T., Bunse, C. GreenSLAs for the energy-efficient management of data centres. In Proceedings of the Second International Conference on Energy-Efficient Computing and Networking (New York, USA, May 31-June 1, 2011). e-Energy '11. ACM, New York, NY.
[13] Hermenier, F., Demassey, S., Lorca, X. Bin repacking scheduling in virtualized datacenters. In Proceedings of the 17th International Conference on Principles and Practice of Constraint Programming (Perugia, Italy). CP '11, Jimmy Lee (Ed.). Springer-Verlag, Berlin, Heidelberg.
[14] Chen, Y., Iyer, S., Liu, X., Milojicic, D., Sahai, A. SLA decomposition: Translating service level objectives to system level thresholds. In Proceedings of the Fourth International Conference on Autonomic Computing (Washington, DC, USA, 2007). ICAC '07. IEEE Computer Society.
[15] Quan, D.-M., Basmadjian, R., De Meer, H., Lent, R., Mahmoodi, T., Sannelli, D., Mezza, F., Dupont, C. Energy efficient resource allocation strategy for cloud data centres. In Proceedings of the 26th International Symposium on Computer and Information Sciences (London, UK, September 26-28, 2011). ISCIS '11. Springer.
[16] Lawler, E. Recent results in the theory of machine scheduling. In Mathematical Programming: The State of the Art. Springer-Verlag, Berlin, Germany.
[17] Bobroff, N., Kochut, A., and Beaty, K. Dynamic placement of virtual machines for managing SLA violations. In Integrated Network Management (IM), IFIP/IEEE International Symposium on, May 2007.
[18] Wood, T., Shenoy, P. J., Venkataramani, A., Yousif, M. S. Black-box and gray-box strategies for virtual machine migration. In Proceedings of the 4th ACM/USENIX Symposium on Networked Systems Design and Implementation (Cambridge, MA, USA). NSDI '07. USENIX Association, Berkeley, CA, USA.
[19] Verma, A., Ahuja, P., Neogi, A. Power-aware dynamic placement of HPC applications. In Proceedings of the 22nd Annual International Conference on Supercomputing (Island of Kos, Greece). ICS '08. ACM, New York, NY.
[20] Dhyani, K., Gualandi, S., Cremonesi, P. A constraint programming approach for the service consolidation problem. In Lecture Notes in Computer Science, vol. 6140 (2010). Springer.
[21] Anderson, E., Hall, J., Hartline, J., Hobbes, M., Karlin, A., Saia, J., Swaminathan, R., Wilkes, J. Algorithms for data migration. Algorithmica, 57(2).
[22] Fukunaga, A. Search spaces for min-perturbation repair. In Proceedings of the 15th International Conference on Principles and Practice of Constraint Programming (Lisbon, Portugal). CP '09. Springer-Verlag, Berlin, Heidelberg.
[23] The FIT4Green EU Project.
[24] Rossi, F., Van Beek, P., Walsh, T. Handbook of Constraint Programming. Elsevier Science Inc.
[25] Choco solver.


More information

行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告

行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告 行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告 畫 類 別 : 個 別 型 計 畫 半 導 體 產 業 大 型 廠 房 之 設 施 規 劃 計 畫 編 號 :NSC 96-2628-E-009-026-MY3 執 行 期 間 : 2007 年 8 月 1 日 至 2010 年 7 月 31 日 計 畫 主 持 人 : 巫 木 誠 共 同

More information

APPLICATION OF PROBE DATA COLLECTED VIA INFRARED BEACONS TO TRAFFIC MANEGEMENT

APPLICATION OF PROBE DATA COLLECTED VIA INFRARED BEACONS TO TRAFFIC MANEGEMENT APPLICATION OF PROBE DATA COLLECTED VIA INFRARED BEACONS TO TRAFFIC MANEGEMENT Toshhko Oda (1), Kochro Iwaoka (2) (1), (2) Infrastructure Systems Busness Unt, Panasonc System Networks Co., Ltd. Saedo-cho

More information

A Dynamic Load Balancing for Massive Multiplayer Online Game Server

A Dynamic Load Balancing for Massive Multiplayer Online Game Server A Dynamc Load Balancng for Massve Multplayer Onlne Game Server Jungyoul Lm, Jaeyong Chung, Jnryong Km and Kwanghyun Shm Dgtal Content Research Dvson Electroncs and Telecommuncatons Research Insttute Daejeon,

More information

Genetic Algorithm Based Optimization Model for Reliable Data Storage in Cloud Environment

Genetic Algorithm Based Optimization Model for Reliable Data Storage in Cloud Environment Advanced Scence and Technology Letters, pp.74-79 http://dx.do.org/10.14257/astl.2014.50.12 Genetc Algorthm Based Optmzaton Model for Relable Data Storage n Cloud Envronment Feng Lu 1,2,3, Hatao Wu 1,3,

More information

Load Balancing By Max-Min Algorithm in Private Cloud Environment

Load Balancing By Max-Min Algorithm in Private Cloud Environment Internatonal Journal of Scence and Research (IJSR ISSN (Onlne: 2319-7064 Index Coperncus Value (2013: 6.14 Impact Factor (2013: 4.438 Load Balancng By Max-Mn Algorthm n Prvate Cloud Envronment S M S Suntharam

More information

Institute of Informatics, Faculty of Business and Management, Brno University of Technology,Czech Republic

Institute of Informatics, Faculty of Business and Management, Brno University of Technology,Czech Republic Lagrange Multplers as Quanttatve Indcators n Economcs Ivan Mezník Insttute of Informatcs, Faculty of Busness and Management, Brno Unversty of TechnologCzech Republc Abstract The quanttatve role of Lagrange

More information

LITERATURE REVIEW: VARIOUS PRIORITY BASED TASK SCHEDULING ALGORITHMS IN CLOUD COMPUTING

LITERATURE REVIEW: VARIOUS PRIORITY BASED TASK SCHEDULING ALGORITHMS IN CLOUD COMPUTING LITERATURE REVIEW: VARIOUS PRIORITY BASED TASK SCHEDULING ALGORITHMS IN CLOUD COMPUTING 1 MS. POOJA.P.VASANI, 2 MR. NISHANT.S. SANGHANI 1 M.Tech. [Software Systems] Student, Patel College of Scence and

More information

Heuristic Static Load-Balancing Algorithm Applied to CESM

Heuristic Static Load-Balancing Algorithm Applied to CESM Heurstc Statc Load-Balancng Algorthm Appled to CESM 1 Yur Alexeev, 1 Sher Mckelson, 1 Sven Leyffer, 1 Robert Jacob, 2 Anthony Crag 1 Argonne Natonal Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439,

More information

J. Parallel Distrib. Comput.

J. Parallel Distrib. Comput. J. Parallel Dstrb. Comput. 71 (2011) 62 76 Contents lsts avalable at ScenceDrect J. Parallel Dstrb. Comput. journal homepage: www.elsever.com/locate/jpdc Optmzng server placement n dstrbuted systems n

More information

THE DISTRIBUTION OF LOAN PORTFOLIO VALUE * Oldrich Alfons Vasicek

THE DISTRIBUTION OF LOAN PORTFOLIO VALUE * Oldrich Alfons Vasicek HE DISRIBUION OF LOAN PORFOLIO VALUE * Oldrch Alfons Vascek he amount of captal necessary to support a portfolo of debt securtes depends on the probablty dstrbuton of the portfolo loss. Consder a portfolo

More information

Activity Scheduling for Cost-Time Investment Optimization in Project Management

Activity Scheduling for Cost-Time Investment Optimization in Project Management PROJECT MANAGEMENT 4 th Internatonal Conference on Industral Engneerng and Industral Management XIV Congreso de Ingenería de Organzacón Donosta- San Sebastán, September 8 th -10 th 010 Actvty Schedulng

More information

Enabling P2P One-view Multi-party Video Conferencing

Enabling P2P One-view Multi-party Video Conferencing Enablng P2P One-vew Mult-party Vdeo Conferencng Yongxang Zhao, Yong Lu, Changja Chen, and JanYn Zhang Abstract Mult-Party Vdeo Conferencng (MPVC) facltates realtme group nteracton between users. Whle P2P

More information

A hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm

A hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm Avalable onlne www.ocpr.com Journal of Chemcal and Pharmaceutcal Research, 2014, 6(7):1884-1889 Research Artcle ISSN : 0975-7384 CODEN(USA) : JCPRC5 A hybrd global optmzaton algorthm based on parallel

More information

Period and Deadline Selection for Schedulability in Real-Time Systems

Period and Deadline Selection for Schedulability in Real-Time Systems Perod and Deadlne Selecton for Schedulablty n Real-Tme Systems Thdapat Chantem, Xaofeng Wang, M.D. Lemmon, and X. Sharon Hu Department of Computer Scence and Engneerng, Department of Electrcal Engneerng

More information

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) 2127472, Fax: (370-5) 276 1380, Email: info@teltonika.

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) 2127472, Fax: (370-5) 276 1380, Email: info@teltonika. VRT012 User s gude V0.1 Thank you for purchasng our product. We hope ths user-frendly devce wll be helpful n realsng your deas and brngng comfort to your lfe. Please take few mnutes to read ths manual

More information

A Design Method of High-availability and Low-optical-loss Optical Aggregation Network Architecture

A Design Method of High-availability and Low-optical-loss Optical Aggregation Network Architecture A Desgn Method of Hgh-avalablty and Low-optcal-loss Optcal Aggregaton Network Archtecture Takehro Sato, Kuntaka Ashzawa, Kazumasa Tokuhash, Dasuke Ish, Satoru Okamoto and Naoak Yamanaka Dept. of Informaton

More information

Cost-based Scheduling of Scientific Workflow Applications on Utility Grids

Cost-based Scheduling of Scientific Workflow Applications on Utility Grids Cost-based Schedulng of Scentfc Workflow Applcatons on Utlty Grds Ja Yu, Rakumar Buyya and Chen Khong Tham Grd Computng and Dstrbuted Systems Laboratory Dept. of Computer Scence and Software Engneerng

More information

Rate Monotonic (RM) Disadvantages of cyclic. TDDB47 Real Time Systems. Lecture 2: RM & EDF. Priority-based scheduling. States of a process

Rate Monotonic (RM) Disadvantages of cyclic. TDDB47 Real Time Systems. Lecture 2: RM & EDF. Priority-based scheduling. States of a process Dsadvantages of cyclc TDDB47 Real Tme Systems Manual scheduler constructon Cannot deal wth any runtme changes What happens f we add a task to the set? Real-Tme Systems Laboratory Department of Computer

More information

Luby s Alg. for Maximal Independent Sets using Pairwise Independence

Luby s Alg. for Maximal Independent Sets using Pairwise Independence Lecture Notes for Randomzed Algorthms Luby s Alg. for Maxmal Independent Sets usng Parwse Independence Last Updated by Erc Vgoda on February, 006 8. Maxmal Independent Sets For a graph G = (V, E), an ndependent

More information

How To Solve An Onlne Control Polcy On A Vrtualzed Data Center

How To Solve An Onlne Control Polcy On A Vrtualzed Data Center Dynamc Resource Allocaton and Power Management n Vrtualzed Data Centers Rahul Urgaonkar, Ulas C. Kozat, Ken Igarash, Mchael J. Neely urgaonka@usc.edu, {kozat, garash}@docomolabs-usa.com, mjneely@usc.edu

More information

A New Task Scheduling Algorithm Based on Improved Genetic Algorithm

A New Task Scheduling Algorithm Based on Improved Genetic Algorithm A New Task Schedulng Algorthm Based on Improved Genetc Algorthm n Cloud Computng Envronment Congcong Xong, Long Feng, Lxan Chen A New Task Schedulng Algorthm Based on Improved Genetc Algorthm n Cloud Computng

More information

Hollinger Canadian Publishing Holdings Co. ( HCPH ) proceeding under the Companies Creditors Arrangement Act ( CCAA )

Hollinger Canadian Publishing Holdings Co. ( HCPH ) proceeding under the Companies Creditors Arrangement Act ( CCAA ) February 17, 2011 Andrew J. Hatnay ahatnay@kmlaw.ca Dear Sr/Madam: Re: Re: Hollnger Canadan Publshng Holdngs Co. ( HCPH ) proceedng under the Companes Credtors Arrangement Act ( CCAA ) Update on CCAA Proceedngs

More information

Complex Service Provisioning in Collaborative Cloud Markets

Complex Service Provisioning in Collaborative Cloud Markets Melane Sebenhaar, Ulrch Lampe, Tm Lehrg, Sebastan Zöller, Stefan Schulte, Ralf Stenmetz: Complex Servce Provsonng n Collaboratve Cloud Markets. In: W. Abramowcz et al. (Eds.): Proceedngs of the 4th European

More information

VoIP Playout Buffer Adjustment using Adaptive Estimation of Network Delays

VoIP Playout Buffer Adjustment using Adaptive Estimation of Network Delays VoIP Playout Buffer Adjustment usng Adaptve Estmaton of Network Delays Mroslaw Narbutt and Lam Murphy* Department of Computer Scence Unversty College Dubln, Belfeld, Dubln, IRELAND Abstract The poor qualty

More information

Robust Design of Public Storage Warehouses. Yeming (Yale) Gong EMLYON Business School

Robust Design of Public Storage Warehouses. Yeming (Yale) Gong EMLYON Business School Robust Desgn of Publc Storage Warehouses Yemng (Yale) Gong EMLYON Busness School Rene de Koster Rotterdam school of management, Erasmus Unversty Abstract We apply robust optmzaton and revenue management

More information

Efficient QoS Aggregation in Service Value Networks

Efficient QoS Aggregation in Service Value Networks 22 45th Hawa Internatonal Conference on System Scences Effcent QoS Aggregaton n Servce Value etworks Steffen Haak Research Center for Informaton Technology (FZI) haak@fz.de Benjamn Blau SAP AG benjamn.blau@sap.com

More information

Multi-Resource Fair Allocation in Heterogeneous Cloud Computing Systems

Multi-Resource Fair Allocation in Heterogeneous Cloud Computing Systems 1 Mult-Resource Far Allocaton n Heterogeneous Cloud Computng Systems We Wang, Student Member, IEEE, Ben Lang, Senor Member, IEEE, Baochun L, Senor Member, IEEE Abstract We study the mult-resource allocaton

More information

Overview of monitoring and evaluation

Overview of monitoring and evaluation 540 Toolkt to Combat Traffckng n Persons Tool 10.1 Overvew of montorng and evaluaton Overvew Ths tool brefly descrbes both montorng and evaluaton, and the dstncton between the two. What s montorng? Montorng

More information

"Research Note" APPLICATION OF CHARGE SIMULATION METHOD TO ELECTRIC FIELD CALCULATION IN THE POWER CABLES *

Research Note APPLICATION OF CHARGE SIMULATION METHOD TO ELECTRIC FIELD CALCULATION IN THE POWER CABLES * Iranan Journal of Scence & Technology, Transacton B, Engneerng, ol. 30, No. B6, 789-794 rnted n The Islamc Republc of Iran, 006 Shraz Unversty "Research Note" ALICATION OF CHARGE SIMULATION METHOD TO ELECTRIC

More information

Fair Virtual Bandwidth Allocation Model in Virtual Data Centers

Fair Virtual Bandwidth Allocation Model in Virtual Data Centers Far Vrtual Bandwdth Allocaton Model n Vrtual Data Centers Yng Yuan, Cu-rong Wang, Cong Wang School of Informaton Scence and Engneerng ortheastern Unversty Shenyang, Chna School of Computer and Communcaton

More information

To manage leave, meeting institutional requirements and treating individual staff members fairly and consistently.

To manage leave, meeting institutional requirements and treating individual staff members fairly and consistently. Corporate Polces & Procedures Human Resources - Document CPP216 Leave Management Frst Produced: Current Verson: Past Revsons: Revew Cycle: Apples From: 09/09/09 26/10/12 09/09/09 3 years Immedately Authorsaton:

More information

Availability-Based Path Selection and Network Vulnerability Assessment

Availability-Based Path Selection and Network Vulnerability Assessment Avalablty-Based Path Selecton and Network Vulnerablty Assessment Song Yang, Stojan Trajanovsk and Fernando A. Kupers Delft Unversty of Technology, The Netherlands {S.Yang, S.Trajanovsk, F.A.Kupers}@tudelft.nl

More information

Energy Efficient Routing in Ad Hoc Disaster Recovery Networks

Energy Efficient Routing in Ad Hoc Disaster Recovery Networks Energy Effcent Routng n Ad Hoc Dsaster Recovery Networks Gl Zussman and Adran Segall Department of Electrcal Engneerng Technon Israel Insttute of Technology Hafa 32000, Israel {glz@tx, segall@ee}.technon.ac.l

More information

A Simple Approach to Clustering in Excel

A Simple Approach to Clustering in Excel A Smple Approach to Clusterng n Excel Aravnd H Center for Computatonal Engneerng and Networng Amrta Vshwa Vdyapeetham, Combatore, Inda C Rajgopal Center for Computatonal Engneerng and Networng Amrta Vshwa

More information

Vision Mouse. Saurabh Sarkar a* University of Cincinnati, Cincinnati, USA ABSTRACT 1. INTRODUCTION

Vision Mouse. Saurabh Sarkar a* University of Cincinnati, Cincinnati, USA ABSTRACT 1. INTRODUCTION Vson Mouse Saurabh Sarkar a* a Unversty of Cncnnat, Cncnnat, USA ABSTRACT The report dscusses a vson based approach towards trackng of eyes and fngers. The report descrbes the process of locatng the possble

More information

Resource Scheduling in Desktop Grid by Grid-JQA

Resource Scheduling in Desktop Grid by Grid-JQA The 3rd Internatonal Conference on Grd and Pervasve Computng - Worshops esource Schedulng n Destop Grd by Grd-JQA L. Mohammad Khanl M. Analou Assstant professor Assstant professor C.S. Dept.Tabrz Unversty

More information

The Load Balancing of Database Allocation in the Cloud

The Load Balancing of Database Allocation in the Cloud , March 3-5, 23, Hong Kong The Load Balancng of Database Allocaton n the Cloud Yu-lung Lo and Mn-Shan La Abstract Each database host n the cloud platform often has to servce more than one database applcaton

More information

Support Vector Machines

Support Vector Machines Support Vector Machnes Max Wellng Department of Computer Scence Unversty of Toronto 10 Kng s College Road Toronto, M5S 3G5 Canada wellng@cs.toronto.edu Abstract Ths s a note to explan support vector machnes.

More information

Self-Adaptive SLA-Driven Capacity Management for Internet Services

Self-Adaptive SLA-Driven Capacity Management for Internet Services Self-Adaptve SLA-Drven Capacty Management for Internet Servces Bruno Abrahao, Vrglo Almeda and Jussara Almeda Computer Scence Department Federal Unversty of Mnas Geras, Brazl Alex Zhang, Drk Beyer and

More information

On the Interaction between Load Balancing and Speed Scaling

On the Interaction between Load Balancing and Speed Scaling On the Interacton between Load Balancng and Speed Scalng Ljun Chen and Na L Abstract Speed scalng has been wdely adopted n computer and communcaton systems, n partcular, to reduce energy consumpton. An

More information

Virtual Network Embedding with Coordinated Node and Link Mapping

Virtual Network Embedding with Coordinated Node and Link Mapping Vrtual Network Embeddng wth Coordnated Node and Lnk Mappng N. M. Mosharaf Kabr Chowdhury Cherton School of Computer Scence Unversty of Waterloo Waterloo, Canada Emal: nmmkchow@uwaterloo.ca Muntasr Rahan

More information

On the Interaction between Load Balancing and Speed Scaling

On the Interaction between Load Balancing and Speed Scaling On the Interacton between Load Balancng and Speed Scalng Ljun Chen, Na L and Steven H. Low Engneerng & Appled Scence Dvson, Calforna Insttute of Technology, USA Abstract Speed scalng has been wdely adopted

More information

Cost Minimization using Renewable Cooling and Thermal Energy Storage in CDNs

Cost Minimization using Renewable Cooling and Thermal Energy Storage in CDNs Cost Mnmzaton usng Renewable Coolng and Thermal Energy Storage n CDNs Stephen Lee College of Informaton and Computer Scences UMass, Amherst stephenlee@cs.umass.edu Rahul Urgaonkar IBM Research rurgaon@us.bm.com

More information