J. Parallel Distrib. Comput.: Environment-conscious scheduling of HPC applications on distributed Cloud-oriented data centers


J. Parallel Distrib. Comput. 71 (2011)

Environment-conscious scheduling of HPC applications on distributed Cloud-oriented data centers

Saurabh Kumar Garg (a,*), Chee Shin Yeo (b), Arun Anandasivam (c), Rajkumar Buyya (a)

(a) Cloud Computing and Distributed Systems Laboratory, Department of Computer Science and Software Engineering, The University of Melbourne, Australia
(b) Advanced Computing Programme, Institute of High Performance Computing, Singapore
(c) Institute of Information Systems and Management, Karlsruhe Institute of Technology, Germany

* Corresponding author. E-mail addresses: sgarg@csse.unmelb.edu.au (S.K. Garg), yeocs@ihpc.a-star.edu.sg (C.S. Yeo), anandasivam@kit.edu (A. Anandasivam), raj@csse.unmelb.edu.au (R. Buyya).

Article history: Received 3 September 2009; received in revised form 13 April 2010; accepted 30 April 2010; available online 11 May 2010.

Keywords: Cloud computing; High Performance Computing (HPC); Energy-efficient scheduling; Dynamic Voltage Scaling (DVS); Green IT

Abstract: The use of High Performance Computing (HPC) in commercial and consumer IT applications is becoming popular. HPC users need the ability to gain rapid and scalable access to high-end computing capabilities. Cloud computing promises to deliver such a computing infrastructure using data centers, so that HPC users can access applications and data from a Cloud anywhere in the world on demand and pay based on what they use. However, the growing demand drastically increases the energy consumption of data centers, which has become a critical issue. High energy consumption not only translates to high energy cost, which reduces the profit margin of Cloud providers, but also to high carbon emissions, which are not environmentally sustainable. Hence, there is an urgent need for energy-efficient solutions that address the steep rise in energy consumption from the perspective of not only the Cloud provider, but also the environment. To address this issue, we propose near-optimal scheduling policies that exploit the heterogeneity across multiple data centers for a Cloud provider. We consider a number of energy efficiency factors (such as energy cost, carbon emission rate, workload, and CPU power efficiency) which change across different data centers depending on their location, architectural design, and management system. Our carbon/energy based scheduling policies achieve on average up to 25% energy savings in comparison to profit based scheduling policies, leading to higher profit and lower carbon emissions. © 2010 Elsevier Inc. All rights reserved.

1. Introduction

During the last few years, the use of High Performance Computing (HPC) infrastructure to run business and consumer based IT applications has increased rapidly. This is evident from the recent Top500 supercomputer list, where many supercomputers are now used for industrial HPC applications, including 9.2% for finance and 6.2% for logistic services [58]. Thus, it is desirable for IT industries to have access to a flexible HPC infrastructure which is available on demand with minimum investment. Cloud computing [10] promises to deliver such reliable services through next-generation data centers built on virtualized compute and storage technologies. Users are able to access applications and data from a Cloud anywhere in the world on demand and pay based on what they use. Hence, Cloud computing is a highly scalable and cost-effective infrastructure for running HPC applications, which require ever-increasing computational resources. However, Clouds are essentially data centers that require high energy(1) usage to maintain operation [5].
Today, a typical data center with 1000 racks needs 10 MW of power to operate [50]. High energy usage is undesirable since it results in high energy cost. For a data center, the energy cost is a significant component of its operating and up-front costs [50]. Therefore, Cloud providers want to increase their profit, or Return on Investment (ROI), by reducing their energy cost. Many Cloud providers are thus building different data centers and deploying them in many geographical locations, not only to expose their Cloud services to business and consumer applications (e.g. Amazon [1]), but also to reduce energy cost (e.g. Google [40]). In April 2007, Gartner estimated that the Information and Communication Technologies (ICT) industry generates about 2% of total global CO2(2) emissions, which is equal to the aviation industry [30].

(1) "Energy" and "electricity" are used interchangeably.
(2) "CO2" and "carbon" are used interchangeably.

Fig. 1. Computer power consumption index. Source: [32].

As governments impose carbon emission limits on the ICT industry, as in the automobile industry [18,21], Cloud providers must reduce energy usage to meet the permissible restrictions [15]. Thus, Cloud providers must ensure that data centers are utilized in a carbon-efficient manner to meet the scaling demand. Otherwise, building more data centers without any carbon consideration is not viable, since it is not environmentally sustainable and will ultimately violate the imposed carbon emission limits. This will in turn affect the future widespread adoption of Cloud computing, especially for the HPC community, which demands scalable infrastructure to be delivered by Cloud providers. Companies like Alpiron [2] already offer software for cost-efficient server management and promise to reduce energy cost by analyzing, via advanced algorithms, which servers to shut down or turn on at runtime. Motivated by this practice, this paper enhances the idea of cost-effective management by taking both economic (profit) and environmental (carbon emission) sustainability into account. In particular, we aim to examine how a Cloud provider can achieve optimal energy sustainability when running HPC workloads across its entire Cloud infrastructure by harnessing the heterogeneity of multiple data centers geographically distributed in different locations worldwide.

The analysis of previous work shows that little investigation has been done into achieving both economic and environmental sustainability of energy efficiency on a global scale, as in Cloud computing. First, previous work has generally studied how to reduce energy usage from the perspective of reducing cost, but not how to improve profit while also reducing carbon emissions, which significantly impact Cloud providers [25]. Second, most previous work has focused on achieving energy efficiency at a single data center location, not across multiple data center locations. However, Cloud providers such as Amazon EC2 [1] typically have multiple data centers distributed worldwide. As shown in Fig. 1, the energy efficiency of an individual data center in a given location changes dynamically over time depending on a number of factors, such as energy cost, carbon emission rate, workload, CPU power efficiency, cooling system, and environmental temperature. Thus, these different contributing factors can be used to exploit the heterogeneity across multiple data centers for improving the overall energy efficiency of the Cloud provider. Third, previous work has mainly proposed energy saving policies that are application-specific [26,28], processor-specific [52,17], and/or server-specific [60,38]. But these policies are only applicable or most effective for the specific models that they are specially designed for. Hence, we propose simple yet effective generic energy-efficient scheduling policies that extend to any application, processor, and server model, so that they can be readily deployed in existing data centers with minimum changes. Our generic scheduling policies within a data center can also easily complement any application-specific, processor-specific, and/or server-specific energy saving policies that are already in place within existing data centers or servers.
Hence, the key contributions of this paper are: (1) a novel mathematical model for energy efficiency based on various contributing factors such as energy cost, carbon emission rate, HPC workload, and CPU power efficiency; (2) near-optimal energy-efficient scheduling policies which not only minimize the carbon emission and maximize the profit of the Cloud provider, but can also be readily implemented without major infrastructure changes such as the relocation of existing data centers; (3) energy efficiency analysis of our proposed policies (in terms of carbon emissions and profit) through extensive simulations using real HPC workload traces, and real data center carbon emission rates and energy costs, to demonstrate the importance of considering the various contributing factors; (4) analysis of lower/upper bounds of the optimization problem; and (5) exploiting local minima in Dynamic Voltage Scaling (DVS) to further reduce the energy consumption of HPC applications within a data center.

This paper is organized as follows. Section 2 discusses related work. Section 3 defines the Cloud computing scenario and the problem description. In Section 4, different policies for allocating applications to data centers efficiently are described. Section 5 explains the evaluation methodology and simulation setup, followed by the analysis of the performance results in Section 6. Section 7 presents the conclusion and future work.

2. Related work

Table 1 gives an overview of previous work which addresses any of the five aspects considered by this paper. To the best of our knowledge, apart from our work, there is no previous work which collectively addresses all five aspects. Most previous work addresses energy-efficient computing for servers [5], but mostly focuses on reducing energy consumption in data centers for web workloads [60,12]. Thus, they assume that energy is an increasing function of CPU frequency, since web workloads have the same execution time per request. However, HPC workloads have different execution times depending on specific application requirements. Hence, the energy versus CPU-frequency relationship of an HPC workload is significantly different from that of a web workload, as discussed in Section 4.2. Therefore, in this paper, we define a generalized power model and adopt a more general strategy to scale the CPU frequency up or down.

Some previous work examines how energy can be saved when executing HPC applications. Bradley et al. [6] proposed algorithms to minimize power usage by using workload history and predicting future workload within acceptable reliability. Lawson and Smirni [39] proposed an energy saving scheme that dynamically adjusts the number of CPUs in a cluster operating in sleep mode when utilization is low. Tesauro et al. [57] presented an application of batch reinforcement learning combined with nonlinear function approximation to optimize multiple aspects of data center behavior such as performance, power, and availability. But these solutions target energy savings within a single server or a single data center (with many servers) in a single location. Since our generic scheduling policy improves the energy efficiency across data centers in multiple locations with different carbon emission rates, it can be used in conjunction with these solutions to utilize any energy efficiency already implemented in a single location.

Table 1
Comparison of related work across five aspects: (1) CO2 emission/energy consumption; (2) HPC workload characteristic; (3) multiple data centers; (4) energy cost aware scheduling; (5) market-oriented schedulers.

  Our work:               (1), (2), (3), (4), (5)
  Bradley et al. [6]:     (1), (2)
  Lawson and Smirni [39]: (1), (2)
  Tesauro et al. [57]:    (1), (2)
  Orgerie et al. [45]:    (1), (2), (3)
  Patel et al. [47]:      (3)
  Chase et al. [11]:      (4), (5)
  Burge et al. [9]:       (1), (4), (5)

There are some studies on energy efficiency in Grids, which comprise resource sites in multiple locations, similar to our scope. Orgerie et al. [45] proposed a prediction algorithm to reduce power consumption in large-scale computational grids such as Grid5000 by aggregating the workload and turning off unused CPUs; hence, they do not consider using DVS to save CPU power. Patel et al. [47] proposed allocating Grid workload on a global scale based on the energy efficiency of different data centers. But their focus is on reducing temperature, and they thus do not examine how energy consumption can be reduced by exploiting the different power efficiencies of CPUs, energy costs, and carbon emission rates across data centers. In addition, they do not focus on any particular workload characteristics, whereas we focus on HPC workload.

Not much previous work studies the energy sustainability issue from an economic cost perspective. To address energy usage, Chase et al. [11] adopted an economic approach to manage shared server resources, in which services bid for resources as a function of delivered performance. Burge et al. [9] scheduled tasks to heterogeneous machines and made admission decisions based on the energy costs of each machine to maximize the profit in a single data center. But neither of them studies the critical relationship between carbon emissions (environmental sustainability) and profit (economic sustainability) for the energy sustainability issue, and how they can affect each other. On the other hand, we examine how carbon emissions can be reduced when executing HPC applications with negligible effect on the profit of the Cloud provider.

3. Meta-scheduling model

3.1. System model

Our system model is based on the Cloud computing environment, whereby Cloud users are able to tap the computational power offered by Cloud providers to execute their HPC applications. The Cloud meta-scheduler acts as an interface to the Cloud infrastructure and schedules applications on behalf of users, as shown in Fig. 2. It interprets and analyzes the service requirements of a submitted application and decides whether to accept or reject the application based on the availability of CPUs. Its objective is to schedule applications such that the carbon emissions can be reduced and the profit can be increased for the Cloud provider, while the Quality of Service (QoS) requirements of the applications are met. As data centers are located in different geographical regions, they have different carbon emission rates and energy costs depending on regional constraints. Each data center is responsible for keeping the meta-scheduler updated with this information for energy-efficient scheduling. The two participating parties, Cloud users and Cloud providers, are discussed below with their objectives and constraints:

Fig. 2. Cloud meta-scheduling protocol.

(1) Cloud users: Cloud users need to run HPC applications/workloads which are compute-intensive with low data transfer requirements, and which thus require parallel and distributed processing to significantly reduce their execution time. The users submit parallel non-moldable applications with their QoS and processing requirements to the Cloud meta-scheduler. Each application must be executed within an individual data center and does not have preemptive priority.
The reason for this requirement is that the synchronization among the various tasks of a parallel application can be affected by communication delays when the application is executed across multiple data centers. Furthermore, since the main aim of this paper is to design high-level application-independent meta-scheduling policies, we do not consider the fine-grained details of HPC workload (such as the impact of communication and synchronization, and their overlapping with computation), which are more applicable at the local scheduler level. The objective of the user is to have his application completed by the specified deadline. Deadlines are hard, i.e. the user will benefit from the HPC resources only if the application completes before its deadline [49]. To facilitate the comparison of the various policies described in this work, the estimated execution time of an application is assumed to be known by the user at the time of submission [24]. This can be derived based on user-supplied information, experimental data, application profiling or benchmarking, and other techniques. Existing performance prediction techniques (based on analytical modeling [44], empirical data [4] and historical data [55,37,53]) can also be applied to estimate the execution time of parallel applications. However, in reality, it is not always possible to estimate the execution time of an application accurately.

Fig. 3. Free time slots.

But, in Cloud computing, where users pay based on actual resource usage, a user will have to pay more than expected if the execution time of his application has been under-estimated. Thus, a user must still be given the privilege of accepting or changing any automatically derived estimate before submission.

(2) Cloud providers: A Cloud provider has multiple data centers distributed across the world. For example, Amazon [1] has data centers in many cities across Asia, Europe, and the United States. Each data center has a local scheduler that manages the execution of incoming applications. The Cloud meta-scheduler interacts with these local schedulers for application execution. Each local scheduler periodically supplies information about available time slots (t_s, t_e, n) to the meta-scheduler, where t_s and t_e are the start time and end time of the slot respectively, and n is the number of CPUs available for the slot. Within a data center, the free time slots are obtained based on the approach used by Singh et al. [54]. The CPU availability at particular times in the future is maintained by the local scheduler. The free time slot information is generated up to a given time horizon, thus creating windows of availability, or free time slots; the end time of a free time slot is either the end time of a job in the waiting queue or the planning horizon. The time horizon is set to infinity, so all the free time slot information is disclosed to the meta-scheduler. An example is given in Fig. 3, where S1, S2, ..., S5 are free time slots. The Cloud provider also supplies the execution price and the I/O data transfer cost to the meta-scheduler. To facilitate energy-efficient computing, each local scheduler also supplies information about the carbon emission rate, Coefficient of Performance (COP), electricity price, CPU power-frequency relationship, Million Instructions Per Second (MIPS) rating of CPUs at the maximum frequency, and CPU operating frequency range of the data center. The MIPS rating is used to indicate the overall performance of a CPU. All CPUs within a data center are homogeneous, but CPUs can be heterogeneous across data centers. The carbon emission rates are calculated based on the fuel type used in electric power generation. These are published regularly by various government agencies such as the US Energy Information Administration (EIA). The COP of a data center's cooling system is defined as the amount of cooling delivered per unit of electrical power consumed. COP can be measured by monitoring the energy consumption of the various components of the cooling system [46]. The various CPU parameters of a data center can be derived experimentally [29].
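For concreteness, the slot bookkeeping described above can be sketched as follows. This is our illustration, not the authors' implementation; the FreeSlot record and find_slot helper are invented names, and a real local scheduler would also shrink or split slots as reservations are made.

```python
from dataclasses import dataclass

@dataclass
class FreeSlot:
    t_start: float  # slot start time t_s (seconds)
    t_end: float    # slot end time t_e (seconds)
    n_cpus: int     # CPUs available for the whole slot

def find_slot(slots, n_req, runtime, deadline):
    """Return the first advertised slot that can run n_req CPUs for
    `runtime` seconds and still finish before `deadline`, else None."""
    for slot in sorted(slots, key=lambda sl: sl.t_start):
        finish = slot.t_start + runtime
        if slot.n_cpus >= n_req and finish <= min(slot.t_end, deadline):
            return slot
    return None

# Example: two slots advertised by one local scheduler.
slots = [FreeSlot(0, 3600, 8), FreeSlot(1800, 7200, 16)]
print(find_slot(slots, n_req=12, runtime=2000, deadline=7000))
```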
3.2. Data center energy model

The major contributors to the total energy usage in a data center are the IT equipment (which consists of servers, storage devices, and network equipment) and the cooling systems [19]. Other systems, such as lighting, are not considered due to their negligible contribution to the total energy usage. Within a data center, the total energy usage of a server depends on its CPUs, memory, disks, fans, and other components [22]. However, for simplicity, we only compute the energy usage of a server based on its CPUs, for two reasons. First, as pointed out by Fan et al. [22], the energy usage of a server varies depending on the type of workload executed. Since we only consider compute-intensive HPC applications, and the CPUs use the largest proportion of energy in a server, it is sufficient in our case to model only the CPU energy usage. Second, the aim of this paper is to examine how a meta-scheduler can achieve energy efficiency on a global scale by exploiting the heterogeneity across multiple data centers. Hence we do not focus in detail on how to save energy locally within a data center, where energy can potentially also be saved through other components of the server, such as memory and disks. But our proposed meta-scheduling policies can easily complement any other solutions that focus on saving energy within a data center in this regard.

The power consumption of a CPU can be reduced by lowering its supply voltage using DVS. DVS is an efficient way to manage dynamic power dissipation during computation. The power consumption model of CPUs, which are generally composed of CMOS circuits, is given by $P = \alpha V^2 f + I_{leak} V + P_{short}$, where $P$ is the power dissipation, $V$ is the supply voltage, $f$ is the clock frequency, $I_{leak}$ is the leakage current, and $P_{short}$ is the short-circuit power dissipated during the voltage switching process [8,48]. The first term constitutes the dynamic power of the CPU and the second term constitutes the static power. $P_{short}$ is generally negligible in comparison to the other terms. Since the voltage can be expressed as a linear function of frequency in CMOS logic, the power consumption $P$ of a CPU in a data center is approximated by the following function (similar to previous work [60,12]): $P = \beta + \alpha f^3$, where $\beta$ is the static power consumed by the CPU, $\alpha$ is the proportionality constant, and $f$ is the frequency at which the CPU is operating. We use this cubic relationship between operating frequency and power consumption since this paper focuses on compute-intensive workload and, to the best of our knowledge, the cubic relationship is the most commonly used model for CPU power in previous work [60,12]. We also consider that a CPU of a data center can adjust its frequency discretely, from a minimum of f_min to a maximum of f_max. The frequency levels supported by a CPU typically vary across CPU architectures. For instance, the Intel Pentium M 1.6 GHz CPU supports six discrete voltage levels.

The energy cost of the cooling system depends on its COP [42,56]. COP is an indication of the efficiency of the cooling system, defined as the ratio of the amount of energy consumed by the CPUs to the energy consumed by the cooling system. However, COP is not constant and varies with the cooling air temperature. We assume that COP remains constant during a scheduling cycle and that data centers update the meta-scheduler whenever their COP changes. Thus, the total energy consumed by the cooling system in a data center is given by:

$E_h = \frac{E_c}{COP}$    (1)

where $E_c$ is the total energy consumed by the CPUs and $E_h$ is the total energy consumed by the cooling devices. The total energy consumed by the data center can then be approximated by:

$E_{total} = E_c + E_h = \frac{COP + 1}{COP} E_c$.

Therefore, the data center efficiency (DCE) [59] is given as $DCE = \frac{COP}{COP + 1}$.
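As a quick numerical illustration of this power model and the COP scaling of Eq. (1), consider the following sketch (our code; the parameter values are made up, not taken from the paper):

```python
def cpu_power(beta, alpha, f):
    """CPU power draw (W) at frequency f (GHz): P = beta + alpha * f**3."""
    return beta + alpha * f ** 3

def total_energy(beta, alpha, f, n_cpus, hours, cop):
    """Total data center energy (Wh) for n_cpus running for `hours` at f,
    including cooling via Eq. (1): E_total = ((COP + 1) / COP) * E_c."""
    e_c = cpu_power(beta, alpha, f) * n_cpus * hours
    return (cop + 1.0) / cop * e_c

# Made-up parameters: beta = 65 W static, alpha = 7.5 W/GHz^3, COP = 2.0.
print(total_energy(beta=65, alpha=7.5, f=1.8, n_cpus=64, hours=2, cop=2.0))
```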

Table 2
Parameters of a data center i.
  Carbon emission rate (kg/kWh): r_i^CO2
  Average COP: COP_i
  Electricity price ($/kWh): p_i^e
  Data transfer price ($/GB) for upload/download: p_i^DT
  CPU power: P_i = beta_i + alpha_i * f^3
  CPU frequency range: [f_i^min, f_i^max]
  Time slots (start time, end time, number of CPUs): (t_s, t_e, n)

3.3. Relation between execution time and CPU frequency

Since we use DVS to scale the CPU frequency up or down, the execution time of an application can vary significantly with the CPU frequency. However, the decrease in execution time due to an increase in CPU frequency depends on whether the application is CPU bound or not. For example, if the performance of an application depends entirely on the CPU frequency, then its execution time is inversely proportional to the change in CPU frequency. Thus, the execution time of an application is modeled according to the definition proposed by Hsu et al. [34]:

$T(f) = T(f_{max}) \left( \frac{f_{max}}{f} \right)^{\gamma_{cpu}}$    (2)

where $T(f)$ is the execution time of the application at CPU frequency $f$, $T(f_{max})$ is the execution time of the application at the maximum CPU frequency $f_{max}$, and $\gamma_{cpu}$ is the CPU boundness of the application. As the value of $\gamma_{cpu}$ decreases, the CPU boundness of the application also decreases, which allows potentially more energy reduction by using DVS in the servers within a data center (see Section 4.2). It is, however, important to note that the CPU boundness of an application varies with the CPU architecture, as well as with memory and disks. Like many prior studies [20,28], we still use this factor to model the CPU usage intensity of an application in a simple, generic manner, so as to optimize its energy consumption accordingly. For all our experiments, we have used the worst case value of $\gamma_{cpu} = 1$ to analyze the performance of our heuristics.

3.4. Problem description

Let a Cloud provider have N data centers distributed in different locations, and let J be the number of applications currently executing on these data centers. All parameters associated with a data center i are given in Table 2. Data center i incurs carbon emission based on its carbon emission rate r_i^CO2 (kg/kWh). To execute an application, the Cloud provider has to pay data center i the energy cost and data transfer cost, depending on its electricity price p_i^e ($/kWh) and data transfer price p_i^DT ($/GB) for upload/download respectively. The Cloud provider in turn charges the user fixed prices for executing his application, based on the CPU execution price p_c ($/CPU/hour) for the processing time and the data transfer price p^DTU ($/GB) for upload/download. A user submits his requirements for an application j in the form of a tuple (d_j, n_j, e_1j, ..., e_Nj, gamma_j^cpu, (DT)_j), where d_j is the deadline to complete application j, n_j is the number of CPUs required for the application's execution, e_ij is the application's execution time on data center i when operating at the maximum CPU frequency, gamma_j^cpu is the CPU boundness of the application, and (DT)_j is the size of the data to be transferred. For simplicity, we assume that users are able to specify their processing requirements (n_j, e_ij, and gamma_j^cpu). It is realistic for a user to specify n_j and e_ij in a utility computing environment, since the user needs to pay for usage. For instance, in Amazon EC2 [1], the user can buy compute units with a chosen number of processors, but on an hourly basis: even if the user's application finishes before one hour, the user needs to pay for the complete hour. In addition, let f_ij be the initial frequency at which the CPUs of data center i operate while executing application j. Executing application j on data center i then results in the following:

(i) Energy consumption of the CPUs:

$E_c^{ij} = (\beta_i + \alpha_i (f_{ij})^3) \, n_j \, e_{ij} \left( \frac{f_i^{max}}{f_{ij}} \right)^{\gamma_j^{cpu}}$    (3)

(ii) Total energy, which includes the cooling system and the CPUs:

$E_{ij} = \frac{COP_i + 1}{COP_i} E_c^{ij}$    (4)

(iii) Energy cost:

$C_e^{ij} = p_i^e \, E_{ij}$    (5)
(iv) Carbon emission:

$(CO_2E)_{ij} = r_i^{CO2} \, E_{ij}$    (6)

(v) Execution profit:

$(ProfExec)_{ij} = n_j \, e_{ij} \, p_c - C_e^{ij}$    (7)

(vi) Data transfer profit:

$(ProfData)_{ij} = (DT)_j \, (p^{DTU} - p_i^{DT})$    (8)

(vii) Profit:

$(Prof)_{ij} = (ProfExec)_{ij} + (ProfData)_{ij}$    (9)

The carbon emission (CO2E)_ij (Eq. (6)) incurred by application j is computed using the carbon emission rate r_i^CO2 of data center i. However, this means that (CO2E)_ij only reflects the average carbon emission incurred, since r_i^CO2 is an average rate. We can only use r_i^CO2, since the exact amount of carbon emission produced depends on the type of fuel used to generate the electricity, and no detailed data is available in this regard. The profit (Prof)_ij (Eq. (9)) gained by the Cloud provider from the execution of an application on data center i includes the execution profit (ProfExec)_ij and the input/output data transfer profit (ProfData)_ij. Studies [27,3] have shown that the ongoing operational costs (such as energy cost) of data centers greatly surpass their one-time capital costs (such as hardware and support infrastructure costs). Hence, when computing the execution profit (ProfExec)_ij, we assume that the CPU execution price p_c charged by the Cloud provider to the user already covers the one-time capital costs of the data centers, so that we only subtract the ongoing energy cost C_e^ij of executing applications from the revenue. The data transfer profit (ProfData)_ij is the difference between the cost paid by the user to the provider and the cost incurred for transferring the data to the data center. The meta-scheduling problem can then be formulated as:

Minimize Carbon Emission = $\sum_{i=1}^{N} \sum_{j=1}^{J} x_{ij} \, (CO_2E)_{ij}$    (10)

Maximize Profit = $\sum_{i=1}^{N} \sum_{j=1}^{J} x_{ij} \, (Prof)_{ij}$    (11)

Subject to:
(a) response time of application j < d_j;
(b) $f_i^{min} \le f_{ij} \le f_i^{max}$;
(c) $\sum_{i=1}^{N} x_{ij} \le 1$;
(d) x_ij = 1 if application j is allocated to data center i, and 0 otherwise.

The dual objective functions (10) and (11) of the meta-scheduling problem are to minimize the carbon emission and maximize the profit of the Cloud provider. Constraint (a) ensures that the deadline requirement of an application is met. But it is difficult to calculate the exact response time of an application, since applications have different sizes, require multiple CPUs, and have very dynamic arrival rates [12]. Moreover, this problem maps to the 2-dimensional bin-packing problem, which is NP-hard in nature [41] (see Appendix A for the proof). Hence we propose various scheduling policies to heuristically approximate the optimum.
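The per-pair quantities of Eqs. (3)-(9) can be evaluated directly. The sketch below is our illustration with invented data center parameters; only the $0.40/CPU/h execution price and the $0.17/GB user-side transfer price are taken from the text, and the provider-side transfer price p_dt is hypothetical.

```python
def co2_and_profit(beta, alpha, f, f_max, cop, r_co2, p_e,
                   n, e, gamma=1.0, p_c=0.40, dt_gb=0.0,
                   p_dtu=0.17, p_dt=0.10):
    """Evaluate Eqs. (3)-(9) for one application j on one data center i.
    e is the runtime (hours) at f_max; f is the chosen DVS frequency."""
    # Eq. (3): CPU energy in kWh (power in kW), runtime stretched by DVS.
    e_c = (beta + alpha * f ** 3) / 1000.0 * n * e * (f_max / f) ** gamma
    e_total = (cop + 1.0) / cop * e_c          # Eq. (4): add cooling energy
    cost_e = p_e * e_total                     # Eq. (5): energy cost ($)
    co2 = r_co2 * e_total                      # Eq. (6): carbon emission (kg)
    prof_exec = n * e * p_c - cost_e           # Eq. (7): execution profit
    prof_data = dt_gb * (p_dtu - p_dt)         # Eq. (8): data transfer profit
    return co2, prof_exec + prof_data          # Eq. (9): total profit

# Made-up data center and job: 64 CPUs for 2 h at 1.8 GHz (f_max = 2.4 GHz).
print(co2_and_profit(beta=65, alpha=7.5, f=1.8, f_max=2.4, cop=2.0,
                     r_co2=0.5, p_e=0.09, n=64, e=2.0))
```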

4. Meta-scheduling policies

The meta-scheduler periodically assigns applications to data centers at a fixed time interval called the scheduling cycle. This enables the meta-scheduler to potentially make a better selection when mapping from a larger pool of applications to the data centers, as compared to mapping at each application submission. In each scheduling cycle, the meta-scheduler collects the information from both data centers and users. In general, a meta-scheduling policy consists of two phases: (1) the mapping phase, in which the meta-scheduler first maps an application to a data center; and (2) the scheduling phase, in which the scheduling of applications is done within the data center, where the required time slots are chosen to execute the application. Depending on whether the objective of the Cloud provider is to minimize carbon emission or to maximize profit, we have designed various mapping policies, which are discussed in the subsequent section. To further reduce the energy consumption within the data center, we have designed a DVS based scheduling policy for the local scheduler of a data center.

4.1. Mapping phase (across many data centers)

We have designed the following meta-scheduling policies to map applications to data centers depending on the objective of the Cloud provider.

4.1.1. Minimizing carbon emission

The following policies optimize the global carbon emission of all data centers while keeping the number of deadline misses low.

Greedy Minimum Carbon Emission (GMCE): Since the aim is to minimize the carbon emission across all the data centers, we want as many applications as possible to be executed on the data centers with the least carbon emission. Hence applications are sorted by their deadline (earliest first) to reduce deadline misses, while data centers are sorted by their carbon emission (lowest first), computed as $r_i^{CO2} \frac{COP_i + 1}{COP_i} (\beta_i + \alpha_i (f_i^{max})^3)$. Each application is then mapped to a data center in this ordering.

Minimum-Carbon-Emission Minimum-Carbon-Emission (MCE-MCE): MCE-MCE is based on the Min-Min heuristic [35], which has performed very well in previous studies of different environments [7]. The meta-scheduler first finds the best data center for each application that is considered. Then, among these application-data center pairs, the meta-scheduler selects the best pair to map first. Since the aim is to minimize the carbon emission, the best pair has the minimum carbon emission (CO2E)_ij, i.e. the minimum fitness value of executing application j on data center i. MCE-MCE has the following steps:

Step 1: For each application in the list of applications to be mapped, find the data center whose carbon emission is the minimum, i.e. minimum (CO2E)_ij (the first MCE), among all data centers which can complete the application by its deadline. If there is no data center where the application can be completed by its deadline, the application is removed from the list of applications to be mapped.

Step 2: Among all the application-data center pairs found in Step 1, find the pair that results in the minimum carbon emission, i.e. minimum (CO2E)_ij (the second MCE). Then map the application to the data center and remove it from the list of applications to be mapped.

Step 3: Update the available time slots from the data centers.

Step 4: Repeat Steps 1 to 3 until all applications are mapped.
4.1.2. Maximizing profit

The following policies optimize the global profit of all data centers while keeping the number of deadline misses low.

Greedy Maximum Profit (GMP): Since the aim is to maximize the profit across all the data centers, we want as many applications as possible to be executed on the data centers with the least energy cost. Hence applications are sorted by their deadline (earliest first) to reduce deadline misses, while data centers are sorted by their energy cost (lowest first), computed as $p_i^{e} \frac{COP_i + 1}{COP_i} (\beta_i + \alpha_i (f_i^{max})^3)$. Each application is then mapped to a data center in this ordering.

Maximum-Profit Maximum-Profit (MP-MP): MP-MP works in the same way as MCE-MCE. However, since the aim is to maximize the profit, the best pair has the maximum profit (Prof)_ij, i.e. the maximum fitness value of executing application j on data center i. Hence the steps of MP-MP are the same as those of MCE-MCE, except for the following differences:

Step 1: For each application in the list of applications to be mapped, find the data center whose profit is the maximum, i.e. maximum (Prof)_ij (the first MP), among all data centers which can complete the application by its deadline.

Step 2: Among all the application-data center pairs found in Step 1, find the pair that results in the maximum profit, i.e. maximum (Prof)_ij (the second MP).

4.1.3. Minimizing carbon emission and maximizing profit (MCE-MP)

MCE-MP works in the same way as MCE-MCE. But, since the aim is to minimize the total carbon emission while maximizing the total profit across all the data centers, MCE-MP handles the tradeoff between carbon emission and profit, which may be conflicting. Hence the steps of MCE-MP are the same as those of MCE-MCE, except for the following differences:

Step 1: For each application in the list of applications to be mapped, find the data center whose carbon emission is the minimum, i.e. minimum (CO2E)_ij (the first MCE), among all data centers which can complete the application by its deadline.

Step 2: Among all the application-data center pairs found in Step 1, find the pair that results in the maximum profit, i.e. maximum (Prof)_ij (the second MP).

One more mapping policy can be designed to minimize carbon emission and maximize profit simultaneously by reversing the above two steps. This policy is named MP-MCE (Maximizing Profit and Minimizing Carbon Emission).
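The following sketch condenses the two kinds of mapping policy in Section 4.1 into code. It is our simplified rendering, not the paper's implementation: slot updates and deadline checks are abstracted behind a fits predicate, and taking the global minimum over all feasible pairs is equivalent to the two-step minimum of MCE-MCE.

```python
def carbon_cost(dc):
    """GMCE ordering key: r_CO2 * ((COP+1)/COP) * (beta + alpha * f_max^3)."""
    return dc["r_co2"] * (dc["cop"] + 1) / dc["cop"] * (
        dc["beta"] + dc["alpha"] * dc["f_max"] ** 3)

def gmce(apps, dcs, fits):
    """Greedy mapping: earliest-deadline apps onto lowest-carbon DCs.
    fits(app, dc) must check the deadline against the DC's free slots."""
    mapping = {}
    for app in sorted(apps, key=lambda a: a["deadline"]):
        for dc in sorted(dcs, key=carbon_cost):
            if fits(app, dc):
                mapping[app["id"]] = dc["id"]
                break
    return mapping

def mce_mce(apps, dcs, co2e, fits):
    """Min-Min style: repeatedly commit the globally cheapest feasible pair."""
    pending, mapping = list(apps), {}
    while pending:
        pairs = [(co2e(a, d), a, d) for a in pending
                 for d in dcs if fits(a, d)]
        if not pairs:
            break                      # remaining apps would miss deadlines
        _, app, dc = min(pairs, key=lambda p: p[0])
        mapping[app["id"]] = dc["id"]  # Step 2: map the best pair
        pending.remove(app)            # Step 3 (slot update) omitted here
    return mapping
```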

4.2. Scheduling phase (within a data center)

The energy consumption and carbon emission are further reduced within a data center by using DVS at the CPU level, which saves energy by scaling down the CPU frequency. Thus, before the meta-scheduler assigns an application to a data center, it decides the time slot in which the application should be executed and the frequency at which the CPUs should operate to save energy. But, since a lower CPU frequency can increase the number of applications rejected due to deadline misses, the scheduling of applications within the data center can be of two types: (1) CPUs run at the maximum frequency (i.e. without DVS), or (2) CPUs run at various frequencies using DVS (i.e. with DVS). It is important to adjust DVS appropriately in order to reduce the number of deadline misses and the energy consumption simultaneously. The meta-scheduler first tries to operate the CPU at the frequency in the range [f_i^min, f_i^max] nearest to the optimal CPU frequency $f_i^{opt} = \sqrt[3]{\beta_i / (2\alpha_i)}$ for $\gamma_{cpu} = 1$ (see Appendix B for the analysis of exploiting local minima in DVS). Since the CPUs of data center i can only operate in the interval [f_i^min, f_i^max], we define f_i^opt = f_i^min if f_i^opt < f_i^min, and f_i^opt = f_i^max if f_i^opt > f_i^max. If the deadline of an application would be violated, the meta-scheduler scales the CPU frequency up to the next level and then tries again to find free time slots to execute the application. If the meta-scheduler fails to schedule the application on the data center because no free time slot is available, the application is forwarded to the next data center for scheduling (the ordering of data centers depends on the various policies described in Section 4.1).
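A minimal sketch of this frequency selection follows (our code; the discrete levels and parameter values are made up). It derives f_opt from the power model and then scales up level by level until the deadline is met.

```python
def f_opt(beta, alpha, f_min, f_max):
    """Energy-optimal frequency for gamma_cpu = 1: minimizing
    (beta + alpha*f**3) * (f_max/f) over f gives f_opt = (beta/(2*alpha))**(1/3),
    clamped to the supported range [f_min, f_max]."""
    f = (beta / (2.0 * alpha)) ** (1.0 / 3.0)
    return min(max(f, f_min), f_max)

def pick_frequency(levels, beta, alpha, runtime_fmax, deadline):
    """Start at the level nearest f_opt and scale up until the
    DVS-stretched runtime (gamma_cpu = 1) meets the deadline."""
    f_max = max(levels)
    best = f_opt(beta, alpha, min(levels), f_max)
    start = min(levels, key=lambda f: abs(f - best))
    for f in sorted(l for l in levels if l >= start):
        if runtime_fmax * f_max / f <= deadline:
            return f
    return None  # no level meets the deadline in this data center

levels = [0.9, 1.2, 1.5, 1.8, 2.4]  # made-up discrete levels (GHz)
print(pick_frequency(levels, beta=65, alpha=7.5, runtime_fmax=3.0, deadline=4.0))
```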
4.3. Lower bound and upper bound

Due to the NP hardness of the meta-scheduling problem described in Section 3.4, it is difficult to find the optimal profit and carbon emission in polynomial time. Thus, to estimate the performance of our scheduling algorithms, we present a lower bound for the carbon emission and an upper bound for the profit of the Cloud provider. Both bounds are derived from the principle that we get the minimum carbon emission or the maximum profit when most of the applications are executed on the most efficient data center, and also at the optimal CPU frequency. For carbon emission minimization, the most efficient data center incurs the minimum carbon emission for executing applications, while for profit maximization, the most efficient data center results in the minimum energy cost.

For the sole purpose of deriving the lower and upper bounds, we relax three constraints of our system model, so as to map the maximal number of applications to the most efficient data center. First, we relax the constraint that an application executed at the maximum CPU frequency incurs the maximum energy consumption. Instead, we assume that even though all applications are executed at the maximum CPU frequency, the actual energy consumed by them, for the purpose of calculating their carbon emission, remains that of the optimal CPU frequency under DVS. Second, although the applications considered in the system model are parallel applications with fixed CPU requirements, we relax this constraint to applications that are moldable in the required number of CPUs. Thus, the runtime of an application decreases linearly when it is scheduled on a larger number of CPUs. This in turn increases the number of applications that can be allocated to the most efficient data center with the minimum energy possible. Third, the applications in the system model arrive dynamically across many different scheduling cycles, but for deriving the bounds, all applications are considered in only one scheduling cycle and then mapped to data centers. This forms the best, ideal bound scenario, where all the incoming applications are known in advance. Hence, the actual dynamic scenario definitely has worse performance than the ideal bound scenario. It is important to note that the bounds on carbon emission and profit obtained under these three assumptions are unreachable, loose bounds of the system model. This is because the data centers will be executing the maximum possible workload with 100% utilization of their CPUs, while the least possible energy consumption is still assumed for the purpose of comparison.

Let TWL be the total workload scheduled, TCE be the total carbon emission, and TP be the total profit. The lower bound for the carbon emission is derived through the following steps:

Step 1: Applications are sorted by their deadline (earliest first) to reduce the deadline misses, while data centers are sorted by their carbon emission (lowest first), computed as $r_i^{CO2} \frac{COP_i + 1}{COP_i} (\beta_i + \alpha_i (f_i^{max})^3)$. Each application is then mapped to a data center in this ordering.

Step 2: For each application j, search for a data center, starting from the most efficient one, where the application can be scheduled without missing its deadline when running at the maximum CPU frequency.

Step 3: If no data center is found, application j is removed from the list of potential applications. Go to Step 2 to schedule the other applications.

Step 4: If a data center i is found, application j is assigned to it and molded such that there is no fragmentation in the schedule of data center i for executing applications.

Step 5: TWL += n_j * e_ij.

Step 6: TCE += r_i^CO2 * ((COP_i + 1)/COP_i) * (power consumption of the CPU at the optimal CPU frequency) * n_j * (execution time of application j at the optimal CPU frequency).

Step 7: TP += (p_c - p_i^e * ((COP_i + 1)/COP_i) * (power consumption of the CPU at the optimal CPU frequency)) * n_j * (execution time of application j at the optimal CPU frequency).

Step 8: Repeat from Step 2 until all applications are scheduled.

TCE/TWL is then the lower bound on the average carbon emission due to the execution of all applications across the multiple data centers of the Cloud provider. To derive the upper bound for the profit, the steps remain the same, except that in Step 1, data centers are sorted by their energy cost (lowest first), computed as $p_i^{e} \frac{COP_i + 1}{COP_i} (\beta_i + \alpha_i (f_i^{max})^3)$. TP/TWL is then the upper bound on the average profit.

5. Performance evaluation

Configuration of applications: We use workload traces from Feitelson's Parallel Workload Archive (PWA) [23] to model the HPC workload. Since this paper focuses on studying the requirements of Cloud users with HPC applications, the PWA meets our objective by providing workload traces that reflect the characteristics of real parallel applications. Our experiments utilize the first week of the LLNL Thunder trace (January 2007 to June 2007). The LLNL Thunder trace from the Lawrence Livermore National Laboratory (LLNL) in the USA is chosen due to its highest resource utilization of 87.6% among the available traces, to ideally model a heavy workload scenario. From this trace, we obtain the submit time, requested number of CPUs, and actual runtime of applications. We set the CPU boundness of all workload to 1 (i.e. gamma_cpu = 1) to examine the worst case scenario of CPU energy usage. We use a methodology proposed by Irwin et al. [36] to synthetically assign deadlines through two classes, namely Low Urgency (LU) and High Urgency (HU). An application in the LU class has a high deadline/runtime ratio, so that its deadline is definitely longer than its required runtime. Conversely, an application in the HU class has a low deadline/runtime ratio. Values are normally distributed within each of the high and low deadline parameters. The ratio of the deadline parameter's high-value mean to its low-value mean is known as the high:low ratio. In our experiments, the deadline high:low ratio is 3, while the low-value deadline mean and variance are 4 and 2 respectively.
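A sketch of this deadline assignment, under our reading of the Irwin et al. [36] methodology (we assume the sampled value scales the runtime; the helper below is illustrative, not the paper's code):

```python
import random

# Parameters from the text: high:low ratio = 3, low-value mean = 4,
# variance = 2 (std = sqrt(2)); the HU share is varied per experiment.
LOW_MEAN, VARIANCE, HIGH_LOW_RATIO = 4.0, 2.0, 3.0

def assign_deadline(runtime, hu_fraction=0.4):
    """Sample a deadline as runtime times a normally distributed factor;
    HU apps draw from the low-mean class, LU apps from the high-mean one."""
    mean = (LOW_MEAN if random.random() < hu_fraction
            else LOW_MEAN * HIGH_LOW_RATIO)
    factor = max(1.0, random.gauss(mean, VARIANCE ** 0.5))
    return runtime * factor

print(assign_deadline(runtime=3600.0))
```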
In other words, LU applications have a high-value deadline mean of 12, which is 3 times longer than that of HU applications, whose low-value deadline mean is 4. The arrival sequence of applications from the HU and LU classes is randomly distributed.

Configuration of data centers: We model 8 data centers with different configurations, as listed in Table 3.

Table 3
Characteristics of data centers. For each location, the table lists the carbon emission rate(a) (kg/kWh), electricity price(b) ($/kWh), CPU power factors (beta, alpha), CPU frequency levels (f_max, f_opt), and number of CPUs. Locations: New York (USA), Pennsylvania (USA), California (USA), Ohio (USA), North Carolina (USA), Texas (USA), France, and Australia.

(a) Carbon emission rates are derived from a US Department of Energy (DOE) document (Appendix F: Electricity Emission Factors 2007) [14].
(b) Electricity prices are average commercial prices up to 2007, based on a US Energy Information Administration (EIA) report [16].

Carbon emission rates and electricity prices at the various data center locations are averages over the entire region, derived from the data published by the US Department of Energy [14] and the Energy Information Administration [16]. We derived the power related parameters from recent work presented by Wang and Lu [60], who also focus on the similar problem of considering the energy consumption of heterogeneous CPUs. They used the experimental data from the work by Rusu et al. [51], the Desktop CPU Power Survey by Silent PC Review [13], and the CPU Performance Charts by Tom's Hardware [33] to estimate various server parameters. The CPU Performance Charts [33] provide a comprehensive comparison of AMD and Intel processors using more than 20 benchmarks, such as audio and video encoding tools and multitasking applications. Similar empirical data with experimental details are given in the Desktop CPU Power Survey [13] for the power and energy efficiency of various Intel and AMD processors, such as the Athlon and the Pentium 4 630, at idle and full load. Thus, the power parameters (i.e. CPU power factors and frequency levels) of the CPUs at the different data centers are derived based on these experimental data [60]. The values of alpha and beta are set such that the ratio of static power to dynamic power covers a wide variety of CPUs. Current commercial CPUs only support discrete frequency levels, such as the Intel Pentium M 1.6 GHz CPU, which supports six voltage levels. We therefore consider discrete CPU frequencies with 5 levels in the range [f_i^min, f_i^max]. For the lowest frequency f_i^min, we use the same value used by Wang and Lu [60], i.e. f_i^min is 37.5% of f_i^max.

To increase the utilization of data centers and reduce the fragmentation in the scheduling of parallel applications, the local scheduler at each data center uses Conservative Backfilling with advance reservation support, as proposed by Mu'alem and Feitelson [43]. The meta-scheduler schedules applications periodically at each scheduling cycle of 50 s, which ensures that the meta-scheduler receives at least one application in every scheduling cycle. The COP (cooling efficiency) value of the data centers is randomly generated using a uniform distribution between [0.6, 3.5], as indicated in the study conducted by Greenberg et al. [32]. To avoid the energy cost of a data center exceeding the revenue generated by the Cloud provider, the CPU execution price charged by the provider to the user is fixed at $0.40/CPU/h, which is approximately twice the maximum energy cost at a data center.
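One way such a configuration could be synthesized for simulation is sketched below; the distributions follow the text, while the frequency and power values are invented placeholders for the Table 3 entries.

```python
import random

def make_data_center(f_max, beta, alpha, n_cpus, r_co2, p_e):
    """Build one simulated data center record. COP ~ U[0.6, 3.5] as in
    the text; 5 discrete frequency levels with f_min = 37.5% of f_max."""
    f_min = 0.375 * f_max
    levels = [f_min + k * (f_max - f_min) / 4.0 for k in range(5)]
    return {"cop": random.uniform(0.6, 3.5), "levels": levels,
            "beta": beta, "alpha": alpha, "n_cpus": n_cpus,
            "r_co2": r_co2, "p_e": p_e}

# One made-up site; the paper instead takes per-site values from Table 3.
dc = make_data_center(f_max=2.4, beta=65, alpha=7.5,
                      n_cpus=2048, r_co2=0.5, p_e=0.09)
print(dc["levels"])
```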
Performance metrics: We observe the performance from both the user and the provider perspectives. From the provider perspective, four metrics are needed to compare the policies: average energy consumption, average carbon emission, profit gained, and workload executed. The average energy consumption compares the amount of energy saved by the different scheduling algorithms, whereas the average carbon emission compares the corresponding environmental impact. Since minimizing the carbon emission can affect a Cloud provider economically by decreasing its profit, we consider the profit gained as another metric to compare the different algorithms. It is important to know the effect of the various meta-scheduling policies on energy consumption, since higher energy consumption is likely to generate more carbon emission, with worse environmental impact, and to incur more energy cost for operating the data centers. From the user perspective, we observe the performance while varying: (1) the urgency class and (2) the arrival rate of applications. For the urgency class, we use various percentages (0%, 20%, 40%, 60%, 80%, and 100%) of HU applications. For instance, if the percentage of HU applications is 20%, then the percentage of LU applications is the remaining 80%. For the arrival rate, we use various factors (10 (low), 100 (medium), 1000 (high), and (very high)) applied to the submit times from the trace. For example, a factor of 10 means that an application with a submit time of 10 s in the trace now has a simulated submit time of 1 s. Hence, a higher factor represents a higher workload by shortening the submit times of the applications.

Experimental scenarios: To comprehensively evaluate the performance of our algorithms, we examine various experimental scenarios, classified as follows.

(a) Evaluation without data transfer cost (Section 6.1): In the first set of experiments (Section 6.1.1), we evaluate the importance of our mapping policies, which consider global factors such as carbon emission rate, electricity price and data center efficiency. In these experiments, we also evaluate the effectiveness of exploiting local minima in our local DVS policy (Sections 6.1.1 and 6.1.2). Then, in the next set of experiments (Section 6.1.3), we compare the performance of our proposed algorithms with the lower bound (carbon emission) and the upper bound (profit). To evaluate the overall best among the proposed algorithms, we conduct more experiments varying the different factors which can affect their performance:
  Impact of urgency and arrival rate of applications (Section 6.1.4)
  Impact of carbon emission rate (Section 6.1.5)
  Impact of electricity price (Section 6.1.6)
  Impact of data center efficiency (Section 6.1.7).

(b) Evaluation with data transfer cost (Section 6.2): In the last set of experiments (Section 6.2.1), we examine how the data transfer cost affects the performance of our algorithms.

6. Analysis of results

This section presents the evaluation of our proposed mapping and scheduling policies based on various metrics such as carbon emission and profit. During experimentation, the performance of the MP-MCE policy in terms of carbon emission and profit was observed to be very similar to GMP, with no additional benefits. Hence, to save space, the results for MP-MCE are not presented in the paper.

Fig. 4. Effect of mapping policy and DVS: (a) carbon emission vs. urgency; (b) carbon emission vs. arrival rate; (c) energy cost vs. urgency; (d) energy cost vs. arrival rate; (e) workload executed vs. urgency; (f) workload executed vs. arrival rate.

6.1. Evaluation without data transfer cost

6.1.1. Effect of mapping policy and DVS

As discussed in Section 4, our meta-scheduling policies are designed to save energy in two phases, first in the mapping phase and then in the scheduling phase. Hence, in this section, we examine the importance of each phase in saving energy. These experiments also answer the question of why we require special energy saving schemes in both phases. First, we examine the importance of considering the global factors in the mapping phase by comparing meta-scheduling policies without the energy saving feature in the local scheduling phase, i.e. when DVS is not available at the local scheduler. We name the without-DVS versions of the carbon emission based policy (GMCE) and the profit based policy (GMP) as GMCE-WithoutDVS and GMP-WithoutDVS respectively. The results in Fig. 4 show that the consideration of the various global factors can not only decrease the carbon emission, but also decrease the overall energy consumption. For various urgencies of applications (Fig. 4(a)), GMCE-WithoutDVS can avoid up to 10% of the carbon emission of GMP-WithoutDVS. For various arrival rates of applications (Fig. 4(b)), GMCE-WithoutDVS can produce up to 23% less carbon emission than GMP-WithoutDVS. The corresponding difference in energy cost (Fig. 4(c) and (d)) between them is very small (about 0%-6%). This is because, with the decrease in energy consumption due to the execution of the HPC workload, both carbon emission and energy cost automatically decrease. This trend still holds when comparing GMCE and GMP, both of which use DVS in the scheduling phase.

Fig. 5. Exploiting local minima in DVS: (a) energy consumption vs. urgency; (b) energy consumption vs. arrival rate; (c) workload executed vs. urgency; (d) workload executed vs. arrival rate.

Next, we examine the impact of the scheduling phase on energy consumption by comparing meta-scheduling policies with DVS (GMCE and GMP) and without DVS (GMCE-WithoutDVS and GMP-WithoutDVS). With DVS, the energy cost (Fig. 4(c)) of executing the HPC workload is reduced on average by 33% when we compare GMP with GMP-WithoutDVS. With the increase in HU applications, the gap widens, and we get an almost 50% decrease in energy cost, as shown in Fig. 4(c). With the increase in arrival rate, we get a consistent 25% gain in energy cost by using DVS (Fig. 4(d)). The carbon emission is also further reduced, on average by 13%, with the increase in urgent applications, as shown in Fig. 4(a). With the increase in arrival rate, the HPC workload executed decreases for the policies using DVS, as can be observed from Fig. 4(f). This is because the execution of applications at lower CPU frequencies results in more rejections of urgent applications when the arrival rate is high. Thus, the HPC workload executed by the policies without DVS is almost the same even when the arrival rate is very high.

Finally, we examine the overall trend of the change in carbon emission, workload and energy cost with respect to the number of urgent applications and the job arrival rate. In Fig. 4, with the increase in the number of urgent applications, the energy cost and carbon emission increase while the amount of workload executed decreases. This is due to more applications being scheduled on less carbon-efficient sites in order to avoid missing deadlines. This is also the reason why all four policies execute decreasing workload as the number of HU applications increases: with more urgent applications, many applications miss their deadline and thus get rejected. Due to the cubic relationship between energy and CPU frequency, the carbon emission and energy cost increase more rapidly than the workload decreases. With respect to the job arrival rate, the change in energy cost, carbon emission and workload is very small. This is because the execution of jobs at lower CPU frequencies results in more rejections of urgent jobs when the arrival rate is high.

6.1.2. Exploiting local minima in DVS

We want to highlight the importance of exploiting local minima in the DVS function when scheduling within a data center. But, to correctly highlight the difference in DVS performance for the scheduling phase of the meta-scheduling policy, we need a mapping-phase policy that is independent of our proposed policies. Hence, we use EDF-EST, where the applications are ordered based on Earliest Deadline First (EDF), while the data centers are ordered based on Earliest Start Time (EST). We name our proposed DVS scheme, which exploits the local minima in the DVS function, EDF-EST-withOurDVS. It is compared to a previously proposed DVS scheme, named EDF-EST-withPrevDVS, in which the CPU frequency is scaled up linearly between [f_min, f_max] [60,38]. Fig. 5 shows that EDF-EST-withOurDVS has not only outperformed EDF-EST-withPrevDVS by saving about 35% of the energy, but has also executed about 30% more workload. This is because EDF-EST-withPrevDVS tries to run applications at the minimum CPU frequency f_min, which may not be the optimal frequency. As discussed in Appendix B and shown in Fig. B.1, an application executed at f_min may not lead to the least energy consumption, due to the presence of local minima.
Moreover, executing applications at a lower frequency results in a lower acceptance of applications, since fewer CPUs are available. Thus, it is important to exploit such characteristics when designing the scheduling policy within a data center.

Fig. 6. Comparison of lower bound and upper bound: (a) carbon emission; (b) profit.

6.1.3. Comparison of lower bound and upper bound

To evaluate the performance of our algorithms in terms of the carbon emission reduced and the profit gained by the Cloud provider, we compare our algorithms with the theoretically unreachable bounds. Fig. 6 shows how closely the different policies perform relative to the lower bound on average carbon emission and the upper bound on average profit. In Fig. 6(a), the difference in average carbon emission between the carbon emission based policies (GMCE, MCE-MCE, and MCE-MP) and the lower bound is less than about 16%, which becomes less than about 2% in the case of 20% HU applications. On the other hand, in Fig. 6(b), the difference in average profit between the profit based policies (GMP and MP-MP) and the upper bound is less than about 2%, which becomes less than about 1% in the case of 40% HU applications. Hence, in summary, our carbon emission based and profit based policies perform within about 16% and 2% of the optimal carbon emission and profit respectively. In Fig. 6(a) and (b), with the increase in HU applications, the difference between the lower/upper bounds and the various policies increases. This is due to the increasing looseness of the bounds as the number of HU applications grows. To avoid deadline misses with a higher number of HU applications, our proposed policies schedule more applications at higher CPU frequencies, which results in higher energy consumption. This in turn leads to an increase in the carbon emission and a decrease in the profit. In contrast, for computing the lower/upper bounds, we only consider energy consumption at the optimal CPU frequency. Thus, the effect of urgency on the bounds is not as considerable as in our policies. This explains why our policies are closer to the bounds for a lower number of HU applications.

6.1.4. Impact of urgency and arrival rate of applications

Fig. 7 shows how the urgency and arrival rate of applications affect the performance of the carbon emission based policies (GMCE, MCE-MCE, and MCE-MP) and the profit based policies (GMP and MP-MP). The metrics of total carbon emission and total profit are used, since the Cloud provider needs to know the collective loss in carbon emission and gain in profit across all data centers. When the number of HU applications increases, the total profit of all policies (Fig. 7(c)) decreases almost linearly, by about 45% from 0% to 100% HU applications. Similarly, there is also a drop in total carbon emission (Fig. 7(a)). This fall in total carbon emission and total profit is due to the lower acceptance of applications, as observed in Fig. 7(e). In Fig. 7(a), the decrease in total carbon emission for the profit based policies (GMP and MP-MP) is much larger than that of the carbon emission based policies (MCE-MP, GMCE, and MCE-MCE). This is because the carbon emission based policies schedule applications on more carbon-efficient data centers. Likewise, the increase in arrival rate also affects the total carbon emission (Fig. 7(b)) and total profit (Fig. 7(d)). As more applications are submitted, fewer applications can be accepted (Fig. 7(f)), since it is harder to satisfy their deadline requirements when the workload is high.

6.1.5. Impact of carbon emission rate

To examine the impact of the carbon emission rate in different locations on our policies, we vary the carbon emission rate while keeping all other factors, such as the electricity price, the same.
Using a normal distribution with mean = 0.2, random values are generated for the following three classes of carbon emission rate across all data centers: (A) low variation (low) with standard deviation = 0.05, (B) medium variation (medium) with standard deviation = 0.2, and (C) high variation (high) with standard deviation = 0.4. All experiments are conducted at the medium job arrival rate with 40% HU applications. The performance of all policies is similar for all three cases of carbon emission rate. For example, in Fig. 8(a), the carbon emission of the profit based policies (GMP and MP-MP) is always higher than that of the carbon emission based policies (GMCE, MCE-MCE, and MCE-MP). Similarly, for profit (Fig. 8(b)), all profit based policies perform better than all carbon emission based policies. For instance, in Fig. 8(a), the difference in carbon emission between MCE-MCE and MP-MP is about 12% for low variation, which increases to 33% for high variation. On the other hand, in Fig. 8(b), the corresponding decrease in profit is almost negligible, less than 1% in both the low and the high variation case. Moreover, comparing MCE-MCE and MP-MP in Fig. 8(c), the amount of workload executed by MCE-MCE is slightly higher than that of MP-MP. Thus, in the case of high variation in carbon emission rate, Cloud providers can use carbon emission based policies such as MCE-MCE to considerably reduce carbon emission with almost negligible impact on their profit. For minimizing carbon emission, MCE-MCE is preferred over GMCE, since the latter leads to lower profit due to scheduling more applications on data centers with higher electricity prices.

6.1.6. Impact of electricity price

To investigate the impact of the electricity price in different locations on our policies, we vary the electricity price while keeping all other factors, such as the carbon emission rate, the same. Using a normal distribution with mean = 0.1, random values are generated for the following three classes of electricity price across all data centers: (A) low variation (low) with standard deviation = 0.01, (B) medium variation (medium) with standard deviation = 0.02, and (C) high variation (high). All experiments are conducted at the medium job arrival rate with 40% HU applications.

[Fig. 7. Impact of urgency and arrival rate of applications: (a) carbon emission vs. urgency, (b) carbon emission vs. arrival rate, (c) profit vs. urgency, (d) profit vs. arrival rate, (e) workload executed vs. urgency, (f) workload executed vs. arrival rate.]

The variation in electricity price affects the performance of the profit based policies (GMP and MP-MP) in terms of carbon emission (Fig. 9(a)) and workload executed (Fig. 9(c)), while the carbon emission based policies (GMCE, MCE-MCE, and MCE-MP) are not affected. However, the profit of all policies decreases further as the variation in electricity price increases (Fig. 9(b)), due to the subtraction of the energy cost from the profit. For high variation in electricity price, there is little difference (about 1.4%) in carbon emission between MP-MP and MCE-MCE (Fig. 9(a)). Hence, Cloud providers can use MP-MP, which gives a slightly better average profit than the carbon emission based policies (GMCE, MCE-MCE, and MCE-MP). On the other hand, when the variation in electricity price is not high, providers can use carbon emission based policies such as MCE-MCE and MCE-MP to reduce carbon emission by about 5%-7% while sacrificing less than 0.5% of profit.

Impact of data center efficiency

To study the impact of data center efficiency at different locations on our policies, we vary the data center efficiency = COP/(COP + 1), while keeping all other factors, such as carbon emission rate, the same. Using a normal distribution with mean = 0.4, random values are generated for the following three classes of data center efficiency across all data centers: (A) low variation (low) with standard deviation = 0.05, (B) medium variation (medium) with standard deviation = 0.12, and (C) high variation (high) with standard deviation = 0.2. All experiments are conducted at the medium job arrival rate with 40% HU applications.
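The efficiency expression follows from the usual definition of the Coefficient of Performance (COP) as the heat removed per unit of energy spent on cooling: a data center then draws an extra 1/COP watts of cooling power per watt of IT power, so the useful fraction of total power is COP/(COP + 1). The worked example below is our own illustration with hypothetical numbers.

```python
# Worked example (our own illustration): with COP defined as heat removed per
# unit of cooling energy, a data center drawing p_it watts for computation
# spends an extra p_it / COP watts on cooling, so the fraction of total power
# doing useful work is COP / (COP + 1).
def datacenter_efficiency(cop):
    return cop / (cop + 1.0)

for cop in (1.0, 2.0, 4.0):
    p_it = 100.0                          # kW of IT load (hypothetical)
    p_total = p_it * (1.0 + 1.0 / cop)    # IT plus cooling power
    print(cop, datacenter_efficiency(cop), p_it / p_total)
# COP = 4 gives efficiency 0.8: only 20% of total power goes to cooling.
```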

[Fig. 8. Impact of carbon emission rate: (a) carbon emission, (b) profit, (c) workload executed.]

[Fig. 9. Impact of electricity price: (a) carbon emission, (b) profit, (c) workload executed.]

[Fig. 10. Impact of data center efficiency: (a) carbon emission, (b) profit, (c) workload executed.]

Fig. 10(a) shows that the carbon emission based policies (GMCE, MCE-MCE, and MCE-MP) achieve the lowest carbon emission, with almost equal values. MCE-MCE performs better than MCE-MP by scheduling more HPC workload (Fig. 10(c)) while achieving similar profit (Fig. 10(b)). But when the variation in data center efficiency is high, GMCE can execute a much higher workload (Fig. 10(c)) than MCE-MCE and MCE-MP while achieving only slightly less profit than the profit based policies (GMP and MP-MP) (Fig. 10(b)). Thus, Cloud providers can use GMCE to decrease the carbon emissions across their data centers without significant profit loss.

Evaluation with data transfer cost

Impact of data transfer cost

The data transfer cost of the Cloud provider varies across different data centers. Thus, to study the impact of the data transfer cost on our policies, we vary the data transfer cost while keeping all other factors, such as carbon emission rate and electricity price, the same. Since this paper focuses on compute-intensive parallel applications with low data transfer requirements, the maximum data transfer size of an application is set to only 10 TB. For this set of experiments, the Cloud provider charges the user a fixed price of $0.17/GB for data transfers up to 10 TB, which is derived from Amazon EC2 [1]. Since the workload traces used for the experiments do not contain any information on input or output data, the data transferred during execution is randomly generated during the simulation. The data transfer size of an application is varied between [0, 10] TB using a uniform distribution. The data transfer cost that the Cloud provider has to incur is varied between $[0, 0.17] using a normal distribution. Random values are generated for the following three classes of data transfer cost across all data centers: (A) low variation (low) with standard deviation = 0.05, (B) medium variation (medium) with standard deviation = 0.12, and (C) high variation (high) with standard deviation = 0.2. All experiments are conducted at the medium job arrival rate with 40% HU applications.

Fig. 11 shows how the average carbon emission and profit are affected by the data transfer cost, in comparison to the case where the data transfer cost is not considered (indicated by WithoutDT). The relative performance of all policies remains almost the same even with the data transfer cost. For instance, in Fig. 11(a) and (c), MP-MP results in the maximum average carbon emission, while MCE-MCE results in the minimum carbon emission. This is because of the compute-intensive workload, for which the impact of the data transfer cost is negligible in comparison to the execution cost. There is only a slight increase in the average profit (Fig. 11(b)) due to the additional profit gained by the Cloud provider from the transfer of data.
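A minimal sketch of this sampling step is given below, assuming NumPy. The helper name, the distribution mean, and the clipping of costs to the charged price are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
PRICE_PER_GB = 0.17  # $ charged to the user, derived from Amazon EC2 [1]

# Hypothetical reconstruction of the simulation step described above: each
# application gets a data transfer size drawn uniformly from [0, 10] TB, and
# each data center a per-GB transfer cost clipped to the $[0, 0.17] range.
def sample_transfer(num_jobs, num_datacenters, cost_std):
    size_gb = rng.uniform(0.0, 10.0, size=num_jobs) * 1024.0  # TB -> GB
    cost_per_gb = np.clip(rng.normal(0.12, cost_std, size=num_datacenters),
                          0.0, PRICE_PER_GB)  # 0.12 is an assumed mean
    return size_gb, cost_per_gb

sizes, costs = sample_transfer(num_jobs=1000, num_datacenters=8, cost_std=0.05)
# Net data transfer profit if job j runs at data center i:
#     sizes[j] * (PRICE_PER_GB - costs[i])
```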
7. Concluding remarks and future directions

The usage of energy has become a major concern since the price of electricity has increased dramatically. In particular, Cloud providers need a large amount of electricity to run and maintain their computational resources in order to provide the best service level for customers. Although this importance has been emphasized in much of the research literature, the combined approach of analyzing profit and energy sustainability in the resource allocation process has not been taken into consideration. The goal of this paper is to outline how managing resource allocation across multiple locations can have an impact on the energy cost of a provider. The overall meta-scheduling problem is described as an optimization problem with dual objective functions. Due to its NP-hard characteristic, several heuristic policies are proposed and compared, both against each other for different scenarios and against the derived lower/upper bounds. In some cases, the policies performed very well, coming within about 1% of the upper bound of profit. By introducing DVS and hence lowering the supply voltage of CPUs, the energy cost for executing HPC workloads can be reduced by 33% on average. Applications then run on CPUs at a lower frequency than expected, but they still meet the required deadlines.
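To illustrate this trade-off, the sketch below applies the power and execution-time models of Appendix B with hypothetical CPU constants; it is an illustration of the mechanism, not the paper's scheduler.

```python
# Illustration of the DVS trade-off using the models from Appendix B:
# power P(f) = beta + alpha * f**3, and execution time scaled as
# t(f) = t_max * (gamma * f_max / f + 1 - gamma). Constants are hypothetical.
beta, alpha, gamma = 60.0, 10.0, 1.0   # W, W/GHz^3, CPU power efficiency
f_max, t_max = 2.0, 3600.0             # GHz, seconds at full frequency

def exec_time(f):
    return t_max * (gamma * f_max / f + 1.0 - gamma)

def energy(f):
    return (beta + alpha * f**3) * exec_time(f)

deadline = 2.0 * 3600.0
for f in (2.0, 1.6, 1.4):
    ok = exec_time(f) <= deadline
    print(f, round(exec_time(f)), round(energy(f)),
          "meets deadline" if ok else "misses deadline")
# Lowering f stretches the runtime but, as long as the deadline still holds,
# can cut the energy consumed substantially.
```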

[Fig. 11. Impact of data transfer cost: (a) carbon emission, (b) profit, (c) workload executed.]

Table 4
Summary of heuristics with comparison results (overall performance under each factor).

| Meta-scheduling policy | Description | Time complexity | HU jobs | Arrival rate | Carbon emission rate | Data center efficiency | Energy cost |
| GMCE | Greedy (Carbon Emission) | O(NJ) | Bad | Bad | Bad | Best (high) | Bad |
| MCE-MCE | Two-phase Greedy (Carbon Emission) | O(NJ²) | Good (low) | Good (low) | Best (high) | Okay (low) | Good (low) |
| GMP | Greedy (Profit) | O(NJ) | Okay | Okay (high) | Bad (low) | Bad (high) | Bad (high) |
| MP-MP | Two-phase Greedy (Profit) | O(NJ²) | Good (high) | Bad (Carbon Emission), Best (Profit) | Good (low) | Best (low) | Good (high) |
| MCE-MP | Two-phase Greedy (Carbon Emission and Profit) | O(NJ²) | Best (low) | Good (high) | Okay | Okay | Best (low) |

Limits on carbon emission can be enforced by governments to ensure compliance with certain threshold values [15]. In such cases, Cloud providers can focus on reducing carbon emission in addition to minimizing energy consumption. We identified that policies like MCE-MCE can help providers to reduce their emissions while almost maintaining their profit. If the provider faces a volatile electricity price, the MP-MP policy will lead to a better outcome. Depending on the environmental and economic constraints, Cloud providers can selectively choose different policies to efficiently allocate their resources to meet customers' requests. The characteristics and performance of each meta-scheduling policy are summarized in Table 4, where low and high represent the scenario for which the overall performance of the policy is given. For instance, GMCE performs best when the variation in data center efficiency is high, while MCE-MP performs best when the variation in energy cost is low or when there is a low number of HU applications.
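As a sketch of such selective use, the guidance of Table 4 could be encoded as a simple lookup from the provider's dominant scenario to a policy. The scenario labels, function, and fallback below are hypothetical, not part of the paper's system.

```python
# Hypothetical dispatch encoding the guidance of Table 4: pick a
# meta-scheduling policy from the dominant scenario the provider faces.
RECOMMENDED_POLICY = {
    "high_datacenter_efficiency_variation": "GMCE",
    "high_carbon_rate_variation": "MCE-MCE",
    "volatile_electricity_price": "MP-MP",
    "low_energy_cost_variation": "MCE-MP",
    "few_hu_applications": "MCE-MP",
}

def choose_policy(scenario):
    # Fall back to MCE-MCE, which trades little profit for lower emissions.
    return RECOMMENDED_POLICY.get(scenario, "MCE-MCE")

print(choose_policy("volatile_electricity_price"))  # -> MP-MP
```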

We observed that the impact of the data transfer cost is minimal for the compute-intensive applications that we have considered. However, our model explicitly considers the data transfer cost, and it can thus be used for data-intensive applications as well. In the future, we would like to extend our model to consider the aspect of turning servers on and off, which can further reduce energy consumption. This requires a more technical analysis of the delay and power consumption involved in suspending servers, as well as of the effect on the reliability of computing devices. We would also like to extend our policies to virtualized environments, where it can be easier to consolidate many applications on fewer physical servers. In addition, we can consider the energy (and potentially latency) overhead of moving data sets between the data centers, in particular for data-intensive HPC applications. This overhead can be quite significant depending on the size of the data set and the activity of the workloads.

Acknowledgments

We would like to thank Marcos Dias de Assuncao for his constructive comments on this paper. This work is partially supported by research grants from the Australian Research Council (ARC) and the Australian Department of Innovation, Industry, Science and Research (DIISR).

Appendix A. Proof of 2-dimensional bin-packing problem

Definition 1. Let L = (x_1, ..., x_i, ..., x_n) be a given list of n items with values x_i \in (0, 1], and let B = b_1, ..., b_m be a finite sequence of m bins, each of unit capacity. The 2-dimensional bin-packing problem is to assign each x_i to a unique bin, with the sum of the numbers in each b \in B not exceeding one, such that the total number of used bins is a minimum (denoted by L*) [31].

Proposition 1. The optimization problem described in Eqs. (10) and (11) is an NP-hard problem.

Proof. This proposition can be proven by reducing the problem to the (2-dimensional) bin-packing problem [31], which is a well-known NP-hard problem. The number of bins m is equal to the N available data centers. The dimensions of an application consist of the two parameters d_j and e_j. However, e_j depends on the frequency of the CPUs of the data center. By defining a transformation function \rho : R \times R \to R, we can transform e_j to e'_j. This restriction considers only data centers with the same frequency for all CPUs. Consequently, with f(d_j, e'_j) = x_i and by Definition 1, it is a 2-dimensional bin-packing problem defined by the deadline and runtime of an application.

Appendix B. Analysis of exploiting local minima in DVS

Section 3.4 shows that the energy consumption of a CPU depends on the frequency at which an application is executed. Hence, the objective is to obtain the optimal CPU frequency at which the energy consumption of the CPU is minimized while the application still completes within its deadline. From the plot of energy consumption in Fig. B.1, we can observe the existence of a local minimum where the energy consumption is lowest. To identify this local minimum, we differentiate the energy consumption of an application j on a CPU at data center i with respect to the operating CPU frequency f_{ij}:

E^c_{ij} = (\beta_i + \alpha_i f_{ij}^3) \, n_j e_j \left( \gamma^{cpu}_i \frac{f_{max_i}}{f_{ij}} + 1 - \gamma^{cpu}_i \right) \quad (B.1)

\frac{\partial E^c_{ij}}{\partial f_{ij}} = n_j e_j \left[ 3 \alpha_i f_{ij}^2 \left( \gamma^{cpu}_i \frac{f_{max_i}}{f_{ij}} + 1 - \gamma^{cpu}_i \right) - (\beta_i + \alpha_i f_{ij}^3) \frac{f_{max_i} \gamma^{cpu}_i}{f_{ij}^2} \right] \quad (B.2)

For the local minimum, \frac{\partial E^c_{ij}}{\partial f_{ij}} = 0 \quad (B.3)

\Rightarrow 3 \alpha_i f_{ij}^2 \left( \gamma^{cpu}_i \frac{f_{max_i}}{f_{ij}} + 1 - \gamma^{cpu}_i \right) - (\beta_i + \alpha_i f_{ij}^3) \frac{f_{max_i} \gamma^{cpu}_i}{f_{ij}^2} = 0. \quad (B.4)

[Fig. B.1. Energy consumption vs. CPU frequency.]

Since the local minimum clearly exists in Fig. B.1, at least one root of the above polynomial lies in the range [0, f_{max_i}]. When \gamma^{cpu}_i = 1, Eq. (B.4) reduces to:

-\beta_i f_{ij}^{-2} + 2 \alpha_i f_{ij} = 0, \quad (B.5)

whose positive root is f_{ij} = (\beta_i / (2 \alpha_i))^{1/3}. Many previous works [20,28] have chosen a fixed \gamma^{cpu} value to compute the energy usage of CPUs. Likewise, in this paper, we assume a fixed \gamma^{cpu}_i = 1 to understand the worst-case scenario of CPU energy usage. We can then pre-compute the local minimum (from static variables such as the CPU power efficiency) before starting the meta-scheduling algorithm.
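A minimal sketch of this pre-computation follows, for the worst case \gamma^{cpu}_i = 1 above and with hypothetical CPU constants; the clamping to the CPU's supported frequency range reflects how a scheduler would have to use the result.

```python
# Pre-computing the energy-optimal CPU frequency for the worst case
# gamma_cpu = 1 (Appendix B): solving -beta/f**2 + 2*alpha*f = 0 gives
# f_opt = (beta / (2 * alpha)) ** (1/3). Constants here are hypothetical.
def optimal_frequency(beta, alpha, f_min, f_max):
    f_opt = (beta / (2.0 * alpha)) ** (1.0 / 3.0)
    return min(max(f_opt, f_min), f_max)  # clamp to the supported range

beta, alpha = 60.0, 10.0   # W and W/GHz^3 for one CPU (assumed values)
f = optimal_frequency(beta, alpha, f_min=0.8, f_max=2.0)
print(f)  # ~1.44 GHz: below f_max, so DVS saves energy in this case

# A scheduler would still raise f toward f_max whenever running at f_opt
# would miss the application's deadline, as the evaluation section notes.
```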
References

[1] Amazon, Amazon Elastic Compute Cloud (EC2), Aug. amazon.com/ec2/.
[2] Alpiron, Alpiron Suite.
[3] C. Belady, In the data center, power and cooling costs more than the IT equipment it supports, Electronics Cooling 13 (1) (2007) 24.
[4] F. Berman, H. Casanova, A. Chien, K. Cooper, H. Dail, A. Dasgupta, W. Deng, J. Dongarra, L. Johnsson, K. Kennedy, C. Koelbel, B. Liu, X. Liu, A. Mandal, G. Marin, M. Mazina, J. Mellor-Crummey, C. Mendes, A. Olugbile, J.M. Patel, D. Reed, Z. Shi, O. Sievert, H. Xia, A. YarKhan, New grid scheduling and rescheduling methods in the GrADS project, International Journal of Parallel Programming 33 (2) (2005).
[5] R. Bianchini, R. Rajamony, Power and energy management for server systems, Computer 37 (11) (2004).
[6] D. Bradley, R. Harper, S. Hunter, Workload-based power management for parallel computer systems, IBM Journal of Research and Development 47 (5) (2003).
[7] T. Braun, H. Siegel, N. Beck, L. Boloni, M. Maheswaran, A. Reuther, J. Robertson, M. Theys, B. Yao, D. Hensgen, et al., A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems, Journal of Parallel and Distributed Computing 61 (6) (2001).
[8] T. Burd, R. Brodersen, Energy efficient CMOS microprocessor design, in: Proceedings of the 28th Hawaii International Conference on System Sciences, HICSS 95, vol. 1060.
[9] J. Burge, P. Ranganathan, J.L. Wiener, Cost-aware scheduling for heterogeneous enterprise machines (CASH'EM), Technical Report HPL, HP Labs, Palo Alto, Apr.
[10] R. Buyya, C.S. Yeo, S. Venugopal, J. Broberg, I. Brandic, Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility, Future Generation Computer Systems 25 (6) (2009).
[11] J.S. Chase, D.C. Anderson, P.N. Thakar, A.M. Vahdat, R.P. Doyle, Managing energy and server resources in hosting centers, SIGOPS Operating Systems Review 35 (5) (2001).
[12] Y. Chen, A. Das, W. Qin, A. Sivasubramaniam, Q. Wang, N. Gautam, Managing server energy and operational costs in hosting centers, ACM SIGMETRICS Performance Evaluation Review 33 (1) (2005).
[13] M. Chin, Desktop CPU power survey, SilentPCReview.com.
[14] US Department of Energy, Voluntary reporting of greenhouse gases: Appendix F. Electricity emission factors, Appendix20F_r.pdf.
[15] K. Corrigan, A. Shah, C. Patel, Estimating environmental costs, in: Proceedings of the 1st USENIX Workshop on Sustainable Information Technology, San Jose, CA, USA, 2009.

[16] US Department of Energy, US Energy Information Administration (EIA) report.
[17] A. Elyada, R. Ginosar, U. Weiser, Low-complexity policies for energy-performance tradeoff in chip-multi-processors, IEEE Transactions on Very Large Scale Integration (VLSI) Systems 16 (9) (2008).
[18] United States Environmental Protection Agency, Letter to enterprise server manufacturer or other interested stakeholder, Dec. energystar.gov/ia/products/downloads/server_announcement.pdf.
[19] United States Environmental Protection Agency, Report to congress on server and data center energy efficiency, Public Law, Aug. Datacenter_Report_Congress_Final1.pdf.
[20] M. Etinski, J. Corbalan, J. Labarta, M. Valero, A. Veidenbaum, Power-aware load balancing of large scale MPI applications, in: Proceedings of the 2009 IEEE International Symposium on Parallel & Distributed Processing, Rome, Italy.
[21] EUbusiness, Proposed EU regulation to reduce CO2 emissions from cars, Dec.
[22] X. Fan, W.-D. Weber, L.A. Barroso, Power provisioning for a warehouse-sized computer, in: Proceedings of the 34th Annual International Symposium on Computer Architecture, ACM, New York, NY, USA, 2007.
[23] D. Feitelson, Parallel workloads archive, Aug. labs/parallel/workload.
[24] D.G. Feitelson, L. Rudolph, U. Schwiegelshohn, K.C. Sevcik, P. Wong, Theory and practice in parallel job scheduling, in: Proceedings of the 1997 International Workshop on Job Scheduling Strategies for Parallel Processing, London, UK.
[25] W. Feng, K. Cameron, The Green500 list: encouraging sustainable supercomputing, Computer (2007).
[26] X. Feng, R. Ge, K.W. Cameron, Power and energy profiling of scientific applications on distributed systems, in: Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium, Los Alamitos, CA, USA.
[27] W. Feng, T. Scogland, The Green500 list: year one, in: Proceedings of the 2009 IEEE International Symposium on Parallel & Distributed Processing, Rome, Italy.
[28] V. Freeh, D. Lowenthal, F. Pan, N. Kappiah, R. Springer, B. Rountree, M. Femal, Analyzing the energy-time trade-off in high-performance computing applications, IEEE Transactions on Parallel and Distributed Systems 18 (6) (2007) 835.
[29] A. Gandhi, M. Harchol-Balter, R. Das, C. Lefurgy, Optimal power allocation in server farms, in: Proceedings of the 11th International Joint Conference on Measurement and Modeling of Computer Systems, Seattle, WA, USA.
[30] Gartner, Gartner estimates ICT industry accounts for 2 percent of global CO2 emissions, Apr.
[31] M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman, San Francisco, USA.
[32] S. Greenberg, E. Mills, B. Tschudi, P. Rumsey, B. Myatt, Best practices for data centers: results from benchmarking 22 data centers, in: Proceedings of the 2006 ACEEE Summer Study on Energy Efficiency in Buildings, Pacific Grove, USA.
[33] Tom's Hardware, CPU performance charts.
[34] C. Hsu, U. Kremer, The design, implementation, and evaluation of a compiler algorithm for CPU energy reduction, in: Proceedings of the ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation, Sweden.
[35] O. Ibarra, C. Kim, Heuristic algorithms for scheduling independent tasks on nonidentical processors, Journal of the ACM 24 (2) (1977).
[36] D. Irwin, L. Grit, J. Chase, Balancing risk and reward in a market-based task service, in: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing, Honolulu, USA.
[37] S.-H. Jang, V.E. Taylor, X. Wu, M. Prajugo, E. Deelman, G. Mehta, K. Vahi, Performance prediction-based versus load-based site selection: quantifying the difference, in: M.J. Oudshoorn, S. Rajasekaran (Eds.), ISCA PDCS, ISCA, 2005.
[38] K. Kim, R. Buyya, J. Kim, Power aware scheduling of bag-of-tasks applications with deadline constraints on DVS-enabled clusters, in: Proceedings of the Seventh IEEE International Symposium on Cluster Computing and the Grid, Rio de Janeiro, Brazil.
[39] B. Lawson, E. Smirni, Power-aware resource allocation in high-end systems via online simulation, in: Proceedings of the 19th Annual International Conference on Supercomputing, Cambridge, USA.
[40] J. Markoff, S. Hansell, Hiding in plain sight, Google seeks more power.
[41] S. Martello, P. Toth, An algorithm for the generalized assignment problem, Operational Research 81 (1981).
[42] J. Moore, J. Chase, P. Ranganathan, R. Sharma, Making scheduling "cool": temperature-aware workload placement in data centers, in: Proceedings of the 2005 Annual Conference on USENIX Annual Technical Conference, Anaheim, CA.
[43] A.W. Mu'alem, D.G. Feitelson, Utilization, predictability, workloads, and user runtime estimates in scheduling the IBM SP2 with backfilling, IEEE Transactions on Parallel and Distributed Systems 12 (6) (2001).
[44] G.R. Nudd, D.J. Kerbyson, E. Papaefstathiou, S.C. Perry, J.S. Harper, D.V. Wilcox, PACE: a toolset for the performance prediction of parallel and distributed systems, International Journal of High Performance Computing Applications 14 (3) (2000).
[45] A. Orgerie, L. Lefèvre, J. Gelas, Save watts in your grid: green strategies for energy-aware framework in large scale distributed systems, in: Proceedings of the IEEE International Conference on Parallel and Distributed Systems, Melbourne, Australia.
[46] C. Patel, R. Sharma, C. Bash, M. Beitelmal, Energy flow in the information technology stack: coefficient of performance of the ensemble and its impact on the total cost of ownership, HP Labs External Technical Report, HPL.
[47] C. Patel, R. Sharma, C. Bash, S. Graupner, Energy aware grid: global workload placement based on energy efficiency, Technical Report HPL, HP Labs, Palo Alto, Nov.
[48] P. Pillai, K. Shin, Real-time dynamic voltage scaling for low-power embedded operating systems, in: Proceedings of the 18th ACM Symposium on Operating Systems Principles, Banff, Canada.
[49] R. Porter, Mechanism design for online real-time scheduling, in: Proceedings of the 5th ACM Conference on Electronic Commerce, New York, USA.
[50] S. Rivoire, M.A. Shah, P. Ranganathan, C. Kozyrakis, JouleSort: a balanced energy-efficiency benchmark, in: SIGMOD '07: Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data, ACM, New York, NY, USA, 2007.
[51] C. Rusu, A. Ferreira, C. Scordino, A. Watson, Energy-efficient real-time heterogeneous server clusters, in: Proceedings of the 12th IEEE Real-Time and Embedded Technology and Applications Symposium, Stockholm, Sweden.
[52] V. Salapura, et al., Power and performance optimization at the system level, in: Proceedings of the 2nd Conference on Computing Frontiers, Ischia, Italy.
[53] H.A. Sanjay, S. Vadhiyar, Performance modeling of parallel applications for grid scheduling, Journal of Parallel and Distributed Computing 68 (8) (2008).
[54] G. Singh, C. Kesselman, E. Deelman, A provisioning model and its comparison with best-effort for performance-cost optimization in grids, in: Proceedings of the 16th International Symposium on High Performance Distributed Computing, California, USA.
[55] W. Smith, I. Foster, V. Taylor, Predicting application run times using historical information, in: D.G. Feitelson, L. Rudolph (Eds.), Job Scheduling Strategies for Parallel Processing, Lecture Notes in Computer Science, vol. 1459, Springer Verlag, 1998.
[56] Q. Tang, S.K.S. Gupta, D. Stanzione, P. Cayton, Thermal-aware task scheduling to minimize energy usage of blade server based datacenters, in: Proceedings of the 2nd IEEE International Symposium on Dependable, Autonomic and Secure Computing, DASC 2006, IEEE Computer Society, Los Alamitos, CA, USA.
[57] G. Tesauro, et al., Managing power consumption and performance of computing systems using reinforcement learning, in: Proceedings of the 21st Annual Conference on Neural Information Processing Systems, Vancouver, Canada.
[58] TOP500 Supercomputers, Supercomputer's Application Area Share.
[59] G. Verdun, D. Azevedo, H. Barrass, S. Berard, M. Bramfitt, T. Cader, T. Darby, C. Long, N. Gruendler, B. MacArthur, et al., The Green Grid metrics: data center infrastructure efficiency (DCiE) detailed analysis, The Green Grid.
[60] L. Wang, Y. Lu, Efficient power management of heterogeneous soft real-time clusters, in: Proceedings of the 2008 Real-Time Systems Symposium, Barcelona, Spain.

Saurabh Kumar Garg is a Ph.D. student at the Cloud Computing and Distributed Systems (CLOUDS) Laboratory, University of Melbourne, Australia. At the University of Melbourne, he has been awarded various special scholarships for his Ph.D. candidature. He completed his 5-year Integrated Master of Technology in Mathematics and Computing at the Indian Institute of Technology (IIT) Delhi, India. His research interests include Resource Management, Scheduling, Utility and Grid Computing, Cloud Computing, Green Computing, Wireless Networks, and Ad hoc Networks.

Chee Shin Yeo is a research engineer at the Institute of High Performance Computing (IHPC), Singapore. He completed his Ph.D. at the University of Melbourne, Australia. His research interests include parallel and distributed computing, services and utility computing, energy-efficient computing, and market-based resource allocation.

Arun Anandasivam is a research assistant and Ph.D. student at the Institute of Information Systems and Management at Universität Karlsruhe. His research work comprises pricing policies and decision frameworks for grid and Cloud computing providers.

Rajkumar Buyya is a Professor of Computer Science and Software Engineering and Director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the University of Melbourne, Australia. He is also serving as the founding CEO of Manjrasoft Pty Ltd., a spin-off company of the University, commercialising innovations originating from the CLOUDS Lab. He has pioneered the Economic Paradigm for Service-Oriented Grid computing and demonstrated its utility through his contributions to the conceptualisation, design and development of Cloud and Grid technologies such as Aneka, Alchemi, Nimrod-G and Gridbus that power the emerging eScience and eBusiness applications.


More information

Traffic State Estimation in the Traffic Management Center of Berlin

Traffic State Estimation in the Traffic Management Center of Berlin Traffc State Estmaton n the Traffc Management Center of Berln Authors: Peter Vortsch, PTV AG, Stumpfstrasse, D-763 Karlsruhe, Germany phone ++49/72/965/35, emal peter.vortsch@ptv.de Peter Möhl, PTV AG,

More information

Application of Multi-Agents for Fault Detection and Reconfiguration of Power Distribution Systems

Application of Multi-Agents for Fault Detection and Reconfiguration of Power Distribution Systems 1 Applcaton of Mult-Agents for Fault Detecton and Reconfguraton of Power Dstrbuton Systems K. Nareshkumar, Member, IEEE, M. A. Choudhry, Senor Member, IEEE, J. La, A. Felach, Senor Member, IEEE Abstract--The

More information

Sangam - Efficient Cellular-WiFi CDN-P2P Group Framework for File Sharing Service

Sangam - Efficient Cellular-WiFi CDN-P2P Group Framework for File Sharing Service Sangam - Effcent Cellular-WF CDN-P2P Group Framework for Fle Sharng Servce Anjal Srdhar Unversty of Illnos, Urbana-Champagn Urbana, USA srdhar3@llnos.edu Klara Nahrstedt Unversty of Illnos, Urbana-Champagn

More information