Managing Server Energy and Operational Costs in Hosting Centers

Yiyu Chen, Dept. of IE, Penn State University, University Park, PA, yzc107@psu.edu
Anand Sivasubramaniam, Dept. of CSE, Penn State University, University Park, PA, anand@cse.psu.edu
Amitayu Das, Dept. of CSE, Penn State University, University Park, PA, adas@cse.psu.edu
Qian Wang, Dept. of ME, Penn State University, University Park, PA, quw6@psu.edu
Wubi Qin, Dept. of ME, Penn State University, University Park, PA, wqin@psu.edu
Natarajan Gautam, Dept. of IE, Penn State University, University Park, PA, ngautam@psu.edu

ABSTRACT
The growing cost of tuning and managing computer systems is leading to the out-sourcing of commercial services to hosting centers. These centers provision thousands of dense servers within a relatively small real-estate in order to host the applications/services of different customers, who may have been assured a level of service by a service-level agreement (SLA). Power consumption of these servers is becoming a serious concern in the design and operation of hosting centers. The effects of high power consumption manifest not only in the costs spent on designing effective cooling systems to ward off the generated heat, but also in the cost of electricity consumption itself. It is crucial to deploy power management strategies in these hosting centers to lower these costs and enhance profitability. At the same time, power management techniques that shut down servers and/or modulate their operational speed can impact the ability of the hosting center to meet SLAs. In addition, repeated on-off cycles can increase the wear-and-tear of server components, incurring costs for their procurement and replacement. This paper presents a formalism for this problem, and proposes three new online solution strategies based on steady-state queuing analysis, feedback control theory, and a hybrid mechanism borrowing ideas from these two.
Using real web server traces, we show that these solutions are more adaptive to workload behavior when performing server provisioning and speed control than earlier heuristics at minimizing operational costs while meeting the SLAs.

Categories and Subject Descriptors: I.6.5 [Model Development]: Modeling methodologies; D.2.8 [Metrics]: Performance measures

General Terms: Design, Performance

Keywords: Energy Management, Performance Modeling, Feedback, Server Provisioning

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SIGMETRICS'05, June 6-10, 2005, Banff, Alberta, Canada. Copyright 2005 ACM /05/ $5.00.

1. INTRODUCTION
The motivation for this paper stems from two increasingly important trends. On the one hand, the cost and complexity of system tuning and management is leading numerous enterprises to offload their IT demands to hosting/data centers. These hosting centers are thus making a considerable investment in procuring and operating servers to take on these demanding loads. The other trend is the growing importance of the energy/power consumption of these servers at the hosting centers, in terms of the electricity cost to keep them powered on, as well as the design of extensive cooling systems to keep their operating temperatures within thermal stability limits for server components. A careful balance is needed at these centers to provision the right number of resources to the right service/application being hosted, at the right time, in order to reduce their operational cost while still meeting any performance-based service level agreement (SLA) decided upon earlier.
Focusing specifically on the energy consumption problem, this paper presents three main techniques - a pro-active one, a reactive one, and a hybrid between the two - to dynamically optimize operating costs while meeting performance-based SLAs. The growth of network-based commercial services, together with the off-loading of IT services, is leading to the growth of hosting/data centers that need to house several applications/services. These centers provision thousands of servers, and data storage devices, to run these applications, earning revenue from the application providers (customers) in return - sometimes referred to as on-demand computing. Over-provisioning the service capacity to allow for worst case load conditions is economically not very attractive. Consequently, much of the prior work (e.g. [9, 31, 36]) has looked at finding the right capacity, and distributing this capacity between the different applications based on their SLAs. However, there could still be time periods during the execution when the overall server capacity is much higher than the current demands across the applications. It should be noted that the hosting center is still incurring operational costs, such as electricity, during such periods.

Energy consumption of these hosting/data servers is becoming a serious concern. As several recent studies have pointed out [11, 12, 22, 23, 29], data centers can consume several megawatts. It is not just the cost of powering these servers; we also need to include the cost of deploying cooling systems (which in turn consume power) to ensure stable operation [27]. We are already at power densities of around 100 watts per square foot, and the cooling problem is expected to get worse with shrinking form factors. Finally, one also needs to be concerned with the environmental issues of generating and delivering such high electric power capacities. Statically provisioning the number of servers for a given cost (say, of electricity) and performance SLA can be conservative or miss out on opportunities for savings, since workloads typically vary over time. Dynamic power management is thus very important when deploying server farms/clusters, and hardware has started providing assistance to achieve this. For instance, in addition to powering down a server, most processors today allow dynamic voltage/frequency scaling (DVS), where the frequency (and voltage) can be lowered to produce much larger savings in power consumption than what one loses in performance. While this may not provide as much savings as shutting down a server completely, the advantage is that the server can still service requests (albeit at a lower frequency), and does not incur as high costs (whether in time for transitioning between frequencies or in terms of the wear-and-tear associated with repeated server on-off cycles). Much of the earlier work [11, 29] in this area used server turn off/on mechanisms for power management. The only other work that considered a combination of these two mechanisms (turn off and DVS) was done at IBM [15]. However, this work did not (nor did the others) consider the impact of server off/on cycles on the long term reliability of server components due to wear-and-tear. Note that failing components incur additional costs (hardware and personnel) for procurement and replacement.
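To make the DVS argument above concrete: dynamic power falls roughly with the cube of the frequency while service capacity falls only linearly, so a modest slowdown buys a disproportionate power saving. A small illustrative calculation in normalized units (not measured values):

```python
# Dynamic power scales ~f^3, throughput ~f (both normalized to f = 1.0).
# Illustrative only; real savings depend on the fixed (non-CPU) power too.
for f in [1.0, 0.8, 0.6]:
    dyn_power = f ** 3          # relative dynamic power
    throughput = f              # relative service capacity
    print(f"f={f:.1f}: dynamic power x{dyn_power:.3f}, capacity x{throughput:.1f}")
```

For example, slowing from f = 1.0 to f = 0.8 roughly halves the dynamic power (0.8^3 = 0.512) while giving up only 20% of the capacity.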
Further, none of the prior studies have really considered the goal of meeting a response time requirement (SLA). Rather, they have tried to optimize energy first, at a slight degradation in response time. For a hosting center, it is more important to meet the required SLA in terms of response time (since that is the revenue maker), and reducing the energy consumption while meeting this goal should be a desirable secondary objective (rather than vice-versa). As our results will show, not treating the performance-based SLA as a first class citizen in the dynamic optimization causes these other schemes to fall short, even though they may provide more energy savings. Finally, most of the prior work has just looked at optimizing the energy in a single application setting, assuming the existence of a higher level server provisioning mechanism across applications. Our framework, on the other hand, integrates server provisioning across applications (which automatically tells us how many servers to turn on/off) with energy management within an application, in an elegant fashion, while still allowing different options for these two steps. This paper presents the first formalization of this dynamic optimization problem of server provisioning and DVS control for multiple applications, which includes a response-time SLA, and the costs (both power and wear-and-tear) of server shutdowns. We present three new online mechanisms for this problem. The first is pro-active in that it predicts workload behavior for the near future, and uses this information in a stochastic queuing model to perform control. The second is reactive in that it uses feedback control to achieve the same goals. The third is a hybrid scheme where we use predictive information from the first for server provisioning, and feedback control from the latter for DVS. Using real web-server workloads in a multi-application setting, we show that all these schemes can provide good energy savings without ignoring the SLA (unlike the only other related mechanism [15] we can compare against, which is never able to meet the SLA).
2. SYSTEM MODEL
Our hosting center model contains a number of identical servers, called a server cluster, which are all equally capable of running any application. While most of our system models and optimization mechanisms are applicable to a diverse spectrum of application domains (from those in the scientific domain to those in the commercial world), the implementation and evaluations are mainly targeted towards server based applications (web servers in particular) that service client requests. Each HTTP request is directed to one of the servers running the application, which processes the request in a time (service time) that is related to a request parameter (e.g. file size). Consequently, one can impact the throughput/response time for these requests by modulating the number of servers allocated to an application at any time. Dense blade systems are being increasingly deployed in hosting centers because of their smaller form factors (making it easier to rack them up) and their power consumption characteristics. Such systems provide a fairly powerful (server-class) processor such as an Intel Xeon, 1-2 GB of memory, and perhaps a small SCSI disk. There can be several hundreds/thousands of these blade servers in a hosting center, all consuming electric power depending on whether they are turned on, and if so, at what frequency. From the viewpoint of the customers (applications) of the hosting center, it is important to have at least as many servers as necessary for an application in order to meet a desired level of performance (SLA) for its client requests. In this exercise, we use a simple SLA that tries to bound the average response time for requests; a more extensive SLA could have been used as well. From the viewpoint of the hosting center, the goal is to consume as little electrical energy as possible while still ensuring that the individual applications are able to meet their end-user SLAs for client requests. A schematic of the system environment is given in Figure 1.
Figure 1: System Model

The workload imposed on a server's resources when running an application expends electric power, thereby incurring a cost in its operation. The important resources of concern include semiconductor components such as the server CPU, caches, DRAMs, and system interconnects, as well as electro-mechanical components such as disks. There are two mechanisms available today for managing the power consumption of these systems. One can temporarily power down the blade, which ensures that no electricity flows to any component of this server. While this can provide the most power savings, the downside is that this blade is not available to serve any requests. Bringing up

the machine to serve requests would incur additional costs, in terms of (i) the time and energy expended to boot up the machine, during which requests cannot be served, and (ii) the increased wear-and-tear of components (the disks in particular) that can reduce the mean-time between failures (MTBF), leading to additional costs for replacements and personnel. Another common option for power management is dynamic voltage/frequency scaling (DVS). As will be shown in Section 4, the dynamic power consumed in circuits is proportional to the cube of the operating clock frequency. Slowing down the clock allows the scaling down of the supply voltages for the circuits, resulting in power savings. Even though not all server components may export software interfaces to perform DVS, CPUs [2] - even those in the server market - are starting to allow such dynamic control. With the CPUs consuming the bulk of the power in a blade server (note that an Intel Xeon consumes between ... watts at full speed, while the other blade components including the disk can add about ... watts), DVS control in our environment can provide substantial power savings. Our framework allows the employment of both these options towards enhancing energy savings, without compromising on end-user SLAs. In our model, when a machine is switched off, it consumes no power. When the machine is on, it can operate at a number of discrete frequencies f1 < f2 < ... < fl, where the relationship between the power consumption and these operating frequencies is of the form P = P_fixed + P_f * f^3 (see Section 4), so that we capture the cubic relationship with the CPU frequency while still accounting for the power consumption of other components that do not scale with the frequency. DVS implementation for a server cluster of these blades can be broadly classified based on whether (i) the control is completely decentralized, where each node independently makes its scaling choice purely on local information, or (ii) there is a coordinated (perhaps centralized) control mechanism that regulates the operation of each node.
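The per-node power model just stated, P = P_fixed + P_f * f^3 with an off machine drawing nothing, can be written down directly. The constants and frequency levels below are hypothetical placeholders, not measurements from this paper:

```python
# P = P_fixed + P_f * f^3 for one powered-on blade; an off machine draws nothing.
# P_FIXED, P_F, and FREQS are illustrative assumptions.
P_FIXED = 60.0            # watts: components that do not scale with frequency
P_F = 15.0                # watts contributed by the cubic CPU term at f = 1.0
FREQS = [0.6, 0.8, 1.0]   # normalized discrete levels f1 < f2 < f3

def node_power(f):
    """Power (watts) of one powered-on blade at normalized frequency f."""
    return P_FIXED + P_F * f ** 3

for f in FREQS:
    print(f"f={f:.1f}: {node_power(f):.2f} W")
print("off: 0.00 W")
```

Note how the fixed term caps what DVS alone can save: only the cubic portion shrinks at lower f, whereas powering the blade off removes the fixed term as well.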
Though decentralized DVS control is attractive from the implementation viewpoint, previous research [15] has shown that a coordinated voltage scaling approach can provide substantially higher savings. In our system model, we use a coordinated DVS strategy where a controller is assigned to each application and periodically assigns the operating frequency/voltage for the servers running that application. We perform this at an application granularity since the load for each application, and the corresponding SLA requirements, can be quite different. Further, in our model, each server continues to be entirely devoted to a single application until the time of server reallocation. With this model of the hosting center, the two solution steps to the problem at hand are to (i) perform server provisioning to decide how many servers to allocate to each application, and (ii) decide what the operating frequency should be for the servers allocated to each application. These steps need to be done periodically to account for time-varying behavior. Note that the first step may require bringing up/down servers and/or re-assigning servers. The time cost for such migrations of applications (T_migrate) and/or reboots (T_reboot) is incorporated in our model. Further, there is a long term impact of server reboots due to the wear-and-tear of components such as the disk, and we include their dollar cost.

3. RELATED WORK
Resource Provisioning in Hosting/Data Centers: Since many of the services/applications hosted at these centers can have stringent service-level agreements to be met, there have been several investigations into resource capacity planning and dynamic provisioning issues for QoS control (e.g. [5, 31, 34, 38]). As many studies have pointed out, over-provisioning of resources can be economically unattractive, and it is important to accommodate transient overload situations (which are quite common for these services) with the existing resources [9]. Dynamic load monitoring (e.g. [30]), transient-load based optimization (e.g. [9, 36]) and feedback-based techniques (e.g. [3, 18, 26]) are being examined to handle these situations.
However, these studies have mainly focused on performance (and revenue based on an SLA), and have not examined the power consumption issues.

Energy Management in Mobile Devices: Energy management has traditionally been considered important for mobile and resource-constrained environments that are limited by battery capacities. One common technique is to shut off components (e.g. the disk [21], [24]) during periods of inactivity. The influence that the frequency (which allows the voltage to be scaled down) has on the power consumption has been exploited to implement Dynamic Voltage Scaling (DVS) mechanisms for energy management of integrated circuits [10], [32]. Consequently, many processors - not only those for the mobile market, but even those in the server space [2] - are today starting to offer software interfaces for DVS. There have been numerous studies on exploiting DVS for power savings without compromising on performance (e.g. [17], [19], [25], [28], [37]). However, all these studies have been for mobile devices that have very different workload patterns than servers, and/or for embedded environments where soft/hard real-time constraints need to be met within a power budget.

Energy Management in Servers and Data Centers: It is only recently that energy management for the server clusters found in data/hosting centers has gained much attention [8, 12, 16, 20, 23, 29]. Amongst these early studies, [11, 12, 29] show that resource allocation and energy management can be intertwined by developing techniques for shutting down servers that are not in use, or have low load (by offloading their duties to other servers). A detailed study of the power profile of a real system by researchers at IBM Austin [6] points out that the CPU is the largest consuming component for a typical web server configuration. Subsequent studies [15, 33] have looked at optimizing this power, by monitoring evolving load and performance metrics to dynamically modulate CPU frequencies.
A categorization of the most closely related investigations is summarized in Table 1, in terms of whether (i) the schemes consider just server shutdowns or also allow DVS, (ii) they focus on energy management of just one server (which can be thought of as completely independent management of a server without regard to the overall workload across the system), (iii) they consider different applications to be concurrently executing across these servers with different loads, (iv) they try to meet SLAs imposed by these applications, and (v) they incorporate the cost of rebooting servers (in terms of both time and MTBF). Of the four related studies shown here, only two [15, 33] have considered the possibility of DVS for server applications. Of these two, only the IBM work [15] has examined the issues in the context of a server cluster, with the other focusing on a single machine/server setting. However, even the IBM work has not considered the issues in the context of multiple applications being hosted on these server clusters (i.e. the server provisioning problem in conjunction with energy management), nor have they considered the long term impact of machine reboots on operating costs.

               DVS?   Multiple servers?   Multiple applns.?   SLA?   Reboot cost?
Duke [11]      No     Yes(5)              Yes(2)              Yes    Yes(Time)
Rutgers [29]   No     Yes(8)              No                  No     Yes(Time)
Virginia [33]  Yes    No                  No                  No     No
IBM [15]       Yes    Yes(10)             No                  No     No
Ours           Yes    Yes                 Yes                 Yes    Yes

Table 1: Comparing the most related studies with this work. The numbers given in parentheses indicate the number of servers/applications used in the study. "Time" for the reboot cost indicates that only the time overhead of rebooting systems was considered (and not the wear-and-tear).

4. PROBLEM FORMULATION
We model our hosting center to have M identical servers, all equally capable of running any application. Our model allows a maximum of N different applications to be hosted at any time across these servers, with each application i being allocated m_i servers at any time (sum_i m_i <= M). When a server is operational, it can run between a maximum frequency f_max (consuming the highest power) and a minimum frequency f_min, with a range of discrete operating frequency levels in-between. The service time of a request that comes to an application is impacted (linearly) by the frequency, and the overall service capacity allocated to an application is also directly related to the number of servers allotted to it. A server CPU operating at a frequency f consumes dynamic power (P) that is proportional to V^2 f, where V is the operating voltage [10]. Further, the underlying circuit design inherently imposes a relationship between the operating voltage and the circuit frequency. For instance, as we lower the voltage (to reduce power), the frequency needs to be reduced in order to allow the circuits to stabilize. The common rule of thumb for this relationship is V ∝ f. Since the power consumption of all other components (except the CPU) is independent of the frequency, we have the following simple model of the power consumed by one cluster node running at frequency f: P = P_fixed + P_f * f^3. This cubic relationship between operating frequency and power consumption has been used in other related work as well [15].
We can then calculate the energy consumption of the M servers at the hosting center operating at a constant frequency f, over time t, as

E = ∫_t M P dt = M (P_fixed + P_f f^3) t.

Since the real workload is expected to be time varying, we need to control M and f over time to manage the energy consumption while adhering to any performance goals. Let us say that these are controlled periodically at a granularity of time t.

Electricity Cost of Operation: Over a duration of Z such time units of duration t, the overall energy consumption is

sum_{z=1}^{Z} sum_{i=1}^{N} m_i(z) (P_fixed + P_f f_i(z)^3) t,

where m_i(z) is the number of servers allocated to application i during the z-th time period, which are all running at a frequency f_i(z). If K_$ is the electricity charge expressed as dollars per unit of energy consumption (e.g. kilowatt hour), then the total electricity cost (in dollars) for operation of the hosting center is

sum_{z=1}^{Z} sum_{i=1}^{N} K_$ m_i(z) (P_fixed + P_f f_i(z)^3) t.

Cost of Server Turn-ons: One important point to consider is the impact of turning servers on/off. While frequency scaling is itself relatively simple (taking only a few cycles), and may not significantly affect the long term reliability of the system, the effects of turning servers off/on should not be overlooked. Machine reboots can last several seconds/minutes, consuming power in the process. Further, it is well understood that server components (disks in particular) can be susceptible to machine start-stop cycles [14], thus reducing the MTBF when we perform server shutdowns. If B_o is the dollar cost of a single server turn-on cycle, then the total amount of dollars expended by any mechanism using this approach is

sum_{z=1}^{Z} B_o [ sum_{i=1}^{N} m_i(z) - sum_{i=1}^{N} m_i(z-1) ]^+.

Note that [x]^+ = x when x >= 0 and [x]^+ = 0 otherwise. B_o itself needs to account for the dollar energy cost of bringing up the machine (P_max T_reboot K_$), together with the dollar cost incurred by a smaller MTBF (denoted C_r), i.e.
B_o = P_max T_reboot K_$ + C_r, where P_max denotes the server power consumption when running at f_max. Putting these together, we get the dollar cost of operation to be

sum_{z=1}^{Z} sum_{i=1}^{N} K_$ m_i(z) (P_fixed + P_f f_i(z)^3) t + B_o sum_{z=1}^{Z} [ sum_{i=1}^{N} m_i(z) - sum_{i=1}^{N} m_i(z-1) ]^+,

which is the objective function to minimize. Note that one could choose to minimize this objective by inordinately slowing down the frequencies, or shutting down a large number of servers. This can result in violating any service level agreements (to meet a response time requirement) made with customers/applications. We use a simple SLA, where the requirement is to meet an average response time target. Consequently, when minimizing the above objective function, we need to obey the following constraints:

W_i <= W̄_i, which is the SLA constraining the average response time of application i to a target W̄_i.

f_i(z) ∈ F = {f1, f2, ..., fl}, where f1 < f2 < ... < fl, which says that a running server has to operate at one of the l discrete frequencies. We use f_min to denote f1 and f_max to denote fl.

sum_{i=1}^{N} m_i(z) <= M, since the total number of allocated servers cannot exceed the total capacity.

5. METHODOLOGY
We next discuss three mechanisms to determine the number of servers (m_i) allocated to each application i and their frequency (f_i) at any instant. Note that all m_i servers of application i run at the same frequency f_i, as explained in Section 2. To accommodate time-varying workload behavior, we assume that every T minutes, servers are allocated for the N applications (i.e. for all i, we determine m_i). Likewise, every t minutes, we determine the frequency f_i for the m_i servers of application i. Although not required for modeling, for logical and implementation reasons we assume that T is an integer multiple of t. Thereby, at the larger time granularity T we perform server allocation/provisioning, and at the smaller time granularity t we tune the servers to different frequencies. The basic premise for doing this at two granularities, with T > t, is that server re-allocation (machine reboots and application migration) is much more time-consuming than changing the frequency.
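The cost objective and constraints formulated in Section 4 are mechanical to evaluate for a candidate schedule. Below is a minimal sketch: all numeric constants are hypothetical, and resp_time is only a placeholder standing in for the real response-time prediction W_i developed later.

```python
# Sketch: evaluate the Section 4 objective and check its constraints for a
# candidate schedule. m[z][i] / f[z][i] give application i's server count and
# frequency in period z. Constants are hypothetical; resp_time is a stand-in.

K_DOLLAR = 1e-4          # $ per watt-period (hypothetical unit price)
P_FIXED, P_F = 60.0, 15.0
B_O = 5.0                # $ per server turn-on (reboot energy + MTBF cost C_r)
T = 1.0                  # control period length
FREQS = [0.6, 0.8, 1.0]  # the discrete set F
M_TOTAL = 10             # total servers M in the center
W_TARGET = [0.5, 1.0]    # per-application SLA targets (hypothetical, seconds)

def plus(x):
    """[x]^+ = x if x >= 0, else 0."""
    return x if x >= 0 else 0.0

def total_cost(m, f):
    """Electricity cost plus turn-on cost over all periods, with m_i(0) = 0."""
    cost, prev_total = 0.0, 0
    for m_z, f_z in zip(m, f):
        cost += sum(K_DOLLAR * mi * (P_FIXED + P_F * fi ** 3) * T
                    for mi, fi in zip(m_z, f_z))
        cost += B_O * plus(sum(m_z) - prev_total)
        prev_total = sum(m_z)
    return cost

def resp_time(i, m, f):
    return 1.0 / (m * f)   # placeholder: shrinks with more servers / higher f

def feasible(m_z, f_z):
    """One period: capacity, discrete-frequency, and SLA constraints."""
    return (sum(m_z) <= M_TOTAL
            and all(fi in FREQS for fi in f_z)
            and all(resp_time(i, mi, fi) <= W_TARGET[i]
                    for i, (mi, fi) in enumerate(zip(m_z, f_z))))
```

The turn-on term charges B_O only when the aggregate server count grows from one period to the next, exactly as in the [x]^+ expression above.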
There are U intervals of length T in the entire duration for which we want to optimize, and each of these (denoted by u = 1 to U) has S intervals of length t (i.e. s = 1 to S). This is pictorially shown

Figure 2: Granularity (T, t) for server allocation and frequency modulation

in Figure 2. We first propose two techniques - non-linear optimization based on steady-state queuing analysis, and a technique based on control theory - to determine f_i and m_i every t and T minutes respectively. Then, we illustrate a hybrid approach integrating these techniques.

5.1 Queuing Theory Based Approach
We can model the system under consideration as a set of N parallel queues denoting the N applications. Client requests for application i arrive one by one into its corresponding infinite capacity queue (queue-i). Setting m_i and f_i by analyzing this queuing system involves three phases. First, we need to predict the request arrival pattern and service time requirements. Next, we can use the predicted arrival and service time information in a queuing model to determine the mean response time for each application. Finally, we can use the response time information in our optimization problem, for which we describe a tractable solution strategy.

Prediction: To perform queuing analysis in each interval, we require estimates of the mean arrival rate of requests (λ), the squared coefficient of variation of request inter-arrival times (C_a^2), the mean file size in bytes (φ), and the squared coefficient of variation of file size (C_s^2) for each application. Notice that all four of the above parameters are time varying as well as stochastic, requiring a prediction of their values at each interval. Historically, researchers have used either self-similar traffic patterns or deterministic time-varying parameters. On one hand, self-similar models are attractive for describing the nature of the traffic and generating synthetic workloads, but they do not lend themselves well to analysis. On the other hand, time-varying models require Poisson assumptions and full knowledge of the time-varying behavior. It is important to note that it is not our goal here to determine the best prediction technique.
However, our contribution is that if the parameters are well predicted, we can approximate the inter-arrival times and file size requirements as independent and identically distributed (i.i.d.) inside each interval and use this conveniently for online optimization. We present a simple prediction technique with the disclaimer that it can surely be improved upon. From one time interval to another, for the mean and standard deviation of inter-arrival times, we use a multiplicative S-ARMA (seasonal autoregressive moving average) model [35]. We use several intervals of training data to tune the S-ARMA model. Then, based on the cumulative average, seasonality effects, as well as the values at a few previous intervals, we identify the parameters for a given interval. For the mean and standard deviation of file size we use a simple decomposed model or Winter's smoothing method [35]. In general, we found that the prediction works reasonably well for arrival rates, but the errors can be higher for file sizes in certain cases. However, as we mentioned, our goal here is not necessarily to derive the best prediction mechanism. Note that the prediction values will be computed dynamically at the beginning of each interval during the actual execution with the real data.

Queuing Analysis: Using the predictions of λ(i), C_a^2(i), φ(i), and C_s^2(i) for a given interval for each application i, we next obtain an expression for the predicted average response time (W_i) in that interval. We approximate queue-i as a G/G/m queue with i.i.d. inter-arrival times and i.i.d. service times. Further, we assume that the time interval is large enough that steady state results can be used (note that this is one of the reasons why results with this technique may not be very good for small time granularities of control, as our evaluations will show). There are several approximations for response time in the literature, of which many are empirical.
In this study, we use the method in Bolch et al. [7], which states that

W_i = φ(i)/(β f) + α_m (φ(i)/(β f)) (1/(1 - ρ)) ((C_a^2(i) + C_s^2(i))/(2m)),   (1)

where β f is the bandwidth of the server in bytes served per second for application i (β is a constant that is calculated by measuring the service time of HTTP requests of different sizes on an actual system), ρ = λ(i) φ(i)/(m β f), and α_m = ρ^((m+1)/2). Notice that W_i in Equation (1) is non-linear with respect to the decision/control variables f and m. Also, we do not need to write down the constraint ρ < 1 explicitly in the optimization problem, as satisfying the SLA constraint automatically ensures this. Note that W_i decreases with respect to both m and f. Therefore the SLA constraint would be binding if f and m were continuous. In the discrete case, however, we can obtain an efficient frontier (i.e. for every f we can find the smallest m that will render the SLA constraint feasible). As there are l levels for f, we will have to consider only l pairs of m and f for each application in each interval, as is exploited below.

Solving the optimization problem: We use subscripts s and u in all the parameters to denote the corresponding S and U intervals as shown in Figure 2. For example, m_i(u) is the number of servers in queue-i in the u-th interval of duration T, and f_i(u, s) is the frequency of a server in queue-i in the s-th interval of duration t for this given u. The optimization problem can then be re-written in terms of the decision variables f_i(u, s) and m_i(u) (assume for all i, m_i(0) = 0) as:

min over f_i(u,s), m_i(u) of

sum_{u=1}^{U} sum_{s=1}^{S} sum_{i=1}^{N} K_$ m_i(u) (P_fixed + P_f f_i(u,s)^3) t + B_o sum_{u=1}^{U} [ sum_{i=1}^{N} m_i(u) - sum_{i=1}^{N} m_i(u-1) ]^+

subject to:

W_i <= W̄_i for all i;
sum_{i=1}^{N} m_i(u) <= M for all u, with m_i(u) a non-negative integer;
f_i ∈ F.

Clearly, the above optimization is non-linear and discrete in terms of the decision variables for both the objective and the constraints. Hence standard optimization techniques are not enough. Since we only consider a finite number of frequency alternatives, it is tempting to do a complete enumeration to determine the optimal solution.
However, a complete enumeration would require comparing on the order of l^S · M^U · N values - and performing these comparisons periodically during execution. Therefore we resort to a heuristic, as explained below.
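The heuristic leans on the efficient-frontier observation above: for each of the l discrete frequencies, find the smallest m that keeps W_i within the SLA. A sketch of Equation (1) and this frontier follows; the α_m exponent is taken as ρ^((m+1)/2) per the reconstruction above, and all parameter values in the usage are assumptions, not measured workload figures.

```python
# Sketch of the response-time approximation (Equation 1) and the "efficient
# frontier": for each discrete frequency f, the smallest m meeting the SLA.
import math

def resp_time(lam, ca2, phi, cs2, m, f, beta):
    """Predicted mean response time W for a G/G/m approximation of one queue."""
    rho = lam * phi / (m * beta * f)       # utilization
    if rho >= 1.0:
        return math.inf                    # unstable: SLA can never be met
    alpha_m = rho ** ((m + 1) / 2)         # assumed form of the alpha_m term
    service = phi / (beta * f)             # mean service time
    return service + alpha_m * service * (1.0 / (1.0 - rho)) * ((ca2 + cs2) / (2 * m))

def frontier(lam, ca2, phi, cs2, beta, freqs, w_target, m_max):
    """For each f, the smallest m with W <= w_target: the l (f, m) pairs."""
    pairs = []
    for f in freqs:
        for m in range(1, m_max + 1):
            if resp_time(lam, ca2, phi, cs2, m, f, beta) <= w_target:
                pairs.append((f, m))
                break                      # smallest feasible m found for this f
    return pairs
```

Frequencies for which no m up to m_max is feasible simply contribute no pair, so the search over (m, f) combinations collapses to at most l candidates per application per interval.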

We revise the prediction model described above dynamically based on the previous observations (for u - 1), and then run the following two steps at the beginning of each u. We explain the steps briefly without going into details. We first consider the case t = T, so that for all intervals s = 1. Under this framework, we write f_i(u, s) = f_i(u) for ease of notation.

Step 1: We first obtain a feasible solution that gives an upper bound on the objective function. We consider only a single interval and optimize the parameters f_i(u) and m_i(u) for the simplified objective below:

min over f_i(u), m_i(u) of sum_{i=1}^{N} K_$ m_i(u) (P_fixed + P_f f_i(u)^3) t + B_o sum_{i=1}^{N} m_i(u)

There are two ways of solving this. One is to assume the decision variables (m_i(u) and f_i(u)) are continuous and use standard non-linear programming techniques. This is especially useful when the problem is scaled to a large number of applications. However, we use a second method where we exploit the monotonicity properties of the objective function and constraints, as well as the fact that the frequencies take only a small number of discrete values. For that purpose, we use estimates of the moments of inter-arrival times and file sizes, across all applications and across all intervals. Then, for each interval u we start by finding the minimum number of servers m_i(u) for each application i so that the constraint W_i(u) <= W̄_i can be satisfied using the highest frequency for the servers. Note that the total number of servers in the hosting center M is large enough that this would guarantee the constraint sum_i m_i(u) <= M to be automatically satisfied - otherwise it implies the SLA has been chosen poorly, since it would have been violated even without any energy management. This solution can be improved by recognizing that the objective is cubic in the frequency, which therefore needs to be reduced. By suitably decreasing f_i(u) and increasing m_i(u), as long as the constraints are satisfied, we can obtain a solution that is close to the optimal solution, i.e.
we are trying to find the number of servers that are needed when they are all operating at the lower frequencies that can give the most power savings. When reducing f_i(u), we select applications in decreasing order of f_i(u) for each interval.

Step 2: We next consider all the intervals together. The actual objective function, i.e. the total cost across the entire execution, is used for this purpose. We use the upper-bound solution from the previous step and work our way through, accepting feasible solutions that reduce the objective function value. If a point in this feasible space improves the objective, we accept that solution and search further in its neighborhood. In our approach we consider one interval at a time (going from the first to the last), and in each interval we select the applications in decreasing order of their frequencies. For each application we compare the number of servers in that interval against those of the next interval. We try to level off the number of servers (to the extent possible) whenever the resulting solution improves. We search greedily, giving importance to the points where the number of servers for this interval is close to the number for the previous interval (to reduce turn-on costs). Thereafter, we tune the frequencies so that the resulting solution remains feasible. Note that T = t is being assumed in the above two steps. To handle the cases where T > t, we first perform the above optimization (assuming t = T and using average values of arrival and file size parameters over every interval) at each of u = 1 to U, to first determine the m_i(u). Subsequently, using the prediction information for each s, we determine the appropriate f_i(u, s) using this pre-determined value of m_i(u).

[Figure 3: Overview of the control-theoretic approach - target response times W_1,...,W_N and measured response times feed a tracking-error computation; the controllers produce aggregate frequencies F_1,...,F_N, from which server allocation yields m_1,...,m_N and f_1,...,f_N for the simulator.]

5.2 Control-Theoretic Approach

An alternative strategy that we propose is based on feedback control theory.
Compared to the previous mechanism, which relies on steady-state analysis of the system, control theory provides a way of addressing the system's transient dynamics using feedback. It allows finer-granularity control of the system, making it more responsive to workload changes. In this approach, we decompose the given problem into two sub-problems:

1. Dynamically determine an aggregate frequency for each application that can meet the response time guarantee when there is a single server per application running at this frequency; the objective for this subproblem is to meet the response time with minimal aggregate capacity.

2. Solve a server allocation problem, which determines the number of servers for each application in order to provide the aggregate frequency obtained from the first subproblem. The objective for this subproblem is to balance the cost of turning on new servers against the energy consumption of running servers at a higher frequency.

Figure 3 shows a schematic of this approach. In each time period t, based on the feedback of the tracking error - the difference between the measured response time and its target value - an aggregate frequency is computed for each application using control theory to provide response time guarantees. The number of servers for each application is then allocated through an on-line optimization to provide the required aggregate frequency. The frequency for each individual server is calculated in terms of the aggregate frequency and the number of running servers. Since we set all servers for each application to run at the same frequency f_i, the aggregate frequency provided for the i-th application is calculated as F_i(u, s) = m_i(u) · f_i(u, s) for u = 1,...,U; s = 1,...,S. We use F_i(u, s) interchangeably with F_i(k), k = 1,...,U·S, when necessary.
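The relation between aggregate and per-server frequency can be sketched in a few lines; the rounding of a per-server frequency back to the discrete levels mirrors the D(·) operator of Eq. (14) later in the section, and the frequency list is the one from Table 2 (function names are illustrative, not from the paper).

```python
import bisect

# Discrete CPU frequency levels (GHz) from Table 2.
FREQS = [1.4, 1.57, 1.74, 1.91, 2.08, 2.25, 2.42, 2.6]

def aggregate(m, f):
    """Aggregate frequency F_i = m_i * f_i of m servers at frequency f."""
    return m * f

def per_server(F, m):
    """Split aggregate capacity over m servers, rounding F/m up to the
    next discrete level (clamped at f_max), as D(.) does in Eq. (14)."""
    idx = bisect.bisect_left(FREQS, F / m)
    return FREQS[min(idx, len(FREQS) - 1)]
```

For example, an aggregate of 8 GHz over 4 servers rounds the per-server 2.0 GHz up to the 2.08 GHz level.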
It should be noted that though the notion of aggregate frequency is based on approximating the response time from multiple servers by the response time from a single server with the same capacity, the feedback of real response times will hopefully make enough adjustments for this approximation. The details of the feedback control block and the on-line optimization block are given next.

5.2.1 Feedback Control of Aggregate Frequency

For the first subproblem, we apply optimal control theory to dynamically determine the aggregate server frequency for each application. The original formulation in Section 4 minimizes operation cost subject to a constraint on response time. In this section, we incorporate the response time SLA into the objective function to meet response time with as low a cost as possible. We modify the objective function as follows,

    Σ_{i=1}^{N} Σ_{k=1}^{U·S} ( R_F F_i^3(k) + R_{W,i} (W_i(k) - W_i)^2 )    (2)

where k is the index for time interval t, and R_F and R_{W,i} are weights whose ratio provides tradeoffs between meeting target response

time and having lower energy cost. The weight parameters R_F and R_{W,i} are chosen to give priority to the SLA. The aggregate frequency F_i is constrained by the total system capacity, i.e.

    Σ_{i=1}^{N} F_i(k) ≤ M · f_max    (3)

for k = 1,...,U·S. The cost function (Eq. 2) together with constraint (Eq. 3) defines a constrained multi-input (F_i) multi-output (W_i) optimal control problem. One way to deal with the constraint (Eq. 3) is to incorporate it into the cost function. In this paper, since we assume that there is enough computation capacity to meet the response time guarantee for all applications (otherwise, the SLA would not have been agreed upon), constraint (Eq. 3) is always satisfied. Consequently, we solve for the aggregate server frequency of each application separately, which leads to solving N independent single-input single-output control design problems. From the optimal control literature, the Linear Quadratic (LQ) regulator [4] seems to be the most appropriate way of solving this problem. However, in a general LQ formulation, the cost function depends on a quadratic term of the control variable. Therefore, we define a new control variable F̃_i = F_i^{3/2} to make the LQ method applicable to our problem. Based on the above arguments and approximations, the cost function for optimal control design is defined as follows,

    J_i = Σ_{k=1}^{U·S} ( R_F F̃_i^2(k) + R_{W,i} (W_i(k) - W_i)^2 )    (4)

for i = 1,...,N. Next, we first use system identification techniques to build a dynamic model between the new control input F̃_i and the response time W_i, after which an LQ control law is derived.

System Identification. We approximate the dynamic relation from F̃_i to the response time W_i by a linear second-order ARX model as follows:

    W_i(k+2) = a_{i,1} W_i(k+1) + a_{i,2} W_i(k) + b_{i,1} F̃_i(k+1) + e_i(k+2)    (5)

or in a state-space form,

    [ W_i(k+1) ; W_i(k+2) ] = H_i [ W_i(k) ; W_i(k+1) ] + G_i F̃_i + [ 0 ; e_i(k+2) ]    (6)

where H_i = [ 0 1 ; a_{i,2} a_{i,1} ], G_i = [ 0 ; b_{i,1} ], and e_i denotes the noise in the system, k = 1,...,U·S-2. A second-order model is chosen here as a balance between model complexity and accuracy in fitting the empirical data.
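The second-order ARX fit of Eq. (5) can be reproduced with ordinary least squares; the paper uses Matlab's system identification toolbox, so this NumPy version is only an illustrative sketch.

```python
import numpy as np

def fit_arx(W, F_tilde):
    """Least-squares fit of Eq. (5):
    W(k+2) = a1*W(k+1) + a2*W(k) + b1*F_tilde(k+1) + e(k+2)."""
    W = np.asarray(W, dtype=float)
    F_tilde = np.asarray(F_tilde, dtype=float)
    # One regressor row per k: [W(k+1), W(k), F_tilde(k+1)]
    X = np.column_stack([W[1:-1], W[:-2], F_tilde[1:-1]])
    y = W[2:]
    (a1, a2, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a1, a2, b1
```

On noise-free data generated from known coefficients, the fit recovers them exactly; with the real traces the residual e_i absorbs the modeling error.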
For system identification, a pseudo-random binary signal is used to generate the server frequency trajectory F̃_i, and part of the input workload is used to produce the corresponding response time trajectory W_i; then the data set (F̃_i, W_i) is used by the standard system identification toolbox in Matlab to compute the model coefficients in (Eq. 5). It should be noted that even though we use a real trace of HTTP requests for system identification (which is the same as that used in generating prediction models for queuing in Section 5.1.1), this trace is for a time period different from what is used in the actual simulations and evaluations. Since the coefficient values in Equation 5 depend on the sampling granularity of the empirical data, for each time granularity (T, t) that we use in our evaluations an individual system model (Eq. 5) is constructed for control design. Details on the accuracy of the model can be found in [13].

Control Law. Given the cost function (Eq. 4) together with the linear dynamic equation (Eq. 6), the Linear-Quadratic regulator provides an optimal solution that is always stabilizing. The resulting LQ control input F̃_i is represented as the product of an optimal feedback gain and the tracking error in meeting response time,

    F̃_i(k+1) = -R_F^{-1} G_i^T Π_i ( [ W_i(k) ; W_i(k-1) ] - [ W_i ; W_i ] )    (7)

In terms of LQ control theory, the negative feedback gain -R_F^{-1} G_i^T Π_i is determined by solving for Π_i from the following Riccati equation,

    H_i^T Π_i H_i - Π_i - (H_i^T Π_i G_i)(R_F + G_i^T Π_i G_i)^{-1} (G_i^T Π_i H_i) + R_{W,i} = 0    (8)

The LQ solution can be easily calculated using the existing Matlab toolbox. The aggregate frequency F_i for the i-th application is then calculated by F_i = F̃_i^{2/3}. In the implementation of the LQ regulator, an integrator is appended to reduce the steady-state tracking error in meeting the response time SLA. Thus the overall controller is

    F_i(k+1) = { F̃_i(k) - R_F^{-1} G_i^T Π_i ( [ W_i(k) ; W_i(k-1) ] - [ W_i ; W_i ] ) }^{2/3}    (9)

for k = 1,...,U·S-1. The weights R_F and R_{W,i} in (Eq. 4) are design parameters.
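The gain in Eqs. (7)-(8) can also be obtained numerically; the paper uses Matlab's toolbox, and the sketch below instead iterates the discrete Riccati equation to a fixed point for the 2x2 model of Eq. (6), taking the state weight as R_W times the identity (an assumption, since the paper only gives the scalar tracking weight).

```python
import numpy as np

def lq_gain(a1, a2, b1, R_F, R_W, iters=500):
    """Iterate the discrete Riccati equation (Eq. 8) for the model of
    Eq. (6) and return the LQ feedback gain K and the solution P."""
    H = np.array([[0.0, 1.0], [a2, a1]])   # state matrix H_i
    G = np.array([[0.0], [b1]])            # input matrix G_i
    Q = R_W * np.eye(2)                    # state weight (assumed form)
    P = Q.copy()
    for _ in range(iters):                 # fixed-point Riccati iteration
        S = R_F + (G.T @ P @ G).item()
        P = H.T @ P @ H - (H.T @ P @ G) @ (G.T @ P @ H) / S + Q
    K = (G.T @ P @ H) / (R_F + (G.T @ P @ G).item())
    return K, P
```

With stable toy coefficients (e.g. a1 = 0.5, a2 = 0.2, b1 = 0.3), the closed loop H - G·K has all eigenvalues inside the unit circle, consistent with the "always stabilizing" property quoted above.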
The rule of thumb is to set R_F as 1/(F̃_{i,max})^2 and R_{W,i} as 1/(W_{i,max} - W_i)^2, where the subscript max denotes the maximum possible value.

5.2.2 Server Allocation

Given the aggregate frequency F_i(u, s), an on-line optimization is formulated to allocate servers to each application. This is based on the tradeoff between the energy cost of running servers at higher frequencies versus the cost of turning on more servers. Define m(u) = Σ_{i=1}^{N} m_i(u), which denotes the total number of servers turned on across all applications, and define F(u) = Σ_{i=1}^{N} (max_{s=1,...,S} F_i(u, s)), which denotes the total capacity that m(u) should provide in each time interval of T. We allocate the number of servers for each application m_i(u) proportionally to its aggregate frequency, i.e.,

    m_i(u) = m(u) · ( max_s F_i(u, s) / F(u) )    (10)

Since server allocation occurs at u = 1,...,U, by using f_i(u, s) = F_i(u, s)/m_i(u) ≈ F(u)/m(u) in the cost function defined in Section 4, we minimize the following cost,

    Σ_{u=1}^{U} K_$ m(u) T (P_fixed + P_f (F(u)/m(u))^3) + Σ_{u=1}^{U} B_0 [m(u) - m(u-1)]^+    (11)

Each server's frequency should be operated within [f_min, f_max]. Consequently, to provide the aggregate frequency F(u), the total number of servers allowed to run across all applications at u is constrained by m̲(u) ≤ m(u) ≤ m̄(u), where m̲(u) = ⌈F(u)/f_max⌉ and m̄(u) = min(⌈F(u)/f_min⌉, M). Equation (11) is a cost function with decision variable m(u). We compute m(u) at the beginning of each u, but at that time the value of F calculated based on the feedback control of Section 5.2.1 is

only up to F(u-1), and the values of F(u) to F(U) are not available. Therefore, an on-line greedy algorithm is designed to solve for m(u) in minimizing Equation (11). Further, we use the capacity from the previous interval, F_i(u-1, s) and F(u-1), to calculate m_i(u) in Equation (10). A pseudo-code of this on-line algorithm can be found in [13]. The general idea behind this on-line server allocation algorithm is explained as follows. If we ignore the server turn-on costs in Equation (11), minimizing the number of servers (denoted by m*(u)) for the energy cost can be achieved by setting the first derivative of Equation (11) with respect to m(u) to zero, i.e.

    d/dm(u) [ K_$ m(u) T (P_fixed + P_f (F(u)/m(u))^3) ] = K_$ T ( P_fixed - 2 P_f F^3(u)/m^3(u) ) = 0    (12)

which gives

    m*(u) = F(u) (2 P_f / P_fixed)^{1/3}    (13)

This m*(u) gives the break-even number of servers based on the trade-off between the increase in cost due to turning on more servers, and the corresponding decrease in power expended because of allowing them to operate at a lower frequency. If we add one more server beyond m*(u), the increase in cost due to the addition will outweigh the cost reduction due to the decrease in server frequency. Therefore, the number of servers turned on, m(u), should always be less than m*(u), even without considering the cost of server turn-ons. If we take into account the cost of turning on servers (B_o), at any u, u = 1,...,U, there are two cases:

1. If the number of running servers in the previous time period m(u-1) is higher than the minimum of the break-even number m*(u) and the upper bound m̄(u), the number of servers m(u) should be brought down to min(m*(u), m̄(u)) - there is no associated boot cost for this.

2. Otherwise, additional servers beyond m(u-1) could be turned on, but the total number of running servers is still upper-bounded by min(m*(u), m̄(u)).

For the second case, the number of servers is determined iteratively.
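The two cases above can be sketched as follows. The break-even count follows Eq. (13), while the growth test uses a simplified payoff check (per-interval saving over the remaining horizon against the boot cost B_0) rather than the paper's full look-ahead search; all parameter names are illustrative.

```python
def break_even_servers(F, p_fixed, p_f):
    """m*(u) = F(u) * (2*P_f / P_fixed)**(1/3), from Eq. (13)."""
    return F * (2.0 * p_f / p_fixed) ** (1.0 / 3.0)

def saving_rate(m, F, p_fixed, p_f, k_dollar, T):
    """Per-interval power-cost drop from adding one more server at count m
    (the negative of the derivative of the first term of Eq. (11))."""
    return k_dollar * T * (-p_fixed + 2.0 * p_f * F ** 3 / m ** 3)

def allocate(u, U, m_prev, F, m_upper, B0, p_fixed, p_f, k_dollar, T):
    """Two-case on-line rule: shrink to the break-even cap for free, or
    grow one server at a time while a boot still pays for itself."""
    cap = min(int(break_even_servers(F, p_fixed, p_f)), m_upper)
    if m_prev >= cap:                  # case 1: shrink, no boot cost
        return cap
    m = m_prev                         # case 2: grow iteratively
    horizon = U - u + 1                # intervals left to recoup a boot
    while m < cap and horizon * saving_rate(m + 1, F, p_fixed, p_f,
                                            k_dollar, T) > B0:
        m += 1
    return m
```

For instance, with an aggregate demand of 40 GHz and illustrative coefficients P_fixed = 52.6 W and P_f = 2.7 W/GHz^3, the break-even count is about 18.7 servers: adding an 18th server still reduces cost, while a 19th would increase it (the saving rate turns negative).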
Assuming that the number of servers has been increased from m(u-1) to m̃, turning on one more server beyond m̃ will add a one-time boot cost B_0, while it will reduce power cost by K_$ (j - u + 1) T ( -P_fixed + 2 P_f F^3(u)/m̃^3 ) (the negative of the derivative of the first term in Equation (11) at m(u) = m̃) only if no server will be turned off during time u to time j. The latter condition is checked by verifying that m̃ is less than the minimum of m*(u) and m̄(u) up to time j. Note that at the current time period u, we do not have the future aggregate frequency F(j) for j > u to calculate m̄(j) - we use an estimated value to restrict m̃ - which could cause the algorithm to become more greedy. Based on this argument, the algorithm searches for whether there exists such a j within the whole of the U time periods during which the saving in power cost is larger than the one-time boot cost of turning on one more server. More servers are turned on as long as such a j exists, until the number of running servers touches its upper bound. After the aggregate frequency and the number of servers for each application (Equation (10)) have been determined, the server frequency is calculated as

    f_i(u, s) = D( F_i(u, s) / m_i(u) )    (14)

for u = 1,...,U and s = 1,...,S, where D(·) denotes rounding the frequency up to the discrete levels in F = (f_1, f_2, ..., f_l).

5.3 Hybrid Approach

The queuing approach, described above, can be viewed as pro-active, since it predicts workload behavior for the near future and tunes server allocation and frequency settings appropriately. While it can do well when the predictions are fairly accurate and the steady-state stochastic behavior is obeyed (requiring a fairly coarse-grained window), it may not be adaptive enough for fine-grained transient behavior. On the other hand, the control-theoretic approach can be viewed as reactive, since it bases its decisions on feedback (we are using feedback control here rather than feed-forward), and is thus expected to be better at finer granularities, though it can possibly miss out on the predictive information for a longer granularity of optimization.
This leads us to consider a hybrid scheme, where we use the predictive information of the queuing approach to determine server allocation (Section 5.1) at a granularity of T, and the feedback-based control-theoretic approach for frequency setting (Section 5.2.1) at the smaller granularity of t. Note that this strategy is also in agreement with our underlying philosophy, where server provisioning costs (in terms of time for server turn-on and application migration) are anticipated to be much higher than frequency control, and provisioning is thus expected to be done less frequently (i.e. T > t).

6. EXPERIMENTAL EVALUATION

6.1 Experimental Set Up

We used real HTTP traces obtained from [1] during the last week of September and early October 2004 for our evaluation, whose arrival time characteristics are shown in Figure 4. There are 3 traces in all, denoted as Applications 1 to 3, with each trace being for a 3-day duration. We use the data of the first 2 days (called training data) to build the prediction model for the Queuing approach and the system identification model for the control-theoretic approach, while only the 3rd day's trace is used in the actual simulation/evaluation. We mix these three applications in our experiments to get different workloads, WL1 to WL3: WL1 contains only one application, namely Application 2; WL2 includes both Applications 1 and 2; while WL3 includes all 3 applications. The techniques have been evaluated using a simulator built on top of the CSIM simulation package. The simulator takes the server frequency f_i and the number of servers m_i for each time period as inputs, and generates the response time W_i as system output for a given static HTTP request trace. During the simulation, it also calculates the energy consumed, the number of reboots, and the cost of operation. The costs of reboots and of migrating a server from one application to another are inputs to the simulator. In the interest of space, we give a brief description of the simulation model below, and the reader is referred to [13] for further details.
Since we are primarily interested in the CPU power, which as explained in Section 2 dominates over the other components, we assume that the static HTTP requests hit in the cache. Further, as pointed out in [15], 99% of the HTTP requests can be easily served from the cache. We ran microbenchmarks of requests with different file sizes on a server machine that hit in the cache, and used these as the service times for the requests at the highest operating frequency (2.6 GHz). As expected, the relationship between file sizes and service times was more or less linear. We also conducted similar experiments on a laptop with DVS capabilities to confirm that the service time is inversely proportional to the operating frequency of the CPU, and appropriately adjusted the service times in our simulation model for the server-class CPU. Each node in the simulated cluster uses these service times in serving requests in FCFS order. The maximum power consumption at the highest frequency in our simulator is 100 W, which roughly matches the consumption of current server CPUs. We then use the well-known cubic relationship between frequency and power to model the power at each discrete frequency, which is similar to the technique in [15]. We restrict the results presented here to the parameter values shown in Table 2 in the interest of space. The table gives the different possible CPU frequencies and their associated power consumption values. The electricity cost (K_$) is close to that being charged currently, and the dollar cost associated with wear-and-tear per reboot is based approximately on the cost of a disk replacement (say $200 including personnel) and the rated MTBF (say 40,000 start-stop cycles). Though our model allows a per-application W_i, in the interest of clarity we use the same target response time of 6 ms for all applications. We have ensured that this is achievable for each workload by provisioning an appropriate number of servers (M) as given in the table. We consider different combinations of T and t, with t ≤ T, i.e. DVS being done much more frequently than server allocation, with (T, t) being used to denote the experiment.

    Parameter                   Value
    F (in GHz)                  (1.4, 1.57, 1.74, 1.91, 2.08, 2.25, 2.42, 2.6)
    P (in Watts for each f)     (60, 63, 66.8, 71.3, 76.8, 83.2, 90.7, 100)
    K_$ (cents/kWh)             10
    C_r (cents/boot)            20,000/40,000 = 0.5
    B_o (cents/boot)            C_r
    T_reboot, T_migrate         90 secs, 20 secs
    M                           13 (for WL1), 31 (for WL2), 39 (for WL3)
    W (in msec)                 6
    T (in minutes)              (60, 20, 10)
    t (in minutes)              (60, 20, 10, 5) with t ≤ T

    Table 2: Default Parameter Values for Simulation

6.2 Results

Need for Sophisticated Strategies. Before evaluating our schemes, we first present some results to motivate their need, by examining the following three simple schemes running WL1: 1.
Fixed servers, constant frequency during the entire experiment: Results with such a scheme for the maximum number of servers (= 13), obtained by running the experiment in a brute-force fashion at each frequency level, are shown in Table 3 (a). If we want any viable option with this scheme (i.e. RT < 6.6), the lowest cost we can get is 196 cents, which is around 15% higher than what Queuing can provide at (60,60) - see Table 4 - and we can do even better at other granularities.

2. Highest frequency, constant number of servers during the entire experiment: This corresponds to statically configuring the hosting center with the minimum number of servers needed to meet the SLA if they were all to run at the maximum speed. Looking at the results for this scheme in Table 3 (b), we see that it takes at least 8 servers operating at full frequency before this becomes a viable option. Even at this number of servers, the energy consumption and cost are higher than for our schemes at different time granularities (compare with Table 4).

3. Maximum number of servers, switching between f_min and f_max: Whenever there is no request, a server switches to the minimum frequency (no cost is assumed for such switching), and whenever a request arrives it switches to the maximum frequency until it has finished serving the request. Clearly, the response time is no different from operating at the maximum frequency. However, the energy (72.9% of having all servers operate at the highest frequency) and the cost are much higher than what we can get with our schemes.

[Figure 4: Time-Varying Arrival Rates for Applications 1-3; (a) Training Data, (b) Experiment Data.]

Metrics. The main statistics that we present here include (i) the objective function (i.e. Cost), (ii) the energy consumption (Energy %) expressed as a percentage of the energy that would be consumed if all the servers were running all the time at the highest frequency, i.e. without any power management, (iii) the number of reboots (#reboots), and (iv) the mean response time (RT).
When evaluating any scheme, we want to first check whether it is able to meet the response time SLA (i.e. a target of 6 ms), before checking its energy consumption or cost. However, it should be noted that some schemes may very marginally violate this SLA while saving substantial cost. In order not to penalize such a scheme, we allow a 10% slack in the SLA, i.e. we consider schemes that can go as high as 6.6 ms in mean response time when comparing them, and we term such a scheme a viable option. Note that the first two schemes require a brute-force evaluation of all alternatives, and are thus not suitable in practice. Our point in showing their results was to demonstrate that even with a very good static server capacity allocation mechanism for the hosting center, a dynamic power management strategy can provide better savings due to the time-varying behavior of the workload. One could envision the above third scheme to be a no-brainer power management strategy if there were no costs to transition between frequencies. However, with reasonable load conditions, there is not that much scope for operating at lower frequencies. The reader should note that we could use any of our three proposals in conjunction with this third simple mechanism in order to dynamically set the maximum frequency at which to serve a request (and to otherwise bring it down to f_min). However, we do not consider that in the evaluations.

[Table 3: Results for the simple strategies. (a) Fixed Servers = 13, Constant Frequency - columns: Freq. (GHz), Cost (cents), RT (ms), Energy (%). (b) Max. Freq. = 2.6 GHz, Constant Servers - columns: Servers (#), Cost (cents), RT (ms), Energy (%).]
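The cubic frequency-power model used by the simulator can be checked against Table 2: fitting P(f) = P_fixed + P_f·f^3 to just the two endpoint rows (60 W at 1.4 GHz, 100 W at 2.6 GHz) reproduces the intermediate table entries to within about 0.1 W. The fitted coefficients are derived here and are not stated in the paper.

```python
# Frequency (GHz) and power (W) values from Table 2.
freqs = [1.4, 1.57, 1.74, 1.91, 2.08, 2.25, 2.42, 2.6]
watts = [60, 63, 66.8, 71.3, 76.8, 83.2, 90.7, 100]

# Fit P(f) = p_fixed + p_f * f^3 through the two endpoint rows.
p_f = (watts[-1] - watts[0]) / (freqs[-1] ** 3 - freqs[0] ** 3)
p_fixed = watts[0] - p_f * freqs[0] ** 3

model = [p_fixed + p_f * f ** 3 for f in freqs]
max_err = max(abs(p - w) for p, w in zip(model, watts))
```

The fit gives a fixed component of roughly 52.6 W and a cubic coefficient of roughly 2.7 W/GHz^3, which is why lowering the frequency saves power only up to the break-even point discussed in Section 5.2.2.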

[Table 4: Comparing the Schemes for the three workloads WL1-WL3. For each (T,t) combination - (60,60), (60,20), (60,10), (60,5), (20,20), (20,10), (20,5), (10,10), (10,5) - the Queuing, Control, and Hybrid schemes (and IBM where T = t) are compared on Cost (cents), RT (ms), Energy (%), and #reboots. Only the marked costs are viable.]

Comparison with the IBM method. The most closely related power management mechanism, from IBM [15], uses a rather simple feedback control mechanism to determine the frequency and number of servers for the next interval given those of the current interval and the observed utilization/response time, and these two mechanisms are integrated (i.e. T = t). It is a purely feedback-based mechanism that does not use any prediction, and it is not intended to manage server allocation across applications. In the interest of fairness, we should mainly compare results for WL1 (the single-application workload in Table 4), and even there we see that while their method gives better energy savings (around 70%), it is not viable (i.e. the response times never meet the required SLA). Since their method treats response time on a best-effort basis, rather than as a constraint to obey, energy optimization takes center stage, leading to much worse response times. As mentioned earlier, the SLA is usually much more important (the revenue earner) to the hosting center. Our schemes are able to meet the SLA in many cases, while still providing reasonable energy savings (around 50%).

Comparing our schemes. Table 4 can be used to compare our three strategies for the three workloads. We mainly focus on the viable executions, i.e. those where the average response time is less than 6.6 ms. As described earlier, the pro-active Queuing approach uses predictions to anticipate workload behavior and steady-state analysis, both of which need larger time granularities for better accuracy.
At these larger granularities, the control-theoretic approach does not have any prediction information to optimize for the next interval, relying on feedback from the last interval that is again coarse-grained. Consequently, we see that the Queuing approach does much better than the reactive scheme at large (T,t) granularities. For example, at (60,60), Queuing is around 10% better in cost than Control for WL1. To understand the execution characteristics of these experiments at large (T,t), we give an example execution of Application 3 in WL3 in terms of the number of servers provisioned (m_i), their frequency (f_i) and the corresponding response time (W_i) over time in Figure 5 (b). If we examine the response time behavior of Control at (60,60), we see that there is a spike at hour 7. This is because there was a load burst in this hour, which the mechanism could not detect at the end of hour 6 (because there is only feedback). On the other hand, Queuing has advance knowledge of the workload for the next hour, and is able to make better judgments on server allocation. This also reflects on the corresponding frequency that is being set, resulting in the lower cost for Queuing. When we move to the other extreme of small time granularities (say (10,5)), we see that the inaccuracies of the Queuing strategy significantly impact its viability, and this option does not meet the SLA in any of the 3 workloads. However, by getting very frequent feedback from the system, the reactive scheme is able to do a better job of server allocation and frequency modulation to lower the costs. The result of the inaccuracies of Queuing at small (T,t) is obvious in Figure 5 (a), which shows the time-varying behavior for Application 2 in WL3. The Queuing scheme appears to under-estimate the required capacity, thus turning down more servers than Control. This results in a higher response time. The Hybrid scheme, which is intended to benefit from the merits of these two schemes, does give a cost between the two at either extreme of time granularities. When we look at the large T and short t ranges (e.g.
(60,5)), Hybrid benefits from the pro-active mechanism for server allocation of the Queuing strategy (it uses the same number of servers as Queuing), and the reactive feedback

from the underlying system to set frequencies based on short-term variations to meet the response time SLA.

[Figure 5: Time-Varying Behavior for Appln. 2 and Appln. 3 in WL3 - allocated servers, frequency (GHz), and mean response time (msec.) over time; panel (a) compares Q(10,5), H(60,5) and C(10,5) for Application 2, and panel (b) compares Q(60,60), H(60,5) and C(60,60) for Application 3.]

6.3 Varying the Parameters

One would be interested in finding out how these schemes scale as we move to a larger environment with many more servers hosting more applications. We have conducted an experiment with M = 285 servers running N = 30 applications. Since it is difficult to procure so many different traces, we take the three traces used earlier and replicate them with randomized phase differences to synthetically generate workloads. Results for this workload are given in Table 5 for representative (T,t) granularities, viz., (60,60), (60,5) and (10,5). As before, the Queuing approach is more viable at the larger time granularity, and its inaccuracies cause a significant degradation in response times at the finer granularity. In fact, these inaccuracies at fine granularities result in a poor choice of server allocation in Hybrid at (10,5). However, at the (60,5) granularity, Hybrid is again able to benefit from the pro-active prediction of Queuing for server allocation and the feedback of Control for frequency modulation, giving the lowest response times. Note that by modulating the parameters of our system model we can target our optimizations at a wide spectrum of systems, operating conditions, and cost functions. For instance, setting T to ∞ implies conducting server provisioning once (i.e. a static configuration of the number of servers), and then managing power only with DVS.
Similarly, setting t to ∞ implies that only server turn-off/on is employed for power management.

[Table 5: Results for the Workload with 30 applications and 285 servers - Cost (cents), RT (ms) and #reboots for Queuing, Control and Hybrid at (60,60), (60,5) and (10,5).]

We have also conducted experiments varying the dollar cost (B_o) and time (T_reboot) of reboots, and the time for application migration (T_migrate). The reader is referred to [13] for these results. Further, we can ignore the current objective function, set F = f_max, and find the minimum number of servers needed to meet the W SLA for each application at any time. This is the traditional dynamic server provisioning problem, without considering energy consumption. We have conducted such an experiment for WL3, and we find that the total number of servers needed is 29, where the m_i keep changing over time (at a 60-minute granularity using Queuing). This is lower than the 33 servers that would be needed if each of the applications were run in isolation, i.e. if the server farm were partitioned between the applications to ensure the SLA for each. This is because the peak demands across applications do not necessarily come at the same time. Note that we are able to optimize energy by considering server provisioning and DVS simultaneously. On the other hand, schemes such as [15] rely on a higher-level provisioning mechanism, after which they can perform their energy optimizations.

7. CONCLUDING REMARKS

This paper has presented the first formalism for the problem of reducing server energy consumption at hosting centers running multiple applications, towards the goal of meeting performance-based SLAs for client requests. Though prior studies have shown energy savings for server clusters through server turn-offs and DVS, these savings come at a rather expensive cost in terms of violating the performance-based SLAs (which are extremely important to maintaining the revenue stream).
Further, previous proposals have not considered the cost of server turn-offs, not just in terms of time overheads, but also in the wear-and-tear of components over an extended period of time. Our solution strategies couple (i) server provisioning for the different applications and (ii) DVS towards enhancing power savings, while still allowing different mechanisms for achieving these two actions, unlike any prior work. We have presented three new solution strategies for this problem. The first is a pro-active solution (Queuing) that predicts workload behavior for the near future, and uses this information to conduct non-linear optimization based on steady-state queuing analysis. The second is a reactive solution (Control), which uses periodic feedback of system execution in a control-theoretic framework to achieve the goals. While Queuing can be a better alternative than Control when the workload behavior for the near future is different from the recent past, its steady-state assumptions may not hold over shorter granularities. Feedback is preferable at these shorter time granularities, but the information obtained at the last monitoring period may not be very recent at larger time granularities. The Hybrid strategy, on the other hand, can use the predictive information of Queuing at coarse

time granularities for server provisioning, and the feedback of Control at short granularities for DVS. This Hybrid scheme is also well suited to a more practical setting where one would like to perform server provisioning less frequently (due to the high time overhead of bringing up servers or migrating applications), and perform DVS control more frequently. We have demonstrated these ideas using real web server traces, showing how our generic framework can capture a wide spectrum of system configurations, workload behaviors, and targets for optimization. Finally, we would like to mention that an implementation of the proposed strategies is not time-consuming. For instance, in the experiment with 285 servers running 30 applications, each server allocation invocation for Hybrid (done once in 60 minutes) takes around 30 seconds, and the frequency calculations (done once in 5 minutes) take less than a second; these would not even need to run on the actual server nodes. By doing this control, we are saving on average around $35 per day in electricity for the 285-server system. If we consider a more realistic hosting center with 10 times as many servers, this can work out to savings of over $125K per year. We are currently prototyping this framework on a server cluster. We are also refining our solution strategies further, and evaluating them with a wider spectrum of workloads.

8. ACKNOWLEDGEMENTS

This research has been funded in part by NSF grants and an IBM Faculty Award.

9. REFERENCES

[1] Web Caching project.
[2] Intel Outlines Platform Innovations for More Manageable, Balanced and Secure Enterprise Computing. Intel Press Release, February.
[3] T. Abdelzaher, K. G. Shin, and N. Bhatti. Performance guarantees for Web server end-systems: A control-theoretical approach. IEEE Transactions on Parallel and Distributed Systems, 13(1).
[4] B. D. O. Anderson and J. B. Moore. Optimal Control: Linear Quadratic Methods. Prentice Hall.
[5] K. Appleby, S. Fakhouri, L. Fong, G. Goldszmidt, M. Kalantar, S. Krishnakumar, D. Pazel, J. Pershing, and B. Rochwerger.
Oceano-SLA Based Management of a Computing Utility. In Proceedings of the IEEE/IFIP Integrated Network Management, May.
[6] P. Bohrer, M. Elnozahy, M. Kistler, C. Lefurgy, C. McDowell, and R. Rajamony. The Case for Power Management in Web Servers. In R. Graybill and R. Melhem, editors, Power Aware Computing. Kluwer Academic Publishers.
[7] G. Bolch, S. Greiner, H. de Meer, and K. S. Trivedi. Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications. John Wiley, New York.
[8] E. Carrera, E. Pinheiro, and R. Bianchini. Conserving Disk Energy in Network Servers. In Proceedings of the 17th International Conference on Supercomputing.
[9] A. Chandra, P. Goyal, and P. Shenoy. Quantifying the Benefits of Resource Multiplexing in On-Demand Data Centers. In Proceedings of the First ACM Workshop on Algorithms and Architectures for Self-Managing Systems, June.
[10] A. Chandrakasan and R. W. Brodersen. Low-Power CMOS Design. Wiley-IEEE Press.
[11] J. Chase, D. Anderson, P. Thakur, and A. Vahdat. Managing Energy and Server Resources in Hosting Centers. In Proceedings of the 18th Symposium on Operating Systems Principles (SOSP '01), October.
[12] J. Chase and R. Doyle. Balance of Power: Energy Management for Server Clusters. In Proceedings of the 8th Workshop on Hot Topics in Operating Systems, May.
[13] Y. Chen, A. Das, W. Qin, A. Sivasubramaniam, Q. Wang, and N. Gautam. Managing Server Energy and Operational Costs in Hosting Centers. Technical Report CSE , The Pennsylvania State University, February.
[14] J. G. Elerath. Specifying Reliability in the Disk Drive Industry: No More MTBFs. In Proceedings of the Annual Reliability and Maintainability Symposium.
[15] M. Elnozahy, M. Kistler, and R. Rajamony. Energy-Efficient Server Clusters. In Proceedings of the Second Workshop on Power Aware Computing Systems, February.
[16] M. Elnozahy, M. Kistler, and R. Rajamony. Energy Conservation Policies for Web Servers. In Proceedings of the 4th USENIX Symposium on Internet Technologies and Systems, March.
[17] K. Flautner, S. Reinhardt, and T.
Mudge. Automatic performance setting for dynamic voltage scaling. In Proceedings of the 7th Annual International Conference on Mobile Computing and Networking.
[18] N. Gandhi, S. Parekh, J. Hellerstein, and D. Tilbury. Feedback Control of a Lotus Notes Server: Modeling and Design. In Proceedings of the American Control Conference.
[19] D. Grunwald, P. Levis, K. I. Farkas, C. B. Morrey III, and M. Neufeld. Policies for Dynamic Clock Scheduling. In Proceedings of the Symposium on Operating Systems Design and Implementation, pages 73-86.
[20] S. Gurumurthi, A. Sivasubramaniam, M. Kandemir, and H. Franke. DRPM: Dynamic Speed Control for Power Management in Server Class Disks. In Proceedings of the International Symposium on Computer Architecture.
[21] D. P. Helmbold, D. E. Long, T. L. Sconyers, and B. Sherrod. Adaptive disk spindown for mobile computers. Mob. Netw. Appl., 5(4).
[22] J. Jones and B. Fonseca. Energy Crisis Pinches Hosting Vendors. /01/01/08/010108hnpower.xml.
[23] C. Lefurgy, K. Rajamani, F. Rawson, W. Felter, M. Kistler, and T. W. Keller. Energy Management for Commercial Servers. IEEE Computer, 36(12):39-48.
[24] K. Li, R. Kumpf, P. Horton, and T. E. Anderson. Quantitative Analysis of Disk Drive Power Management in Portable Computers. Technical report, University of California at Berkeley.
[25] J. R. Lorch and A. J. Smith. Improving Dynamic Voltage Scaling Algorithms with PACE. In Proceedings of ACM SIGMETRICS, June.
[26] C. Lu, T. F. Abdelzaher, J. A. Stankovic, and S. H. Son. A Feedback Control Approach for Guaranteeing Relative Delays in Web Servers. In Proceedings of the Seventh Real-Time Technology and Applications Symposium, page 51.
[27] C. D. Patel, C. E. Bash, C. Belady, L. Stahl, and D. Sullivan. Computational Fluid Dynamics Modeling of High Compute Density Data Centers to Assure System Inlet Air Specifications. In Proceedings of the Pacific Rim ASME International Electronic Packaging Technical Conference and Exhibition (IPACK).
[28] T. Pering, T. Burd, and R. Brodersen. The simulation and evaluation of dynamic voltage scaling algorithms.
In Proceedings of the 1998 International Symposium on Low Power Electronics and Design, pages 76-81.
[29] E. Pinheiro, R. Bianchini, E. V. Carrera, and T. Heath. Load Balancing and Unbalancing for Power and Performance in Cluster-Based Systems. In Proceedings of the Workshop on Compilers and Operating Systems for Low Power, September.
[30] P. Pradhan, R. Tewari, S. Sahu, A. Chandra, and P. Shenoy. An Observation-based Approach Towards Self-managing Web Servers. In Proceedings of the ACM/IEEE International Workshop on Quality of Service, May.
[31] S. Ranjan, J. Rolia, H. Fu, and E. Knightly. QoS-Driven Server Migration for Internet Data Centers. In Proceedings of the International Workshop on QoS, May.
[32] K. Roy and S. Prasad. Low-Power CMOS VLSI Circuit Design. John Wiley and Sons, New York.
[33] V. Sharma, A. Thomas, T. Abdelzaher, and K. Skadron. Power-aware QoS Management in Web Servers. In Proceedings of the Real-Time Systems Symposium, December.
[34] K. Shen, H. Tang, T. Yang, and L. Chu. Integrated resource management for cluster-based Internet services. SIGOPS Oper. Syst. Rev., 36(11).
[35] D. S. Stoffer and R. H. Shumway. Time Series Analysis and Its Applications. Springer Verlag, New York.
[36] B. Urgaonkar and P. Shenoy. Cataclysm: Handling Extreme Overloads in Internet Services. In Proceedings of ACM Principles of Distributed Computing, July.
[37] M. Weiser, B. Welch, A. J. Demers, and S. Shenker. Scheduling for Reduced CPU Energy. In Proceedings of the Symposium on Operating Systems Design and Implementation, pages 13-23.
[38] Q. Zhang, E. Smirni, and G. Ciardo. Profit-driven Service Differentiation in Transient Environments. In Proceedings of the 11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, 2003.
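To make the Hybrid scheme's division of labor concrete, the sketch below pairs a coarse-grained, queuing-based server-count calculation with a fine-grained proportional-feedback update of normalized CPU frequency. This is an illustrative sketch only, not the paper's formulation: the M/M/1-style per-server response-time model, the function names, the feedback gain, and the frequency bounds are all assumptions introduced here.

```python
import math

# Illustrative sketch only. The M/M/1 response-time model, the proportional
# gain, and the frequency bounds are assumed for exposition; the actual
# strategies in the paper use a richer non-linear optimization.

def servers_needed(arrival_rate, service_rate, sla_resp_time):
    """Smallest server count k whose per-server M/M/1 mean response time
    1/(mu - lambda/k) meets the SLA target (coarse-grained provisioning)."""
    k = math.ceil(arrival_rate / service_rate) + 1  # start just above saturation
    while 1.0 / (service_rate - arrival_rate / k) > sla_resp_time:
        k += 1
    return k

def feedback_frequency(freq, resp_time, sla_resp_time,
                       gain=0.1, f_min=0.6, f_max=1.0):
    """Proportional feedback on normalized CPU frequency (fine-grained DVS):
    speed up when measured response time exceeds the SLA, else slow down."""
    error = (resp_time - sla_resp_time) / sla_resp_time
    return min(f_max, max(f_min, freq + gain * error))
```

A provisioning loop would call `servers_needed` at the coarse granularity (e.g., hourly) from predicted arrival rates, and `feedback_frequency` at the fine granularity (e.g., every few minutes) from measured response times, mirroring the 60-minute/5-minute schedule reported above.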


More information

Dynamic Pricing for Smart Grid with Reinforcement Learning

Dynamic Pricing for Smart Grid with Reinforcement Learning Dynamc Prcng for Smart Grd wth Renforcement Learnng Byung-Gook Km, Yu Zhang, Mhaela van der Schaar, and Jang-Won Lee Samsung Electroncs, Suwon, Korea Department of Electrcal Engneerng, UCLA, Los Angeles,

More information

Cloud-based Social Application Deployment using Local Processing and Global Distribution

Cloud-based Social Application Deployment using Local Processing and Global Distribution Cloud-based Socal Applcaton Deployment usng Local Processng and Global Dstrbuton Zh Wang *, Baochun L, Lfeng Sun *, and Shqang Yang * * Bejng Key Laboratory of Networked Multmeda Department of Computer

More information

APPLICATION OF PROBE DATA COLLECTED VIA INFRARED BEACONS TO TRAFFIC MANEGEMENT

APPLICATION OF PROBE DATA COLLECTED VIA INFRARED BEACONS TO TRAFFIC MANEGEMENT APPLICATION OF PROBE DATA COLLECTED VIA INFRARED BEACONS TO TRAFFIC MANEGEMENT Toshhko Oda (1), Kochro Iwaoka (2) (1), (2) Infrastructure Systems Busness Unt, Panasonc System Networks Co., Ltd. Saedo-cho

More information

行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告

行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告 行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告 畫 類 別 : 個 別 型 計 畫 半 導 體 產 業 大 型 廠 房 之 設 施 規 劃 計 畫 編 號 :NSC 96-2628-E-009-026-MY3 執 行 期 間 : 2007 年 8 月 1 日 至 2010 年 7 月 31 日 計 畫 主 持 人 : 巫 木 誠 共 同

More information

Statistical Methods to Develop Rating Models

Statistical Methods to Develop Rating Models Statstcal Methods to Develop Ratng Models [Evelyn Hayden and Danel Porath, Österrechsche Natonalbank and Unversty of Appled Scences at Manz] Source: The Basel II Rsk Parameters Estmaton, Valdaton, and

More information

CLoud computing technologies have enabled rapid

CLoud computing technologies have enabled rapid 1 Cost-Mnmzng Dynamc Mgraton of Content Dstrbuton Servces nto Hybrd Clouds Xuana Qu, Hongxng L, Chuan Wu, Zongpeng L and Francs C.M. Lau Department of Computer Scence, The Unversty of Hong Kong, Hong Kong,

More information

Heuristic Static Load-Balancing Algorithm Applied to CESM

Heuristic Static Load-Balancing Algorithm Applied to CESM Heurstc Statc Load-Balancng Algorthm Appled to CESM 1 Yur Alexeev, 1 Sher Mckelson, 1 Sven Leyffer, 1 Robert Jacob, 2 Anthony Crag 1 Argonne Natonal Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439,

More information

A Lyapunov Optimization Approach to Repeated Stochastic Games

A Lyapunov Optimization Approach to Repeated Stochastic Games PROC. ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING, OCT. 2013 1 A Lyapunov Optmzaton Approach to Repeated Stochastc Games Mchael J. Neely Unversty of Southern Calforna http://www-bcf.usc.edu/

More information

VoIP Playout Buffer Adjustment using Adaptive Estimation of Network Delays

VoIP Playout Buffer Adjustment using Adaptive Estimation of Network Delays VoIP Playout Buffer Adjustment usng Adaptve Estmaton of Network Delays Mroslaw Narbutt and Lam Murphy* Department of Computer Scence Unversty College Dubln, Belfeld, Dubln, IRELAND Abstract The poor qualty

More information

A Dynamic Energy-Efficiency Mechanism for Data Center Networks

A Dynamic Energy-Efficiency Mechanism for Data Center Networks A Dynamc Energy-Effcency Mechansm for Data Center Networks Sun Lang, Zhang Jnfang, Huang Daochao, Yang Dong, Qn Yajuan A Dynamc Energy-Effcency Mechansm for Data Center Networks 1 Sun Lang, 1 Zhang Jnfang,

More information

An Energy-Efficient Data Placement Algorithm and Node Scheduling Strategies in Cloud Computing Systems

An Energy-Efficient Data Placement Algorithm and Node Scheduling Strategies in Cloud Computing Systems 2nd Internatonal Conference on Advances n Computer Scence and Engneerng (CSE 2013) An Energy-Effcent Data Placement Algorthm and Node Schedulng Strateges n Cloud Computng Systems Yanwen Xao Massve Data

More information

Application of Multi-Agents for Fault Detection and Reconfiguration of Power Distribution Systems

Application of Multi-Agents for Fault Detection and Reconfiguration of Power Distribution Systems 1 Applcaton of Mult-Agents for Fault Detecton and Reconfguraton of Power Dstrbuton Systems K. Nareshkumar, Member, IEEE, M. A. Choudhry, Senor Member, IEEE, J. La, A. Felach, Senor Member, IEEE Abstract--The

More information

Feasibility of Using Discriminate Pricing Schemes for Energy Trading in Smart Grid

Feasibility of Using Discriminate Pricing Schemes for Energy Trading in Smart Grid Feasblty of Usng Dscrmnate Prcng Schemes for Energy Tradng n Smart Grd Wayes Tushar, Chau Yuen, Bo Cha, Davd B. Smth, and H. Vncent Poor Sngapore Unversty of Technology and Desgn, Sngapore 138682. Emal:

More information

Preventive Maintenance and Replacement Scheduling: Models and Algorithms

Preventive Maintenance and Replacement Scheduling: Models and Algorithms Preventve Mantenance and Replacement Schedulng: Models and Algorthms By Kamran S. Moghaddam B.S. Unversty of Tehran 200 M.S. Tehran Polytechnc 2003 A Dssertaton Proposal Submtted to the Faculty of the

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and Ths artcle appeared n a journal publshed by Elsever. The attached copy s furnshed to the author for nternal non-commercal research and educaton use, ncludng for nstructon at the authors nsttuton and sharng

More information

FORMAL ANALYSIS FOR REAL-TIME SCHEDULING

FORMAL ANALYSIS FOR REAL-TIME SCHEDULING FORMAL ANALYSIS FOR REAL-TIME SCHEDULING Bruno Dutertre and Vctora Stavrdou, SRI Internatonal, Menlo Park, CA Introducton In modern avoncs archtectures, applcaton software ncreasngly reles on servces provded

More information

Abstract. 260 Business Intelligence Journal July IDENTIFICATION OF DEMAND THROUGH STATISTICAL DISTRIBUTION MODELING FOR IMPROVED DEMAND FORECASTING

Abstract. 260 Business Intelligence Journal July IDENTIFICATION OF DEMAND THROUGH STATISTICAL DISTRIBUTION MODELING FOR IMPROVED DEMAND FORECASTING 260 Busness Intellgence Journal July IDENTIFICATION OF DEMAND THROUGH STATISTICAL DISTRIBUTION MODELING FOR IMPROVED DEMAND FORECASTING Murphy Choy Mchelle L.F. Cheong School of Informaton Systems, Sngapore

More information

Analysis of Energy-Conserving Access Protocols for Wireless Identification Networks

Analysis of Energy-Conserving Access Protocols for Wireless Identification Networks From the Proceedngs of Internatonal Conference on Telecommuncaton Systems (ITC-97), March 2-23, 1997. 1 Analyss of Energy-Conservng Access Protocols for Wreless Identfcaton etworks Imrch Chlamtac a, Chara

More information

A Novel Methodology of Working Capital Management for Large. Public Constructions by Using Fuzzy S-curve Regression

A Novel Methodology of Working Capital Management for Large. Public Constructions by Using Fuzzy S-curve Regression Novel Methodology of Workng Captal Management for Large Publc Constructons by Usng Fuzzy S-curve Regresson Cheng-Wu Chen, Morrs H. L. Wang and Tng-Ya Hseh Department of Cvl Engneerng, Natonal Central Unversty,

More information

Latent Class Regression. Statistics for Psychosocial Research II: Structural Models December 4 and 6, 2006

Latent Class Regression. Statistics for Psychosocial Research II: Structural Models December 4 and 6, 2006 Latent Class Regresson Statstcs for Psychosocal Research II: Structural Models December 4 and 6, 2006 Latent Class Regresson (LCR) What s t and when do we use t? Recall the standard latent class model

More information

Dynamic Resource Allocation for MapReduce with Partitioning Skew

Dynamic Resource Allocation for MapReduce with Partitioning Skew Ths artcle has been accepted for publcaton n a future ssue of ths journal, but has not been fully edted. Content may change pror to fnal publcaton. Ctaton nformaton: DOI 1.119/TC.216.253286, IEEE Transactons

More information

1. Math 210 Finite Mathematics

1. Math 210 Finite Mathematics 1. ath 210 Fnte athematcs Chapter 5.2 and 5.3 Annutes ortgages Amortzaton Professor Rchard Blecksmth Dept. of athematcal Scences Northern Illnos Unversty ath 210 Webste: http://math.nu.edu/courses/math210

More information

Risk-based Fatigue Estimate of Deep Water Risers -- Course Project for EM388F: Fracture Mechanics, Spring 2008

Risk-based Fatigue Estimate of Deep Water Risers -- Course Project for EM388F: Fracture Mechanics, Spring 2008 Rsk-based Fatgue Estmate of Deep Water Rsers -- Course Project for EM388F: Fracture Mechancs, Sprng 2008 Chen Sh Department of Cvl, Archtectural, and Envronmental Engneerng The Unversty of Texas at Austn

More information

Methodology to Determine Relationships between Performance Factors in Hadoop Cloud Computing Applications

Methodology to Determine Relationships between Performance Factors in Hadoop Cloud Computing Applications Methodology to Determne Relatonshps between Performance Factors n Hadoop Cloud Computng Applcatons Lus Eduardo Bautsta Vllalpando 1,2, Alan Aprl 1 and Alan Abran 1 1 Department of Software Engneerng and

More information

In some supply chains, materials are ordered periodically according to local information. This paper investigates

In some supply chains, materials are ordered periodically according to local information. This paper investigates MANUFACTURING & SRVIC OPRATIONS MANAGMNT Vol. 12, No. 3, Summer 2010, pp. 430 448 ssn 1523-4614 essn 1526-5498 10 1203 0430 nforms do 10.1287/msom.1090.0277 2010 INFORMS Improvng Supply Chan Performance:

More information