Dominant Resource Fairness in Cloud Computing Systems with Heterogeneous Servers


Wei Wang, Baochun Li, Ben Liang
Department of Electrical and Computer Engineering, University of Toronto
arXiv:1308.0083v1 [cs.DC] 1 Aug 2013

Abstract: We study the multi-resource allocation problem in cloud computing systems where the resource pool is constructed from a large number of heterogeneous servers, representing different points in the configuration space of resources such as processing, memory, and storage. We design a multi-resource allocation mechanism, called DRFH, that generalizes the notion of Dominant Resource Fairness (DRF) from a single server to multiple heterogeneous servers. DRFH provides a number of highly desirable properties. With DRFH, no user prefers the allocation of another user; no one can improve its allocation without decreasing that of the others; and more importantly, no user has an incentive to lie about its resource demand. As a direct application, we design a simple heuristic that implements DRFH in real-world systems. Large-scale simulations driven by Google cluster traces show that DRFH significantly outperforms the traditional slot-based scheduler, leading to much higher resource utilization with substantially shorter job completion times.

I. INTRODUCTION

Resource allocation under the notion of fairness and efficiency is a fundamental problem in the design of cloud computing systems. Unlike traditional application-specific clusters and grids, a cloud computing system distinguishes itself with unprecedented server and workload heterogeneity. Modern datacenters are likely to be constructed from a variety of server classes, with different configurations in terms of processing capabilities, memory sizes, and storage spaces [1]. Asynchronous hardware upgrades, such as adding new servers and phasing out existing ones, further aggravate such diversity, leading to a wide range of server specifications in a cloud computing system [2]. Table I illustrates the heterogeneity of servers in one of Google's clusters [2], [3].
In addition to server heterogeneity, cloud computing systems also represent much higher diversity in resource demand profiles. Depending on the underlying applications, the workload spanning multiple cloud users may require vastly different amounts of resources (e.g., CPU, memory, and storage). For example, numerical computing tasks are usually CPU intensive, while database operations typically require high-memory support. The heterogeneity of both servers and workload demands poses significant technical challenges on the resource allocation mechanism, giving rise to many delicate issues, notably fairness and efficiency, that must be carefully addressed.

Despite the unprecedented heterogeneity in cloud computing systems, state-of-the-art computing frameworks employ rather simple abstractions that fall short. For example, Hadoop [4] and Dryad [5], the two most widely deployed cloud computing frameworks, partition a server's resources into bundles, known as slots, that contain fixed amounts of different resources. The system then allocates resources to users at the granularity of these slots. Such a single resource abstraction ignores the heterogeneity of both server specifications and demand profiles, inevitably leading to a fairly inefficient allocation [6].

TABLE I: CONFIGURATIONS OF SERVERS IN ONE OF GOOGLE'S CLUSTERS [2], [3]. CPU AND MEMORY UNITS ARE NORMALIZED TO THE MAXIMUM SERVER (HIGHLIGHTED BELOW). Columns: Number of servers, CPUs, Memory. (The rows of the table are not recoverable from this transcription.)

Towards addressing the inefficiency of the current allocation system, many recent works focus on multi-resource allocation mechanisms. Notably, Ghodsi et al. [6] suggest a compelling alternative known as the Dominant Resource Fairness (DRF) allocation, in which each user's dominant share, the maximum ratio of any resource that the user has been allocated in a server, is equalized. The DRF allocation possesses a set of highly desirable fairness properties, and has quickly received significant attention in the literature [7], [8], [9], [10].
While DRF and its subsequent works address the demand heterogeneity of multiple resources, they all ignore the heterogeneity of servers, limiting the discussion to a hypothetical scenario where all resources are concentrated in one super computer¹. Such an all-in-one resource model drastically contrasts with the state-of-the-practice infrastructure of cloud computing systems. In fact, with heterogeneous servers, even the definition of dominant resource is unclear: depending on the underlying server configurations, a computing task may bottleneck on different resources in different servers. We shall note that naive extensions, such as applying the DRF allocation to each server separately, lead to a highly inefficient allocation (details in Sec. III-D).

This paper represents the first rigorous study to propose a solution with provable operational benefits that bridges the gap between the existing multi-resource allocation models and the prevalent datacenter infrastructure. We propose DRFH, a generalization of the DRF mechanism in heterogeneous environments where resources are pooled by a large number of heterogeneous servers, representing different points in the configuration space of resources such as processing, memory, and storage. DRFH generalizes the intuition of DRF by seeking an allocation that equalizes every user's global dominant share, which is the maximum ratio of any resource the user has been allocated in the entire cloud resource pool. We systematically analyze DRFH and show that it retains most of the desirable properties that the all-in-one DRF model provides [6]. Specifically, DRFH is Pareto optimal: no user is able to increase its allocation without decreasing other users' allocations. Meanwhile, DRFH is envy-free: no user prefers the allocation of another user. More importantly, DRFH is truthful: a user cannot schedule more computing tasks by claiming more resources than it needs, and hence has no incentive to misreport its actual resource demand. DRFH also satisfies a set of other important properties, namely single-server DRF, single-resource fairness, bottleneck fairness, and population monotonicity (details in Sec. III-C). As a direct application, we design a heuristic scheduling algorithm that implements DRFH in real-world systems. We conduct large-scale simulations driven by Google cluster traces [3]. Our simulation results show that, compared to the traditional slot schedulers adopted in prevalent cloud computing frameworks, the DRFH algorithm suitably matches demand heterogeneity to server heterogeneity, significantly improving the system's resource utilization, yet with a substantial reduction of job completion times.

¹ While [6] briefly touches on the case where resources are distributed in small servers (known as the discrete scenario), its coverage is rather informal.

II. RELATED WORK

Despite the extensive computing system literature on fair resource allocation, much of the existing work limits its discussion to the allocation of a single resource type, e.g., CPU time [11], [12] and link bandwidth [13], [14], [15], [16], [17]. Various fairness notions have also been proposed throughout the years, ranging from application-specific allocations [18], [19] to general fairness measures [13], [20], [21].
As for multi-resource allocation, state-of-the-art cloud computing systems employ naive single resource abstractions. For example, the two fair sharing schedulers currently supported in Hadoop [22], [23] partition a node into slots with fixed fractions of resources, and allocate resources jointly at the slot granularity. Quincy [24], a fair scheduler developed for Dryad [5], models the fair scheduling problem as a min-cost flow problem to schedule jobs into slots. The recent work [25] takes job placement constraints into consideration, yet it still uses a slot-based single resource abstraction.

Ghodsi et al. [6] are the first in the literature to present a systematic investigation of the multi-resource allocation problem in cloud computing systems. They propose DRF to equalize the dominant shares of all users, and show that a number of desirable fairness properties are guaranteed in the resulting allocation. DRF has quickly attracted a substantial amount of attention and has been generalized in many dimensions. Notably, Joe-Wong et al. [7] generalize the DRF measure and incorporate it into a unifying framework that captures the trade-offs between allocation fairness and efficiency. Dolev et al. [8] suggest another notion of fairness for multi-resource allocation, known as Bottleneck-Based Fairness (BBF), under which two fairness properties that DRF possesses are also guaranteed. Gutman and Nisan [9] consider another setting of DRF with a more general domain of user utilities, and show its connection to the BBF mechanism. Parkes et al. [10], on the other hand, extend DRF in several ways, including the presence of zero demands for certain resources, weighted user endowments, and in particular the case of indivisible tasks. They also study the loss of social welfare under the DRF rules. More recently, the ongoing work of Kash et al. [26] extends the DRF model to a dynamic setting where users may join the system over time but will never leave.
Though motivated by the resource allocation problem in cloud computing systems, all the works above restrict their discussions to a hypothetical scenario where the resource pool contains only one big server, which is not the case in state-of-the-practice datacenter systems. Other related works include fair-division problems in the economics literature, in particular the egalitarian division under Leontief preferences [27] and the cake-cutting problem [28]. However, these works also assume the all-in-one resource model, and hence cannot be directly applied to cloud computing systems with heterogeneous servers.

III. SYSTEM MODEL AND ALLOCATION PROPERTIES

In this section, we model multi-resource allocation in a cloud computing system with heterogeneous servers. We formalize a number of desirable properties that are deemed the most important for allocation mechanisms in cloud computing environments.

A. Basic Setting

In a cloud computing system, the resource pool is composed of a cluster of heterogeneous servers S = {1, ..., k}, each contributing m hardware resources (e.g., CPU, memory, storage) denoted by R = {1, ..., m}. For each server l, let c_l = (c_l1, ..., c_lm)^T be its resource capacity vector, where each element c_lr denotes the total amount of resource r available in server l. Without loss of generality, for every resource r, we normalize the total capacity of all servers to 1, i.e.,

    Σ_{l∈S} c_lr = 1,  r = 1, 2, ..., m.

Let U = {1, ..., n} be the set of cloud users sharing the cloud system. For every user i, let D_i = (D_i1, ..., D_im)^T be its resource demand vector, where D_ir is the fraction (share) of resource r required by each task of user i over the entire system. For simplicity, we assume positive demands for all users, i.e., D_ir > 0 for all i ∈ U and r ∈ R. We say resource r* is the global dominant resource of user i if

    r* ∈ argmax_{r∈R} D_ir.

In other words, r* is the most heavily demanded resource required by user i's task in the entire resource pool. For every user i and resource r, we define

    d_ir = D_ir / D_ir*

as the normalized demand, and denote by d_i = (d_i1, ..., d_im)^T the normalized demand vector of user i.

Fig. 1. An example of a system containing two heterogeneous servers shared by two users. Server 1 has 2 CPUs and 12 GB memory; server 2 has 12 CPUs and 2 GB memory. Each computing task of user 1 requires 0.2 CPU time and 1 GB memory, while each computing task of user 2 requires 1 CPU time and 0.2 GB memory.

As a concrete example, consider Fig. 1, where the system contains two heterogeneous servers. Server 1 is high-memory with 2 CPUs and 12 GB memory, while server 2 is high-CPU with 12 CPUs and 2 GB memory. Since the system contains 14 CPUs and 14 GB memory in total, the normalized capacity vectors of servers 1 and 2 are c_1 = (CPU share, memory share)^T = (1/7, 6/7)^T and c_2 = (6/7, 1/7)^T, respectively. Now suppose there are two users. User 1 has memory-intensive tasks each requiring 0.2 CPU time and 1 GB memory, while user 2 has CPU-heavy tasks each requiring 1 CPU time and 0.2 GB memory. The demand vector of user 1 is D_1 = (1/70, 1/14)^T and its normalized vector is d_1 = (1/5, 1)^T, where memory is the global dominant resource. Similarly, user 2 has D_2 = (1/14, 1/70)^T and d_2 = (1, 1/5)^T, and CPU is its global dominant resource. For now, we assume users have an infinite number of tasks to be scheduled, and all tasks are divisible [6], [8], [9], [10], [26]. We will discuss how these assumptions can be relaxed in Sec. V.

B. Resource Allocation

For every user i and server l, let A_il = (A_il1, ..., A_ilm)^T be the resource allocation vector, where A_ilr is the share of resource r allocated to user i in server l. Let A_i = (A_i1, ..., A_ik) be the allocation matrix of user i, and A = (A_1, ..., A_n) the overall allocation for all users. We say an allocation A is feasible if no server is required to use more than any of its total resources, i.e.,

    Σ_{i∈U} A_ilr ≤ c_lr,  ∀l ∈ S, r ∈ R.

For every user i, given allocation A_il in server l, the maximum number of tasks (possibly fractional) that it can schedule is calculated as

    N_il(A_il) = min_{r∈R} {A_ilr / D_ir}.
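To make these definitions concrete, the following sketch (our own illustration in Python; the helper names are not from the paper) computes D_i, the normalized demand d_i, the global dominant resource r*, and the per-server task count N_il for user 1 of the example in Fig. 1 (a 14-CPU, 14-GB pool; one task of user 1 needs 0.2 CPU and 1 GB):

```python
def profile(task, totals):
    """Return (D_i, d_i, r*) for a per-task demand given system-wide totals."""
    D = {r: task[r] / totals[r] for r in task}   # D_ir: share of resource r per task
    r_star = max(D, key=D.get)                   # global dominant resource
    d = {r: D[r] / D[r_star] for r in D}         # d_ir = D_ir / D_ir*
    return D, d, r_star

def tasks_in_server(A_il, D):
    """N_il(A_il) = min_r A_ilr / D_ir: tasks user i can schedule in server l."""
    return min(A_il[r] / D[r] for r in D)

totals = {"cpu": 14.0, "mem": 14.0}
D1, d1, r1 = profile({"cpu": 0.2, "mem": 1.0}, totals)
print(r1, d1)                # memory is dominant, d_1 = (1/5, 1)
# Hand user 1 all of server 1 (2 CPUs, 12 GB, i.e., shares (1/7, 6/7)):
n1 = tasks_in_server({"cpu": 2 / 14, "mem": 12 / 14}, D1)
print(n1)                    # CPU-limited: 10 tasks
```

User 1 is memory-dominant, yet on the high-memory server 1 it is the CPU that caps its task count, which is exactly the server-dependent bottleneck behaviour discussed in Sec. I.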
The total number of tasks user i can schedule under allocation A_i is hence

    N_i(A_i) = Σ_{l∈S} N_il(A_il).   (1)

Intuitively, a user prefers an allocation that allows it to schedule more tasks. A well-justified allocation should never give a user more resources than it can actually use in a server. Following the terminology used in the economics literature [27], we call such an allocation non-wasteful:

Definition 1: For user i and server l, an allocation A_il is non-wasteful if taking out any resources reduces the number of tasks scheduled, i.e., for all A′_il ≺ A_il², we have

    N_il(A′_il) < N_il(A_il).

User i's allocation A_i = (A_il) is non-wasteful if A_il is non-wasteful for every server l, and allocation A = (A_i) is non-wasteful if A_i is non-wasteful for every user i.

Note that one can always convert an allocation to a non-wasteful one by revoking those resources that are allocated but never actually used, without changing the number of tasks scheduled for any user. Therefore, unless otherwise specified, we limit the discussion to non-wasteful allocations.

C. Allocation Mechanism and Desirable Properties

A resource allocation mechanism takes user demands as input and outputs the allocation result. In general, an allocation mechanism should provide the following essential properties that are widely recognized as the most important fairness and efficiency measures in both cloud computing systems [6], [7], [25] and the economics literature [27], [28].

Envy-freeness: An allocation mechanism is envy-free if no user prefers another's allocation to its own, i.e., N_i(A_i) ≥ N_i(A_j) for any two users i, j ∈ U. This property essentially embodies the notion of fairness.

Pareto optimality: An allocation mechanism is Pareto optimal if it returns an allocation A such that for all feasible allocations A′, if N_i(A′_i) > N_i(A_i) for some user i, then there exists a user j such that N_j(A′_j) < N_j(A_j). In other words, there is no other allocation where all users are at least as well off and at least one user is strictly better off.
This property ensures allocation efficiency and is critical for high resource utilization.

Truthfulness: An allocation mechanism is truthful if no user can schedule more tasks by misreporting its resource demand (assuming a user's demand is its private information), irrespective of the behaviour of other users. Specifically, given the demands claimed by other users, let A be the resulting allocation when user i truthfully reports its resource demand D_i, and let A′ be the allocation returned when user i misreports as D′_i ≠ D_i. Then under a truthful mechanism we have N_i(A_i) ≥ N_i(A′_i). Truthfulness is of special importance for a cloud computing system, as it is common to observe in real-world systems that users try to lie about their resource demands to manipulate the schedulers for more allocation [6], [25].

In addition to these essential properties, we also consider four other important properties below:

Single-server DRF: If the system contains only one server, then the resulting allocation should reduce to the DRF allocation.

² For any two vectors x and y, we say x ≺ y if x ≤ y and for some j we have strict inequality: x_j < y_j.

Fig. 2. DRF allocation for the example shown in Fig. 1, where user 1 is allocated 5 tasks in server 1 and 1 in server 2, while user 2 is allocated 1 task in server 1 and 5 in server 2.

Single-resource fairness: If there is a single resource in the system, then the resulting allocation should reduce to a max-min fair allocation.

Bottleneck fairness: If all users bottleneck on the same resource (i.e., have the same global dominant resource), then the resulting allocation should reduce to a max-min fair allocation for that resource.

Population monotonicity: If a user leaves the system and relinquishes all its allocations, then the remaining users will not see any reduction in the number of tasks scheduled.

In addition to the aforementioned properties, sharing incentive is another important property that has been frequently mentioned in the literature [6], [7], [8], [10]. It ensures that every user's allocation is not worse off than that obtained by evenly dividing the entire resource pool. While this property is well defined for a single server, it is not for a system containing multiple heterogeneous servers, as there is an infinite number of ways to evenly divide the resource pool among users, and it is unclear which one should be chosen as a benchmark. We defer the discussion to Sec. IV-D, where we compare two possible alternatives. For now, our objective is to design an allocation mechanism that guarantees all the properties defined above.

D. Naive DRF Extension and Its Inefficiency

It has been shown in [6], [10] that the DRF allocation satisfies all the desirable properties mentioned above when there is only one server in the system. The key intuition is to equalize the fraction of dominant resources allocated to each user in the server. When resources are distributed across many heterogeneous servers, a naive generalization is to separately apply the DRF allocation per server. Since servers are heterogeneous, a user might have different dominant resources in different servers.
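The inefficiency of this naive extension, which the example below walks through in detail, can also be reproduced numerically. In the sketch (our own illustration; the allocations are hard-coded in global shares, with per-server DRF giving each user 1 of server 1's CPUs and 1 GB of server 2's memory), per-server DRF yields 6 tasks per user, while exclusive assignment yields 10:

```python
# Per-task demands as system-wide shares of the 14-CPU, 14-GB pool of Fig. 1:
D1 = {"cpu": 0.2 / 14, "mem": 1.0 / 14}   # user 1: 0.2 CPU, 1 GB per task
D2 = {"cpu": 1.0 / 14, "mem": 0.2 / 14}   # user 2: 1 CPU, 0.2 GB per task

def tasks(alloc, D):
    """Sum over servers of N_il = min_r (allocated share / demanded share)."""
    return sum(min(a[r] / D[r] for r in D) for a in alloc)

# Per-server DRF: each user gets 1 of server 1's 2 CPUs plus the memory its
# tasks there need, and 1 of server 2's 2 GB plus the CPU needed (non-wasteful).
drf_u1 = [{"cpu": 1 / 14, "mem": 5 / 14}, {"cpu": 0.2 / 14, "mem": 1 / 14}]
drf_u2 = [{"cpu": 1 / 14, "mem": 0.2 / 14}, {"cpu": 5 / 14, "mem": 1 / 14}]
# Exclusive assignment: server 1 to user 1, server 2 to user 2 (non-wasteful).
excl_u1 = [{"cpu": 2 / 14, "mem": 10 / 14}]
excl_u2 = [{"cpu": 10 / 14, "mem": 2 / 14}]

t1, t2 = tasks(drf_u1, D1), tasks(drf_u2, D2)
e1, e2 = tasks(excl_u1, D1), tasks(excl_u2, D2)
print(t1, t2)   # 6 tasks each under per-server DRF
print(e1, e2)   # 10 tasks each under exclusive assignment
```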
For instance, in the example of Fig. 1, user 1's dominant resource in server 1 is CPU, while its dominant resource in server 2 is memory. Now apply DRF in server 1. Because CPU is also user 2's dominant resource there, the DRF allocation lets both users have an equal share of the server's CPUs, each allocated 1 CPU. As a result, user 1 schedules 5 tasks onto server 1, while user 2 schedules 1 onto the same server. Similarly, in server 2, memory is the dominant resource of both users and is evenly allocated, leading to 1 task scheduled for user 1 and 5 for user 2. The resulting allocations in the two servers are illustrated in Fig. 2, where both users schedule 6 tasks.

Unfortunately, this allocation violates Pareto optimality and is highly inefficient. If we instead allocate server 1 exclusively to user 1, and server 2 exclusively to user 2, then both users schedule 10 tasks, more than the 6 scheduled under the per-server DRF allocation. In fact, naively applying DRF per server may lead to an allocation with arbitrarily low resource utilization. The failure of the naive DRF extension in the heterogeneous environment necessitates an alternative allocation mechanism, which is the main theme of the next section.

IV. DRFH ALLOCATION AND ITS PROPERTIES

In this section, we describe DRFH, a generalization of DRF in a heterogeneous cloud computing system where resources are distributed over a number of heterogeneous servers. We analyze DRFH and show that it provides all the desirable properties defined in Sec. III.

A. DRFH Allocation

Instead of allocating separately in each server, DRFH jointly considers resource allocation across all heterogeneous servers. The key intuition is to achieve the max-min fair allocation for the global dominant resources. Specifically, given allocation A_il, let G_il(A_il) be the fraction of the global dominant resource user i receives in server l, i.e.,

    G_il(A_il) = N_il(A_il) D_ir* = min_{r∈R} {A_ilr / d_ir}.   (2)

We call G_il(A_il) the global dominant share user i receives in server l under allocation A_il.
Therefore, given the overall allocation A_i, the global dominant share user i receives is

    G_i(A_i) = Σ_{l∈S} G_il(A_il) = Σ_{l∈S} min_{r∈R} {A_ilr / d_ir}.   (3)

The DRFH allocation aims to maximize the minimum global dominant share among all users, subject to the resource constraints per server, i.e.,

    max_A min_{i∈U} G_i(A_i)
    s.t. Σ_{i∈U} A_ilr ≤ c_lr, ∀l ∈ S, r ∈ R.   (4)

Recall that, without loss of generality, we assume a non-wasteful allocation A (see Sec. III-B). We have the following structural result.

Lemma 1: For user i and server l, an allocation A_il is non-wasteful if and only if there exists some g_il such that A_il = g_il d_i. In particular, g_il is the global dominant share user i receives in server l under allocation A_il, i.e., g_il = G_il(A_il).

Proof: (⇒) We start with the necessity proof. Since A_il = g_il d_i, for every resource r ∈ R, we have

    A_ilr / D_ir = g_il d_ir / D_ir = g_il / D_ir*.

As a result,

    N_il(A_il) = min_{r∈R} {A_ilr / D_ir} = g_il / D_ir*.

Now for any A′_il ≺ A_il, suppose A′_ilr0 < A_ilr0 for some resource r0. We have

    N_il(A′_il) = min_{r∈R} {A′_ilr / D_ir} ≤ A′_ilr0 / D_ir0 < A_ilr0 / D_ir0 = N_il(A_il).

Hence, by definition, allocation A_il is non-wasteful.

(⇐) We next present the sufficiency proof. Since A_il is non-wasteful, for any two resources r1, r2 ∈ R, we must have A_ilr1 / D_ir1 = A_ilr2 / D_ir2. Otherwise, without loss of generality, suppose A_ilr1 / D_ir1 > A_ilr2 / D_ir2. There must exist some ε > 0 such that (A_ilr1 − ε) / D_ir1 > A_ilr2 / D_ir2. Now construct an allocation A′_il such that

    A′_ilr = A_ilr1 − ε, if r = r1;  A′_ilr = A_ilr, otherwise.

Clearly, A′_il ≺ A_il. However, it is easy to see that

    N_il(A′_il) = min_{r∈R} {A′_ilr / D_ir} = min_{r≠r1} {A′_ilr / D_ir} = min_{r≠r1} {A_ilr / D_ir} = min_{r∈R} {A_ilr / D_ir} = N_il(A_il),

which contradicts the fact that A_il is non-wasteful. As a result, there exists some n_il such that for all resources r ∈ R, we have A_ilr = n_il D_ir = n_il D_ir* d_ir. Now letting g_il = n_il D_ir*, we see A_il = g_il d_i.

Intuitively, Lemma 1 indicates that under a non-wasteful allocation, resources are allocated in proportion to the user's demand. Lemma 1 immediately suggests the following relationship for every user i and its non-wasteful allocation A_i:

    G_i(A_i) = Σ_{l∈S} G_il(A_il) = Σ_{l∈S} g_il.   (5)

Problem (4) can hence be equivalently written as

    max_{{g_il}} min_{i∈U} Σ_{l∈S} g_il
    s.t. Σ_{i∈U} g_il d_ir ≤ c_lr, ∀l ∈ S, r ∈ R,   (6)

where the constraints are derived from Lemma 1. Now let g = min_{i∈U} Σ_{l∈S} g_il. Via straightforward algebraic operations, we see that (6) is equivalent to the following problem:

    max_{{g_il}} g
    s.t. Σ_{i∈U} g_il d_ir ≤ c_lr, ∀l ∈ S, r ∈ R,
         Σ_{l∈S} g_il = g, ∀i ∈ U.   (7)

Note that the second constraint ensures fairness with respect to the equalized global dominant share g. By solving (7), DRFH allocates each user the maximum global dominant share g, under the constraints of both server capacity and fairness.

Fig. 3. An alternative allocation with higher system utilization for the example of Fig. 1. Servers 1 and 2 are exclusively assigned to users 1 and 2, respectively. Both users schedule 10 tasks, each with a global dominant share of 5/7.
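Once the demands are fixed, (7) is a linear program in the variables {g_il} and g, so it can be solved with an off-the-shelf LP solver. The sketch below (our own illustration, assuming SciPy is available) recovers g = 5/7 for the two-server example of Fig. 1:

```python
import numpy as np
from scipy.optimize import linprog

# Running example: capacities c_lr and normalized demands d_ir over (CPU, mem).
c = np.array([[1 / 7, 6 / 7],    # server 1: 2 CPUs, 12 GB of the 14/14 totals
              [6 / 7, 1 / 7]])   # server 2: 12 CPUs, 2 GB
d = np.array([[0.2, 1.0],        # user 1 (memory is globally dominant)
              [1.0, 0.2]])       # user 2 (CPU is globally dominant)

n, (k, m) = d.shape[0], c.shape
nv = n * k + 1                               # variables: g_11..g_nk, then g
cost = np.zeros(nv); cost[-1] = -1.0         # linprog minimizes, so minimize -g
A_ub = np.zeros((k * m, nv)); b_ub = np.zeros(k * m)
for l in range(k):
    for r in range(m):
        for i in range(n):                   # capacity: sum_i g_il d_ir <= c_lr
            A_ub[l * m + r, i * k + l] = d[i, r]
        b_ub[l * m + r] = c[l, r]
A_eq = np.zeros((n, nv))                     # fairness: sum_l g_il = g, every i
for i in range(n):
    A_eq[i, i * k:(i + 1) * k] = 1.0
    A_eq[i, -1] = -1.0
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.zeros(n),
              bounds=[(0, None)] * nv)
g = res.x[-1]
print(g)   # 5/7: each user ends up on its best-matched server
```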
By Lemma 1, the allocation received by each user i in server l is simply A_il = g_il d_i. For example, Fig. 3 illustrates the resulting DRFH allocation for the example of Fig. 1. By solving (7), DRFH allocates server 1 exclusively to user 1 and server 2 exclusively to user 2, allowing each user to schedule 10 tasks with the maximum global dominant share g = 5/7. We next analyze the properties of the DRFH allocation obtained by solving (7) in the following two subsections.

B. Analysis of Essential Properties

Our analysis of DRFH starts with the three essential resource allocation properties, namely, envy-freeness, Pareto optimality, and truthfulness. We first show that under the DRFH allocation, no user prefers another's allocation to its own.

Proposition 1 (Envy-freeness): The DRFH allocation obtained by solving (7) is envy-free.

Proof: Let {g_il} be the solution to problem (7). For every user i, its DRFH allocation in server l is A_il = g_il d_i. To show N_i(A_i) ≥ N_i(A_j) for any two users i and j, it is equivalent to prove G_i(A_i) ≥ G_i(A_j). We have

    G_i(A_j) = Σ_l G_il(A_jl) = Σ_l min_r {g_jl d_jr / d_ir} ≤ Σ_l g_jl = G_i(A_i),

where the inequality holds because min_r {d_jr / d_ir} ≤ d_jr* / d_ir* ≤ 1, with r* being user i's global dominant resource (so that d_ir* = 1), and the last equality holds because Σ_l g_jl = Σ_l g_il = g under the fairness constraint of (7).

We next show that DRFH leads to an efficient allocation under which no user can improve its allocation without decreasing that of the others.

Proposition 2 (Pareto optimality): The DRFH allocation obtained by solving (7) is Pareto optimal.

Proof: Let {g_il}, and the corresponding g, be the solution to problem (7). For every user i, its DRFH allocation in server l is A_il = g_il d_i. Since (6) and (7) are equivalent, {g_il} also solves (6), with g being the maximum value of the objective of (6). Assume, by way of contradiction, that allocation A is not Pareto optimal, i.e., there exists some allocation A′ such

that N_i(A′_i) ≥ N_i(A_i) for every user i, and for some user j we have strict inequality: N_j(A′_j) > N_j(A_j). Equivalently, this implies that G_i(A′_i) ≥ G_i(A_i) for every user i, and G_j(A′_j) > G_j(A_j) for user j. Without loss of generality, let A′ be non-wasteful. By Lemma 1, for every user i and server l, there exists some g′_il such that A′_il = g′_il d_i. We show that based on {g′_il}, one can construct some {ĝ_il} such that {ĝ_il} is a feasible solution to (6) yet leads to a higher objective than g, contradicting the fact that {g_il} optimally solves (6).

To see this, consider user j. We have

    G_j(A_j) = Σ_l g_jl = g < G_j(A′_j) = Σ_l g′_jl.

For user j, there exist a server l0 and some ε > 0 such that, after reducing g′_jl0 to g′_jl0 − ε, the resulting global dominant share remains no lower than g, i.e., Σ_l g′_jl − ε ≥ g. This leaves at least ε d_j idle resources in server l0. We construct {ĝ_il} by redistributing these idle resources to all users. Denote by {g̃_il} the dominant shares after reducing g′_jl0 to g′_jl0 − ε, i.e.,

    g̃_il = g′_jl0 − ε, if i = j and l = l0;  g̃_il = g′_il, otherwise.

The corresponding non-wasteful allocation is Ã_il = g̃_il d_i for every user i and server l. Note that allocation Ã is weakly preferred over the original allocation A by all users, i.e., for every user i,

    G_i(Ã_i) = Σ_l g̃_il = Σ_l g′_jl − ε ≥ g = G_j(A_j), if i = j;
    G_i(Ã_i) = Σ_l g′_il = G_i(A′_i) ≥ G_i(A_i), otherwise.

We now construct {ĝ_il} by redistributing the ε d_j idle resources in server l0 to all users, each increasing its global dominant share g̃_il0 by δ = min_r {ε d_jr / Σ_{i∈U} d_ir}, i.e.,

    ĝ_il = g̃_il + δ, if l = l0;  ĝ_il = g̃_il, otherwise.

It is easy to check that {ĝ_il} remains a feasible allocation. To see this, it suffices to check server l0. For each of its resources r, we have

    Σ_i ĝ_il0 d_ir = Σ_i (g̃_il0 + δ) d_ir = Σ_i g′_il0 d_ir − ε d_jr + δ Σ_i d_ir ≤ c_l0r − (ε d_jr − δ Σ_i d_ir) ≤ c_l0r,

where the first inequality holds because A′ is a feasible allocation and the second holds by the choice of δ. On the other hand, for every user i ∈ U, we have

    Σ_l ĝ_il = Σ_l g̃_il + δ = G_i(Ã_i) + δ ≥ G_i(A_i) + δ > g.

This contradicts the premise that g is optimal for (6).

For now, all our discussions are based on a critical assumption that all users truthfully report their resource demands.
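The role of this assumption can be illustrated numerically before the result is stated formally. In the sketch below (our own illustration, assuming SciPy; `drfh_share` is our name for an LP solver of problem (7)), user 1 of the running example inflates its CPU claim from d_1 = (0.2, 1) to (1, 1); since allocations follow the claimed demand, its real dominant share collapses from 5/7 to 5/21:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([[1 / 7, 6 / 7], [6 / 7, 1 / 7]])   # server capacity shares (Fig. 1)

def drfh_share(d):
    """Solve (7): max g s.t. sum_i g_il d_ir <= c_lr and sum_l g_il = g."""
    n, (k, m) = d.shape[0], c.shape
    nv = n * k + 1
    cost = np.zeros(nv); cost[-1] = -1.0         # linprog minimizes -g
    A_ub = np.zeros((k * m, nv)); b_ub = np.zeros(k * m)
    for l in range(k):
        for r in range(m):
            for i in range(n):
                A_ub[l * m + r, i * k + l] = d[i, r]
            b_ub[l * m + r] = c[l, r]
    A_eq = np.zeros((n, nv))
    for i in range(n):
        A_eq[i, i * k:(i + 1) * k] = 1.0
        A_eq[i, -1] = -1.0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                  b_eq=np.zeros(n), bounds=[(0, None)] * nv)
    return res.x[-1]

d_true = np.array([[0.2, 1.0], [1.0, 0.2]])
d_lie = np.array([[1.0, 1.0], [1.0, 0.2]])       # user 1 inflates its CPU claim
g_true, g_lie = drfh_share(d_true), drfh_share(d_lie)
# User 1's real dominant share after lying is min_r{d'_1r / d_1r} * g_lie.
real = float(np.min(d_lie[0] / d_true[0])) * g_lie
print(g_true, real)   # lying shrinks user 1's real share (5/7 -> 5/21)
```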
However, in a real-world system, it is common to observe users attempting to manipulate the scheduler by misreporting their resource demands, so as to receive more allocation [6], [25]. More often than not, these strategic behaviours significantly hurt honest users and reduce the number of their tasks scheduled, inevitably leading to a fairly inefficient allocation outcome. Fortunately, we show by the following proposition that DRFH is immune to these strategic behaviours, as reporting the true demand is always the best strategy for every user, irrespective of the behaviour of the others.

Proposition 3 (Truthfulness): The DRFH allocation obtained by solving (7) is truthful.

Proof: For any user i, fix all other users' claimed demands d_−i = (d_1, ..., d_i−1, d_i+1, ..., d_n) (which may not be their true demands). Let A be the resulting allocation when i truthfully reports its demand d_i, that is, A_il = g_il d_i and A_jl = g_jl d_j for every user j ≠ i and server l, where g_il and g_jl are the global dominant shares users i and j receive in server l under A_il and A_jl, respectively. Similarly, let A′ be the resulting allocation when user i misreports its demand as d′_i. Let g and g′ be the global dominant shares user i receives under A and A′, respectively. We check the following two cases and show that G_i(A_i) ≥ G_i(A′_i), which is equivalent to N_i(A_i) ≥ N_i(A′_i).

Case 1: g′ ≤ g. In this case, let ρ = min_r {d′_ir / d_ir}. Clearly,

    ρ = min_r {d′_ir / d_ir} ≤ d′_ir* / d_ir* ≤ 1,

where r* is the global dominant resource of user i. We then have

    G_i(A′_i) = Σ_l G_il(A′_il) = min_r {d′_ir / d_ir} Σ_l g′_il = ρ g′ ≤ g = G_i(A_i).

Case 2: g′ > g. For every user j ≠ i, when user i truthfully reports its demand, let G_j(A_j, d_j) be the global dominant share of user j w.r.t. its claimed demand d_j, i.e.,

    G_j(A_j, d_j) = Σ_l min_r {g_jl d_jr / d_jr} = Σ_l g_jl = g.

Similarly, when user i misreports, let G_j(A′_j, d_j) be the global dominant share of user j w.r.t. its claimed demand d_j, i.e.,

    G_j(A′_j, d_j) = Σ_l min_r {g′_jl d_jr / d_jr} = Σ_l g′_jl = g′.

As a result,

    G_j(A′_j, d_j) > G_j(A_j, d_j), ∀j ≠ i.

We must have G_i(A′_i) < G_i(A_i).
Otherwise, allocation A′ would be weakly preferred over A by all users and strictly preferred by every user j ≠ i, w.r.t. the claimed demands (d_i, d_−i). This contradicts the Pareto optimality of the DRFH allocation. (Recall that allocation A is a DRFH allocation given the claimed demands (d_i, d_−i).)

C. Analysis of Important Properties

In addition to the three essential properties shown in the previous subsection, DRFH also provides a number of other important properties. First, since DRFH generalizes DRF to heterogeneous environments, it naturally reduces to the DRF allocation when the system contains only one server, in which case the global dominant resource defined in DRFH coincides exactly with the dominant resource defined in DRF.

Proposition 4 (Single-server DRF): DRFH leads to the same allocation as DRF when all resources are concentrated in one server.

Next, by definition, we see that both single-resource fairness and bottleneck fairness trivially hold for the DRFH allocation. We hence omit the proofs of the following two propositions.

Proposition 5 (Single-resource fairness): The DRFH allocation satisfies single-resource fairness.

Proposition 6 (Bottleneck fairness): The DRFH allocation satisfies bottleneck fairness.

Finally, we see that when a user leaves the system and relinquishes all its allocations, the remaining users will not see any reduction in the number of tasks scheduled. Formally,

Proposition 7 (Population monotonicity): The DRFH allocation satisfies population monotonicity.

Proof: Let A be the resulting DRFH allocation; then for every user i and server l, we have A_il = g_il d_i and G_i(A_i) = g, where {g_il} and g solve (7). Suppose user j leaves the system, changing the resulting DRFH allocation to A′. By DRFH, for every user i ≠ j and server l, we have A′_il = g′_il d_i and G_i(A′_i) = g′, where {g′_il}_{i≠j} and g′ solve the following optimization problem:

    max_{{g′_il}, i≠j} g′
    s.t. Σ_{i≠j} g′_il d_ir ≤ c_lr, ∀l ∈ S, r ∈ R,
         Σ_{l∈S} g′_il = g′, ∀i ≠ j.   (8)

To show N_i(A′_i) ≥ N_i(A_i) for every remaining user i ≠ j, it is equivalent to prove G_i(A′_i) ≥ G_i(A_i). It is easy to verify that g and {g_il}_{i≠j} satisfy all the constraints of (8) and are hence feasible to (8). As a result, g′ ≥ g, which is exactly G_i(A′_i) ≥ G_i(A_i).

D. Discussions of Sharing Incentive

In addition to the aforementioned properties, sharing incentive is another important allocation property that has been frequently mentioned in the literature, e.g., [6], [7], [8], [10], [25]. It ensures that every user's allocation is at least as good as that obtained by evenly partitioning the entire resource pool. When the system contains only a single server, this property is well defined, as evenly dividing the server's resources leads to a unique allocation.
However, for a system containing multiple heterogeneous servers, there is an infinite number of ways to evenly divide the resource pool, and it is unclear which one should be chosen as the benchmark for comparison. For example, in Fig. 1, two users share a system with 14 CPUs and 14 GB memory in total. The following two allocations both give each user 7 CPUs and 7 GB memory: (a) user 1 is allocated 1/2 of the resources of server 1 and 1/2 of the resources of server 2, while user 2 receives the rest; (b) user 1 is allocated (1.5 CPUs, 5.5 GB) in server 1 and (5.5 CPUs, 1.5 GB) in server 2, while user 2 receives the rest.

One might think that allocation (a) is a more reasonable benchmark, as it allows all n users to have an equal share of every server, each receiving 1/n of the server's resources. However, this benchmark has little practical meaning: with a large n, each user receives only a tiny fraction of the resources on each server, which likely cannot be utilized by any computing task. In other words, having a small slice of resources in each server is essentially meaningless. We therefore consider another benchmark that is more practical. Since cloud systems are constructed by pooling hundreds of thousands of servers [1], [2], the number of users is typically far smaller than the number of servers [6], [25], i.e., k ≫ n. An equal division would allocate to each user k/n servers drawn from the same distribution as the system's server configurations. For each user, the allocated k/n servers are then treated as a dedicated cloud that is exclusive to that user. The number of tasks scheduled on this dedicated cloud is then used as a benchmark and is compared to the number of tasks scheduled in the original cloud computing system shared with all other users. We will evaluate this notion of sharing incentive via trace-driven simulations in Sec. VI.

V. PRACTICAL CONSIDERATIONS

So far, all our discussions have been based on several assumptions that may not hold in a real-world system.
In this section, we relax these assumptions and discuss how DRFH can be implemented in practice.

A. Weighted Users with a Finite Number of Tasks

In the previous sections, users are assumed to be assigned equal weights and to have infinite computing demands. Both assumptions can easily be removed with minor modifications to DRFH. When users are assigned uneven weights, let $w_i$ be the weight associated with user $i$. DRFH seeks an allocation that achieves weighted max-min fairness across users. Specifically, we maximize the minimum normalized global dominant share (with respect to the weight) over all users, under the same resource constraints as in (4), i.e.,

$$\max_{A} \; \min_{i \in U} \; G_i(A_i)/w_i$$
$$\text{s.t.} \quad \sum_{i \in U} A_{ilr} \le c_{lr}, \quad \forall l \in S, r \in R.$$

When users have a finite number of tasks, the DRFH allocation is computed iteratively. In each round, DRFH increases the global dominant share allocated to all active users until one of them has all its tasks scheduled, after which that user becomes inactive and is no longer considered in subsequent allocation rounds. DRFH then starts a new iteration and repeats the allocation process above, until no user remains active or no more resources can be allocated. Our analysis in Sec. IV also extends to weighted users with a finite number of tasks.

B. Scheduling Tasks as Entities

Until now, we have assumed that all tasks are divisible. In a real-world system, however, fractional tasks may not be accepted. To schedule tasks as entities, one can apply progressive filling as a simple implementation of DRFH: whenever there is a scheduling opportunity, the scheduler accommodates the user with the lowest global dominant share, and picks the first server that fits that user's task. While this First-Fit algorithm offers a fairly good

approximation to DRFH, we propose another simple heuristic that can lead to a better allocation with higher resource utilization. Like First-Fit, this heuristic also chooses the user with the lowest global dominant share to serve. However, instead of picking the first server that fits, the heuristic chooses the server that most suitably matches user $i$'s tasks, and is hence referred to as Best-Fit DRFH. Specifically, for a user $i$ with resource demand vector $D_i = (D_{i1}, \dots, D_{im})^T$ and a server $l$ with available resource vector $\bar{c}_l = (\bar{c}_{l1}, \dots, \bar{c}_{lm})^T$, where $\bar{c}_{lr}$ is the share of resource $r$ remaining available in server $l$, we define the following heuristic function to measure the task's fitness for the server:

$$H(i, l) = \left\| D_i / \|D_i\|_1 - \bar{c}_l / \|\bar{c}_l\|_1 \right\|_1, \qquad (9)$$

where $\|\cdot\|_1$ is the $L_1$-norm. Intuitively, the smaller $H(i, l)$, the more similar the resource demand vector $D_i$ is to the server's available resource vector $\bar{c}_l$, and the better fit user $i$'s task is for server $l$. For example, a CPU-heavy task is more suitable to run in a server with more available CPU resources. Best-Fit DRFH schedules user $i$'s tasks on the server $l$ with the least $H(i, l)$. We evaluate both First-Fit DRFH and Best-Fit DRFH via trace-driven simulations in the next section.

VI. SIMULATION RESULTS

In this section, we evaluate the performance of DRFH via extensive simulations driven by Google cluster-usage traces [3]. The traces contain resource demand/usage information of over 900 users (i.e., Google services and engineers) on a cluster of 12K servers. The server configurations are summarized in Table I, where the CPUs and memory of each server are normalized so that the maximum server is 1. Each user submits computing jobs that are divided into a number of tasks, each requiring a set of resources (i.e., CPU and memory). From the traces, we extract the computing demand information (the required amount of resources and the task running time) and use it as the demand input to the allocation algorithms under evaluation.
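Before presenting the results, the Best-Fit selection rule of Eq. (9) can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the data layout (demand tuples, per-server availability lists, a `dominant_share` field per user) is our own assumption for the sketch.

```python
def fitness(demand, available):
    """Heuristic H(i, l) of Eq. (9): L1 distance between the task's
    normalized demand vector and the server's normalized availability.
    Demands and availabilities are non-negative, so the L1 norm of a
    vector is simply the sum of its entries."""
    sd, sa = sum(demand), sum(available)
    return sum(abs(d / sd - a / sa) for d, a in zip(demand, available))

def best_fit_schedule(users, servers):
    """One scheduling round of Best-Fit DRFH: serve the user with the
    lowest global dominant share, placing its task on the feasible
    server that minimizes H(i, l).  `users` is a list of dicts with
    keys 'dominant_share' and 'demand'; `servers` is a list of
    available-resource vectors, updated in place.  Returns the chosen
    server index, or None if no server fits the task."""
    user = min(users, key=lambda u: u["dominant_share"])
    feasible = [l for l, avail in enumerate(servers)
                if all(a >= d for a, d in zip(avail, user["demand"]))]
    if not feasible:
        return None  # no server can fit this user's task
    l = min(feasible, key=lambda l: fitness(user["demand"], servers[l]))
    servers[l] = [a - d for a, d in zip(servers[l], user["demand"])]
    return l

# A CPU-heavy task (0.5 CPU, 0.1 memory) lands on the CPU-rich server:
users = [{"dominant_share": 0.0, "demand": (0.5, 0.1)}]
servers = [[0.6, 0.9], [0.9, 0.2]]  # (CPU, memory) still available
print(best_fit_schedule(users, servers))  # -> 1
```

Both servers can fit the task here, but the second one wins because its availability profile is closer (in normalized $L_1$ distance) to the task's CPU-heavy demand profile.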
Dynamic allocation: Our first evaluation focuses on the allocation fairness of the proposed Best-Fit DRFH when users dynamically join and depart the system. We simulate 3 users submitting tasks with different resource requirements to a small cluster of 100 servers, whose configurations are randomly drawn from the distribution of Google cluster servers in Table I. User 1 joins the system at the beginning, requiring 0.2 CPU and 0.3 memory for each of its tasks. As shown in Fig. 4, since only user 1 is active at the beginning, it is allocated a 40% CPU share and a 62% memory share. This allocation continues until user 2 joins and submits CPU-heavy tasks, each requiring 0.5 CPU and 0.1 memory. Both users now compete for computing resources, leading to a DRFH allocation in which both users receive a 44% global dominant share. Later, user 3 starts to submit memory-intensive tasks, each requiring 0.1 CPU and 0.3 memory. The algorithm now allocates the same global dominant share of 26% to all three users, until user 1 finishes its tasks and departs. After that, only users 2 and 3 share the system, each receiving the same share of its global dominant resource. A similar process repeats until all users finish their tasks. Throughout the simulation, we see that the Best-Fit DRFH algorithm precisely achieves the DRFH allocation at all times.

Fig. 4. CPU, memory, and global dominant share for the three users on the 100-server system.

TABLE II
RESOURCE UTILIZATION OF THE SLOTS SCHEDULER WITH DIFFERENT SLOT SIZES.

  Number of Slots          CPU Utilization   Memory Utilization
  10 per maximum server    35.1%             23.4%
  12 per maximum server    42.2%             27.4%
  14 per maximum server    43.9%             28.0%
  16 per maximum server    45.4%             24.2%
  20 per maximum server    40.6%             20.0%

Resource utilization: We next evaluate the resource utilization of the proposed Best-Fit DRFH algorithm.
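The equalized shares in the dynamic-allocation experiment above (44% for two users, 26% for three) are exactly what the DRFH optimization computes: maximize a common global dominant share $g$, subject to every server's capacity, with each user's share obtained as a sum of per-server shares (cf. problem (8)). As a minimal sketch under invented numbers (the 2-user, 2-server instance below is ours for illustration, not drawn from the trace), the program is a linear program and can be solved with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

# Invented instance: 2 users, 2 servers, 2 resources (CPU, memory).
# d[i] is user i's demand per unit of global dominant share;
# c[l] is server l's capacity as a share of the total resource pool.
d = np.array([[1.0, 0.5],    # user 0: CPU-heavy
              [0.4, 1.0]])   # user 1: memory-heavy
c = np.array([[0.6, 0.4],    # server 0: CPU-rich
              [0.4, 0.6]])   # server 1: memory-rich
n_users, n_servers, n_res = 2, 2, 2

# Variables x = [g, g_00, g_01, g_10, g_11], where g_il is the dominant
# share user i obtains on server l.  Maximize g  <=>  minimize -g.
obj = np.zeros(1 + n_users * n_servers)
obj[0] = -1.0

# Capacity: sum_i g_il * d[i, r] <= c[l, r] for every server l, resource r.
A_ub, b_ub = [], []
for l in range(n_servers):
    for r in range(n_res):
        row = np.zeros_like(obj)
        for i in range(n_users):
            row[1 + i * n_servers + l] = d[i, r]
        A_ub.append(row)
        b_ub.append(c[l, r])

# Equalization: sum_l g_il = g for every user i.
A_eq, b_eq = [], []
for i in range(n_users):
    row = np.zeros_like(obj)
    row[0] = -1.0
    row[1 + i * n_servers: 1 + (i + 1) * n_servers] = 1.0
    A_eq.append(row)
    b_eq.append(0.0)

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(obj), method="highs")
print(f"equalized global dominant share g = {-res.fun:.3f}")  # -> 0.667
```

In this instance memory is the system bottleneck (the two users together need 1.5 memory units per unit of $g$ against a total of 1.0), so the equalized share settles at $g = 2/3$.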
We take 24 hours of computing demand data from the Google traces and replay it on a smaller cloud computing system of 2,000 servers, so that fairness becomes relevant. The server configurations are randomly drawn from the distribution of Google cluster servers in Table I. We compare Best-Fit DRFH with two other benchmarks: the traditional Slots scheduler, which schedules tasks onto slots of servers (e.g., the Hadoop Fair Scheduler [23]), and First-Fit DRFH, which chooses the first server that fits the task. For the former, we try different slot sizes and choose the one with the highest CPU and memory utilization. Table II summarizes our observations: dividing the maximum server (1 CPU and 1 memory in Table I) into 14 slots leads to the highest overall utilization. Fig. 5 depicts the time series of CPU and memory utilization for the three algorithms. We see that the two DRFH implementations significantly outperform the traditional Slots scheduler, with much higher resource utilization, mainly because the latter ignores the heterogeneity of both servers and workload.

Fig. 5. Time series of CPU and memory utilization for Best-Fit DRFH, First-Fit DRFH, and Slots.

Fig. 6. DRFH improvements in job completion times over the Slots scheduler: (a) CDF of job completion times; (b) completion time reduction by job size (tasks).

This observation is consistent with findings in the homogeneous environment, where all servers have the same hardware configuration [6]. As for the two DRFH implementations, we see that Best-Fit DRFH leads to uniformly higher resource utilization than the First-Fit alternative at all times. The high resource utilization of Best-Fit DRFH naturally translates into the shorter job completion times shown in Fig. 6a, which depicts the CDFs of job completion times for both Best-Fit DRFH and the Slots scheduler. Fig. 6b offers a more detailed breakdown, in which jobs are classified into 5 categories based on the number of their computing tasks, and the mean completion time reduction is computed for each category. While DRFH shows no improvement over the Slots scheduler for small jobs, a significant completion time reduction is observed for jobs containing more tasks; generally, the larger the job, the greater the improvement one may expect. Similar observations have been made in homogeneous environments [6]. Fig. 6 does not account for partially completed jobs and focuses only on those having all tasks finished under both Best-Fit and Slots. As a complementary study, Fig. 7 shows the task completion ratio (the number of tasks completed over the number of tasks submitted) for every user under the Best-Fit DRFH and Slots schedulers, respectively. The radius of each circle is scaled logarithmically to the number of tasks the user submitted. We see that Best-Fit DRFH leads to a higher task completion ratio for almost all users. Around 2% of users have all their tasks completed under Best-Fit DRFH but not under Slots.

Fig. 7.
Task completion ratio of users under the Best-Fit DRFH and Slots schedulers, respectively. Each bubble's size is logarithmic to the number of tasks the user submitted.

Fig. 8. Task completion ratio of users running on dedicated clouds (DCs) and in the shared cloud (SC). Each circle's radius is logarithmic to the number of tasks submitted.

Sharing incentive: Our final evaluation concerns the sharing incentive property of DRFH. As mentioned in Sec. IV-D, for each user, we run its computing tasks on a dedicated cloud (DC) that is a proportional subset of the original shared cloud (SC). We then compare the task completion ratio in the DC with that obtained in the SC. Fig. 8 illustrates the results. While DRFH does not guarantee 100% sharing incentive for all users, it benefits most of them by pooling their DCs together. In particular, only 2% of users see fewer tasks finished in the shared environment, and even for these users the task completion ratio decreases only slightly, as can be seen from Fig. 8.

VII. CONCLUDING REMARKS

In this paper, we study the multi-resource allocation problem in a heterogeneous cloud computing system where the resource pool is composed of a large number of servers with different configurations in terms of resources such as processing, memory, and storage. The proposed multi-resource allocation mechanism, known as DRFH, equalizes the global dominant share allocated to each user, and hence generalizes the DRF allocation from a single server to multiple heterogeneous servers. We analyze DRFH and show that it retains almost all the desirable properties that DRF provides in the single-server scenario. Notably, DRFH is envy-free, Pareto optimal, and truthful. We design a Best-Fit heuristic that implements DRFH in a real-world system. Our large-scale simulations driven by Google cluster traces show that, compared with traditional single-resource abstractions such as a slot scheduler, DRFH achieves significant improvements in resource utilization, leading to much shorter job completion times.

REFERENCES

[1] M. Armbrust, A. Fox, R.
Griffith, A. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "A view of cloud computing," Commun. ACM, vol. 53, no. 4, pp. 50-58, 2010.
[2] C. Reiss, A. Tumanov, G. Ganger, R. Katz, and M. Kozuch, "Heterogeneity and dynamicity of clouds at scale: Google trace analysis," in Proc. ACM SoCC, 2012.
[3] C. Reiss, J. Wilkes, and J. L. Hellerstein, "Google cluster-usage traces."
[4] Apache Hadoop.
[5] M. Isard, M. Budiu, Y. Yu, A. Birrell, and D. Fetterly, "Dryad: Distributed data-parallel programs from sequential building blocks," in Proc. EuroSys, 2007.
[6] A. Ghodsi, M. Zaharia, B. Hindman, A. Konwinski, S. Shenker, and I. Stoica, "Dominant resource fairness: Fair allocation of multiple resource types," in Proc. USENIX NSDI, 2011.

[7] C. Joe-Wong, S. Sen, T. Lan, and M. Chiang, "Multi-resource allocation: Fairness-efficiency tradeoffs in a unifying framework," in Proc. IEEE INFOCOM, 2012.
[8] D. Dolev, D. Feitelson, J. Halpern, R. Kupferman, and N. Linial, "No justified complaints: On fair sharing of multiple resources," in Proc. ACM ITCS, 2012.
[9] A. Gutman and N. Nisan, "Fair allocation without trade," in Proc. AAMAS, 2012.
[10] D. Parkes, A. Procaccia, and N. Shah, "Beyond dominant resource fairness: Extensions, limitations, and indivisibilities," in Proc. ACM EC, 2012.
[11] S. Baruah, J. Gehrke, and C. Plaxton, "Fast scheduling of periodic tasks on multiple resources," in Proc. IEEE IPPS, 1995.
[12] S. Baruah, N. Cohen, C. Plaxton, and D. Varvel, "Proportionate progress: A notion of fairness in resource allocation," Algorithmica, vol. 15, no. 6, pp. 600-625, 1996.
[13] F. Kelly, A. Maulloo, and D. Tan, "Rate control for communication networks: Shadow prices, proportional fairness and stability," J. Oper. Res. Soc., vol. 49, no. 3, pp. 237-252, 1998.
[14] J. Mo and J. Walrand, "Fair end-to-end window-based congestion control," IEEE/ACM Trans. Networking, vol. 8, no. 5, pp. 556-567, 2000.
[15] J. Kleinberg, Y. Rabani, and É. Tardos, "Fairness in routing and load balancing," in Proc. IEEE FOCS, 1999.
[16] J. Blanquer and B. Özden, "Fair queuing for aggregated multiple links," in Proc. ACM SIGCOMM, 2001.
[17] Y. Liu and E. Knightly, "Opportunistic fair scheduling over multiple wireless channels," in Proc. IEEE INFOCOM, 2003.
[18] C. Koksal, H. Kassab, and H. Balakrishnan, "An analysis of short-term fairness in wireless media access protocols," in Proc. ACM SIGMETRICS (poster session), 2000.
[19] M. Bredel and M. Fidler, "Understanding fairness and its impact on quality of service in IEEE 802.11," in Proc. IEEE INFOCOM, 2009.
[20] R. Jain, D. Chiu, and W. Hawe, "A quantitative measure of fairness and discrimination for resource allocation in shared computer systems," Eastern Research Laboratory, Digital Equipment Corporation, 1984.
[21] T. Lan, D. Kao, M. Chiang, and A. Sabharwal, "An axiomatic theory of fairness in network resource allocation," in Proc. IEEE INFOCOM, 2010.
[22] Hadoop Capacity Scheduler, scheduler.html.
[23] Hadoop Fair Scheduler, scheduler.html.
[24] M. Isard, V. Prabhakaran, J. Currey, U. Wieder, K. Talwar, and A. Goldberg, "Quincy: Fair scheduling for distributed computing clusters," in Proc. ACM SOSP, 2009.
[25] A. Ghodsi, M. Zaharia, S. Shenker, and I. Stoica, "Choosy: Max-min fair sharing for datacenter jobs with constraints," in Proc. ACM EuroSys, 2013.
[26] I. Kash, A. Procaccia, and N. Shah, "No agent left behind: Dynamic fair division of multiple resources," 2012.
[27] J. Li and J. Xue, "Egalitarian division under Leontief preferences," 2011, manuscript.
[28] A. D. Procaccia, "Cake cutting: Not just child's play," Commun. ACM, 2013.


More information

Dynamic Online-Advertising Auctions as Stochastic Scheduling

Dynamic Online-Advertising Auctions as Stochastic Scheduling Dynamc Onlne-Advertsng Auctons as Stochastc Schedulng Isha Menache and Asuman Ozdaglar Massachusetts Insttute of Technology {sha,asuman}@mt.edu R. Srkant Unversty of Illnos at Urbana-Champagn rsrkant@llnos.edu

More information

Answer: A). There is a flatter IS curve in the high MPC economy. Original LM LM after increase in M. IS curve for low MPC economy

Answer: A). There is a flatter IS curve in the high MPC economy. Original LM LM after increase in M. IS curve for low MPC economy 4.02 Quz Solutons Fall 2004 Multple-Choce Questons (30/00 ponts) Please, crcle the correct answer for each of the followng 0 multple-choce questons. For each queston, only one of the answers s correct.

More information

J. Parallel Distrib. Comput. Environment-conscious scheduling of HPC applications on distributed Cloud-oriented data centers

J. Parallel Distrib. Comput. Environment-conscious scheduling of HPC applications on distributed Cloud-oriented data centers J. Parallel Dstrb. Comput. 71 (2011) 732 749 Contents lsts avalable at ScenceDrect J. Parallel Dstrb. Comput. ournal homepage: www.elsever.com/locate/pdc Envronment-conscous schedulng of HPC applcatons

More information

A Programming Model for the Cloud Platform

A Programming Model for the Cloud Platform Internatonal Journal of Advanced Scence and Technology A Programmng Model for the Cloud Platform Xaodong Lu School of Computer Engneerng and Scence Shangha Unversty, Shangha 200072, Chna luxaodongxht@qq.com

More information

行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告

行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告 行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告 畫 類 別 : 個 別 型 計 畫 半 導 體 產 業 大 型 廠 房 之 設 施 規 劃 計 畫 編 號 :NSC 96-2628-E-009-026-MY3 執 行 期 間 : 2007 年 8 月 1 日 至 2010 年 7 月 31 日 計 畫 主 持 人 : 巫 木 誠 共 同

More information

Staff Paper. Farm Savings Accounts: Examining Income Variability, Eligibility, and Benefits. Brent Gloy, Eddy LaDue, and Charles Cuykendall

Staff Paper. Farm Savings Accounts: Examining Income Variability, Eligibility, and Benefits. Brent Gloy, Eddy LaDue, and Charles Cuykendall SP 2005-02 August 2005 Staff Paper Department of Appled Economcs and Management Cornell Unversty, Ithaca, New York 14853-7801 USA Farm Savngs Accounts: Examnng Income Varablty, Elgblty, and Benefts Brent

More information

Real-Time Process Scheduling

Real-Time Process Scheduling Real-Tme Process Schedulng ktw@cse.ntu.edu.tw (Real-Tme and Embedded Systems Laboratory) Independent Process Schedulng Processes share nothng but CPU Papers for dscussons: C.L. Lu and James. W. Layland,

More information

denote the location of a node, and suppose node X . This transmission causes a successful reception by node X for any other node

denote the location of a node, and suppose node X . This transmission causes a successful reception by node X for any other node Fnal Report of EE359 Class Proect Throughput and Delay n Wreless Ad Hoc Networs Changhua He changhua@stanford.edu Abstract: Networ throughput and pacet delay are the two most mportant parameters to evaluate

More information

AN APPROACH TO WIRELESS SCHEDULING CONSIDERING REVENUE AND USERS SATISFACTION

AN APPROACH TO WIRELESS SCHEDULING CONSIDERING REVENUE AND USERS SATISFACTION The Medterranean Journal of Computers and Networks, Vol. 2, No. 1, 2006 57 AN APPROACH TO WIRELESS SCHEDULING CONSIDERING REVENUE AND USERS SATISFACTION L. Bada 1,*, M. Zorz 2 1 Department of Engneerng,

More information

On File Delay Minimization for Content Uploading to Media Cloud via Collaborative Wireless Network

On File Delay Minimization for Content Uploading to Media Cloud via Collaborative Wireless Network On Fle Delay Mnmzaton for Content Uploadng to Meda Cloud va Collaboratve Wreless Network Ge Zhang and Yonggang Wen School of Computer Engneerng Nanyang Technologcal Unversty Sngapore Emal: {zh0001ge, ygwen}@ntu.edu.sg

More information

Calculation of Sampling Weights

Calculation of Sampling Weights Perre Foy Statstcs Canada 4 Calculaton of Samplng Weghts 4.1 OVERVIEW The basc sample desgn used n TIMSS Populatons 1 and 2 was a two-stage stratfed cluster desgn. 1 The frst stage conssted of a sample

More information

Fair Virtual Bandwidth Allocation Model in Virtual Data Centers

Fair Virtual Bandwidth Allocation Model in Virtual Data Centers Far Vrtual Bandwdth Allocaton Model n Vrtual Data Centers Yng Yuan, Cu-rong Wang, Cong Wang School of Informaton Scence and Engneerng ortheastern Unversty Shenyang, Chna School of Computer and Communcaton

More information

Period and Deadline Selection for Schedulability in Real-Time Systems

Period and Deadline Selection for Schedulability in Real-Time Systems Perod and Deadlne Selecton for Schedulablty n Real-Tme Systems Thdapat Chantem, Xaofeng Wang, M.D. Lemmon, and X. Sharon Hu Department of Computer Scence and Engneerng, Department of Electrcal Engneerng

More information

Politecnico di Torino. Porto Institutional Repository

Politecnico di Torino. Porto Institutional Repository Poltecnco d Torno Porto Insttutonal Repostory [Artcle] A cost-effectve cloud computng framework for acceleratng multmeda communcaton smulatons Orgnal Ctaton: D. Angel, E. Masala (2012). A cost-effectve

More information

An Analysis of Central Processor Scheduling in Multiprogrammed Computer Systems

An Analysis of Central Processor Scheduling in Multiprogrammed Computer Systems STAN-CS-73-355 I SU-SE-73-013 An Analyss of Central Processor Schedulng n Multprogrammed Computer Systems (Dgest Edton) by Thomas G. Prce October 1972 Techncal Report No. 57 Reproducton n whole or n part

More information

How To Calculate An Approxmaton Factor Of 1 1/E

How To Calculate An Approxmaton Factor Of 1 1/E Approxmaton algorthms for allocaton problems: Improvng the factor of 1 1/e Urel Fege Mcrosoft Research Redmond, WA 98052 urfege@mcrosoft.com Jan Vondrák Prnceton Unversty Prnceton, NJ 08540 jvondrak@gmal.com

More information

IWFMS: An Internal Workflow Management System/Optimizer for Hadoop

IWFMS: An Internal Workflow Management System/Optimizer for Hadoop IWFMS: An Internal Workflow Management System/Optmzer for Hadoop Lan Lu, Yao Shen Department of Computer Scence and Engneerng Shangha JaoTong Unversty Shangha, Chna lustrve@gmal.com, yshen@cs.sjtu.edu.cn

More information

Chapter 4 ECONOMIC DISPATCH AND UNIT COMMITMENT

Chapter 4 ECONOMIC DISPATCH AND UNIT COMMITMENT Chapter 4 ECOOMIC DISATCH AD UIT COMMITMET ITRODUCTIO A power system has several power plants. Each power plant has several generatng unts. At any pont of tme, the total load n the system s met by the

More information

Feasibility of Using Discriminate Pricing Schemes for Energy Trading in Smart Grid

Feasibility of Using Discriminate Pricing Schemes for Energy Trading in Smart Grid Feasblty of Usng Dscrmnate Prcng Schemes for Energy Tradng n Smart Grd Wayes Tushar, Chau Yuen, Bo Cha, Davd B. Smth, and H. Vncent Poor Sngapore Unversty of Technology and Desgn, Sngapore 138682. Emal:

More information

Network Aware Load-Balancing via Parallel VM Migration for Data Centers

Network Aware Load-Balancing via Parallel VM Migration for Data Centers Network Aware Load-Balancng va Parallel VM Mgraton for Data Centers Kun-Tng Chen 2, Chen Chen 12, Po-Hsang Wang 2 1 Informaton Technology Servce Center, 2 Department of Computer Scence Natonal Chao Tung

More information

Efficient Bandwidth Management in Broadband Wireless Access Systems Using CAC-based Dynamic Pricing

Efficient Bandwidth Management in Broadband Wireless Access Systems Using CAC-based Dynamic Pricing Effcent Bandwdth Management n Broadband Wreless Access Systems Usng CAC-based Dynamc Prcng Bader Al-Manthar, Ndal Nasser 2, Najah Abu Al 3, Hossam Hassanen Telecommuncatons Research Laboratory School of

More information

Traffic State Estimation in the Traffic Management Center of Berlin

Traffic State Estimation in the Traffic Management Center of Berlin Traffc State Estmaton n the Traffc Management Center of Berln Authors: Peter Vortsch, PTV AG, Stumpfstrasse, D-763 Karlsruhe, Germany phone ++49/72/965/35, emal peter.vortsch@ptv.de Peter Möhl, PTV AG,

More information

What is Candidate Sampling

What is Candidate Sampling What s Canddate Samplng Say we have a multclass or mult label problem where each tranng example ( x, T ) conssts of a context x a small (mult)set of target classes T out of a large unverse L of possble

More information

INVESTIGATION OF VEHICULAR USERS FAIRNESS IN CDMA-HDR NETWORKS

INVESTIGATION OF VEHICULAR USERS FAIRNESS IN CDMA-HDR NETWORKS 21 22 September 2007, BULGARIA 119 Proceedngs of the Internatonal Conference on Informaton Technologes (InfoTech-2007) 21 st 22 nd September 2007, Bulgara vol. 2 INVESTIGATION OF VEHICULAR USERS FAIRNESS

More information

How To Understand The Results Of The German Meris Cloud And Water Vapour Product

How To Understand The Results Of The German Meris Cloud And Water Vapour Product Ttel: Project: Doc. No.: MERIS level 3 cloud and water vapour products MAPP MAPP-ATBD-ClWVL3 Issue: 1 Revson: 0 Date: 9.12.1998 Functon Name Organsaton Sgnature Date Author: Bennartz FUB Preusker FUB Schüller

More information

The Greedy Method. Introduction. 0/1 Knapsack Problem

The Greedy Method. Introduction. 0/1 Knapsack Problem The Greedy Method Introducton We have completed data structures. We now are gong to look at algorthm desgn methods. Often we are lookng at optmzaton problems whose performance s exponental. For an optmzaton

More information

Optimal resource capacity management for stochastic networks

Optimal resource capacity management for stochastic networks Submtted for publcaton. Optmal resource capacty management for stochastc networks A.B. Deker H. Mlton Stewart School of ISyE, Georga Insttute of Technology, Atlanta, GA 30332, ton.deker@sye.gatech.edu

More information

Fair and Efficient User-Network Association Algorithm for Multi-Technology Wireless Networks

Fair and Efficient User-Network Association Algorithm for Multi-Technology Wireless Networks Far and Effcent User-Network Assocaton Algorthm for Mult-Technology Wreless Networks Perre Coucheney, Cornne Touat and Bruno Gaujal INRIA Rhône-Alpes and LIG, MESCAL project, Grenoble France, {perre.coucheney,

More information

taposh_kuet20@yahoo.comcsedchan@cityu.edu.hk rajib_csedept@yahoo.co.uk, alam_shihabul@yahoo.com

taposh_kuet20@yahoo.comcsedchan@cityu.edu.hk rajib_csedept@yahoo.co.uk, alam_shihabul@yahoo.com G. G. Md. Nawaz Al 1,2, Rajb Chakraborty 2, Md. Shhabul Alam 2 and Edward Chan 1 1 Cty Unversty of Hong Kong, Hong Kong, Chna taposh_kuet20@yahoo.comcsedchan@ctyu.edu.hk 2 Khulna Unversty of Engneerng

More information

Research of concurrency control protocol based on the main memory database

Research of concurrency control protocol based on the main memory database Research of concurrency control protocol based on the man memory database Abstract Yonghua Zhang * Shjazhuang Unversty of economcs, Shjazhuang, Shjazhuang, Chna Receved 1 October 2014, www.cmnt.lv The

More information

A Replication-Based and Fault Tolerant Allocation Algorithm for Cloud Computing

A Replication-Based and Fault Tolerant Allocation Algorithm for Cloud Computing A Replcaton-Based and Fault Tolerant Allocaton Algorthm for Cloud Computng Tork Altameem Dept of Computer Scence, RCC, Kng Saud Unversty, PO Box: 28095 11437 Ryadh-Saud Araba Abstract The very large nfrastructure

More information

Fisher Markets and Convex Programs

Fisher Markets and Convex Programs Fsher Markets and Convex Programs Nkhl R. Devanur 1 Introducton Convex programmng dualty s usually stated n ts most general form, wth convex objectve functons and convex constrants. (The book by Boyd and

More information

Effective Network Defense Strategies against Malicious Attacks with Various Defense Mechanisms under Quality of Service Constraints

Effective Network Defense Strategies against Malicious Attacks with Various Defense Mechanisms under Quality of Service Constraints Effectve Network Defense Strateges aganst Malcous Attacks wth Varous Defense Mechansms under Qualty of Servce Constrants Frank Yeong-Sung Ln Department of Informaton Natonal Tawan Unversty Tape, Tawan,

More information

Power-of-Two Policies for Single- Warehouse Multi-Retailer Inventory Systems with Order Frequency Discounts

Power-of-Two Policies for Single- Warehouse Multi-Retailer Inventory Systems with Order Frequency Discounts Power-of-wo Polces for Sngle- Warehouse Mult-Retaler Inventory Systems wth Order Frequency Dscounts José A. Ventura Pennsylvana State Unversty (USA) Yale. Herer echnon Israel Insttute of echnology (Israel)

More information

Multiple-Period Attribution: Residuals and Compounding

Multiple-Period Attribution: Residuals and Compounding Multple-Perod Attrbuton: Resduals and Compoundng Our revewer gave these authors full marks for dealng wth an ssue that performance measurers and vendors often regard as propretary nformaton. In 1994, Dens

More information

Online Procurement Auctions for Resource Pooling in Client-Assisted Cloud Storage Systems

Online Procurement Auctions for Resource Pooling in Client-Assisted Cloud Storage Systems Onlne Procurement Auctons for Resource Poolng n Clent-Asssted Cloud Storage Systems Jan Zhao, Xaowen Chu, Ha Lu, Yu-Wng Leung Department of Computer Scence Hong Kong Baptst Unversty Emal: {janzhao, chxw,

More information

What should (public) health insurance cover?

What should (public) health insurance cover? Journal of Health Economcs 26 (27) 251 262 What should (publc) health nsurance cover? Mchael Hoel Department of Economcs, Unversty of Oslo, P.O. Box 195 Blndern, N-317 Oslo, Norway Receved 29 Aprl 25;

More information

A New Paradigm for Load Balancing in Wireless Mesh Networks

A New Paradigm for Load Balancing in Wireless Mesh Networks A New Paradgm for Load Balancng n Wreless Mesh Networks Abstract: Obtanng maxmum throughput across a network or a mesh through optmal load balancng s known to be an NP-hard problem. Desgnng effcent load

More information

Pricing Model of Cloud Computing Service with Partial Multihoming

Pricing Model of Cloud Computing Service with Partial Multihoming Prcng Model of Cloud Computng Servce wth Partal Multhomng Zhang Ru 1 Tang Bng-yong 1 1.Glorous Sun School of Busness and Managment Donghua Unversty Shangha 251 Chna E-mal:ru528369@mal.dhu.edu.cn Abstract

More information

Efficient On-Demand Data Service Delivery to High-Speed Trains in Cellular/Infostation Integrated Networks

Efficient On-Demand Data Service Delivery to High-Speed Trains in Cellular/Infostation Integrated Networks IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. XX, NO. XX, MONTH 2XX 1 Effcent On-Demand Data Servce Delvery to Hgh-Speed Trans n Cellular/Infostaton Integrated Networks Hao Lang, Student Member,

More information

Ad-Hoc Games and Packet Forwardng Networks

Ad-Hoc Games and Packet Forwardng Networks On Desgnng Incentve-Compatble Routng and Forwardng Protocols n Wreless Ad-Hoc Networks An Integrated Approach Usng Game Theoretcal and Cryptographc Technques Sheng Zhong L (Erran) L Yanbn Grace Lu Yang

More information

8 Algorithm for Binary Searching in Trees

8 Algorithm for Binary Searching in Trees 8 Algorthm for Bnary Searchng n Trees In ths secton we present our algorthm for bnary searchng n trees. A crucal observaton employed by the algorthm s that ths problem can be effcently solved when the

More information

FORMAL ANALYSIS FOR REAL-TIME SCHEDULING

FORMAL ANALYSIS FOR REAL-TIME SCHEDULING FORMAL ANALYSIS FOR REAL-TIME SCHEDULING Bruno Dutertre and Vctora Stavrdou, SRI Internatonal, Menlo Park, CA Introducton In modern avoncs archtectures, applcaton software ncreasngly reles on servces provded

More information

QoS-based Scheduling of Workflow Applications on Service Grids

QoS-based Scheduling of Workflow Applications on Service Grids QoS-based Schedulng of Workflow Applcatons on Servce Grds Ja Yu, Rakumar Buyya and Chen Khong Tham Grd Computng and Dstrbuted System Laboratory Dept. of Computer Scence and Software Engneerng The Unversty

More information

A Self-Organized, Fault-Tolerant and Scalable Replication Scheme for Cloud Storage

A Self-Organized, Fault-Tolerant and Scalable Replication Scheme for Cloud Storage A Self-Organzed, Fault-Tolerant and Scalable Replcaton Scheme for Cloud Storage Ncolas Bonvn, Thanass G. Papaoannou and Karl Aberer School of Computer and Communcaton Scences École Polytechnque Fédérale

More information