Multi-Resource Fair Allocation in Heterogeneous Cloud Computing Systems



Wei Wang, Student Member, IEEE, Ben Liang, Senior Member, IEEE, Baochun Li, Senior Member, IEEE

Abstract: We study the multi-resource allocation problem in cloud computing systems where the resource pool is constructed from a large number of heterogeneous servers, representing different points in the configuration space of resources such as processing, memory, and storage. We design a multi-resource allocation mechanism, called DRFH, that generalizes the notion of Dominant Resource Fairness (DRF) from a single server to multiple heterogeneous servers. DRFH provides a number of highly desirable properties. With DRFH, no user prefers the allocation of another user; no one can improve its allocation without decreasing that of the others; and more importantly, no coalition behavior of misreporting resource demands can benefit all its members. DRFH also ensures some level of service isolation among the users. As a direct application, we design a simple heuristic that implements DRFH in real-world systems. Large-scale simulations driven by Google cluster traces show that DRFH significantly outperforms the traditional slot-based scheduler, leading to much higher resource utilization with substantially shorter job completion times.

Index Terms: Cloud computing, heterogeneous servers, job scheduling, multi-resource allocation, fairness.

1 INTRODUCTION

Resource allocation under the notion of fairness and efficiency is a fundamental problem in the design of cloud computing systems. Unlike traditional application-specific clusters and grids, cloud computing systems distinguish themselves with unprecedented server and workload heterogeneity. Modern datacenters are likely to be constructed from a variety of server classes, with different configurations in terms of processing capabilities, memory sizes, and storage spaces [2]. Asynchronous hardware upgrades, such as adding new servers and phasing out existing ones, further aggravate such diversity, leading to a wide range of server specifications in a cloud computing system [3]-[7].
Table 1 illustrates the heterogeneity of servers in one of Google's clusters [3], [8]. Similar server heterogeneity has also been observed in public clouds, such as Amazon EC2 and Rackspace [4], [5]. In addition to server heterogeneity, cloud computing systems also exhibit much higher diversity in resource demand profiles. Depending on the underlying applications, the workload spanning multiple cloud users may require vastly different amounts of resources (e.g., CPU, memory, and storage). For example, numerical computing tasks are usually CPU intensive, while database operations typically require high-memory support. The heterogeneity of both servers and workload demands poses significant technical challenges on the resource allocation mechanism, giving rise to many delicate issues, notably fairness and efficiency, that must be carefully addressed.

Despite the unprecedented heterogeneity in cloud computing systems, state-of-the-art computing frameworks employ rather simple abstractions that fall short. For example, Hadoop [9] and Dryad [10], the two most widely deployed cloud computing frameworks, partition a server's resources into bundles, known as slots, that contain fixed amounts of different resources. The system then allocates resources to users at the granularity of these slots. Such a single-resource abstraction ignores the heterogeneity of both server specifications and demand profiles, inevitably leading to a fairly inefficient allocation [11]. Towards addressing the inefficiency of the current allocation module, many recent works focus on multi-resource allocation mechanisms.

TABLE 1
Configurations of servers in one of Google's clusters [3], [8]. CPU and memory units are normalized to the maximum server (highlighted below).
(Number of servers | CPUs | Memory)

W. Wang, B. Liang and B. Li are with the Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada. E-mail: {weiwang,

Part of this paper has appeared in [1]. This new version contains substantial revision with new illustrative examples, property analyses, proofs, and discussions.
Notably, Ghodsi et al. [11] suggest a compelling alternative known as the Dominant Resource Fairness (DRF) allocation, in which each user's dominant share (the maximum ratio of any resource that the user has been allocated) is equalized. The DRF allocation possesses a set of highly desirable fairness properties, and has quickly received significant attention in the literature [12]-[15]. While DRF and its subsequent works address the demand heterogeneity of multiple resources, they all limit the discussion to a simplified model where resources are pooled in one place and the entire resource pool is abstracted as one big server.¹ Such an all-in-one resource model not only contrasts the prevalent datacenter infrastructure, where resources are distributed to a large number of servers, but also ignores the server heterogeneity: the allocations depend only on the total amount of resources pooled in the system, irrespective of the underlying resource distribution of servers. In fact, when servers are heterogeneous, even the definition of dominant resource is not so clear. Depending on the underlying server configurations, a computing task may bottleneck on different resources in different servers. We shall see that naive extensions, such as applying the DRF allocation to each server separately, may lead to a highly inefficient allocation (details in Sec. 3.4).

This paper represents a rigorous study to propose a solution with provable operational benefits that bridges the gap between the existing multi-resource allocation models and the state-of-the-art datacenter infrastructure. We propose DRFH, a DRF generalization in Heterogeneous environments where resources are pooled by a large number of heterogeneous servers, representing different points in the configuration space of resources such as processing, memory, and storage. DRFH generalizes the intuition of DRF by seeking an allocation that equalizes every user's global dominant share, which is the maximum ratio of any resource the user has been allocated in the entire resource pool. We systematically analyze DRFH and show that it retains most of the desirable properties that the all-in-one DRF model provides for a single server [11]. Specifically, DRFH is Pareto optimal, where no user is able to increase its allocation without decreasing other users' allocations. Meanwhile, DRFH is envy-free in that no user prefers the allocation of another one. More importantly, DRFH is group strategyproof in that whenever a coalition of users colludes to misreport their resource demands, there is a member of the coalition that cannot strictly gain.
As a result, the coalition is better off not being formed. In addition, DRFH offers some level of service isolation by ensuring the sharing incentive property in a weak sense: it allows users to execute more tasks than those under some equal partition where the entire resource pool is evenly allocated among all users. DRFH also satisfies a set of other important properties, namely single-server DRF, single-resource fairness, bottleneck fairness, and population monotonicity (details in Sec. 3.3).

As a direct application, we design a heuristic scheduling algorithm that implements DRFH in real-world systems. We conduct large-scale simulations driven by Google cluster traces [8]. Our simulation results show that compared with the traditional slot schedulers adopted in prevalent cloud computing frameworks, the DRFH algorithm suitably matches demand heterogeneity to server heterogeneity, significantly improving the system's resource utilization, yet with a substantial reduction of job completion times.

The remainder of this paper is organized as follows. We briefly revisit the DRF allocation and point out its limitations in heterogeneous environments in Sec. 2. We then formulate the allocation problem with heterogeneous servers in Sec. 3, where a set of desirable allocation properties are also defined. In Sec. 4, we propose DRFH and analyze its properties. Sec. 5 is dedicated to some practical issues on implementing DRFH. We evaluate the performance of DRFH via trace-driven simulations in Sec. 6. We survey the related work in Sec. 7 and conclude the paper in Sec. 8.

1. While [11] briefly touches on the case where resources are distributed to small servers (known as the discrete scenario), its coverage is rather informal.

2 LIMITATIONS OF DRF ALLOCATION IN HETEROGENEOUS SYSTEMS

In this section, we briefly review the DRF allocation [11] and show that it may lead to an infeasible allocation when a cloud system is composed of multiple heterogeneous servers.

In DRF, the dominant resource is defined for each user as the one that requires the largest fraction of the total availability. The mechanism seeks a maximum allocation that equalizes each user's dominant share, defined as the fraction of the dominant resource the user has been allocated. Consider an example given in [11]. Suppose that a computing system has 9 CPUs and 18 GB memory, and is shared by two users. User 1 wishes to schedule a set of (divisible) tasks each requiring ⟨1 CPU, 4 GB⟩, and user 2 has a set of (divisible) tasks each requiring ⟨3 CPUs, 1 GB⟩. In this example, the dominant resource of user 1 is memory, as each of its tasks demands 1/9 of the total CPU and 2/9 of the total memory. On the other hand, the dominant resource of user 2 is CPU, as each of its tasks requires 1/3 of the total CPU and 1/18 of the total memory. The DRF mechanism then allocates ⟨3 CPUs, 12 GB⟩ to user 1 and ⟨6 CPUs, 2 GB⟩ to user 2, where user 1 schedules three tasks and user 2 schedules two. It is easy to verify that both users receive the same dominant share (i.e., 2/3) and no one can schedule more tasks by allocating additional resources (there are 2 GB memory left unallocated).

The DRF allocation above is based on a simplified all-in-one resource model, where the entire system is modeled as one big server. The allocation hence depends only on the total amount of resources pooled in the system. In the example above, no matter how many servers the system has, and what each server's specification is, as long as the system has 9 CPUs and 18 GB memory in total, the DRF allocation will always schedule three tasks for user 1 and two for user 2. However, this allocation may not be possible to implement, especially when the system consists of heterogeneous servers. For example, suppose that the resource pool is provided by two servers. Server 1 has 1 CPU and 14 GB memory, and server 2 has 8 CPUs and 4 GB memory. As shown in Fig. 1, even allocating both servers exclusively to user 1, at most two tasks can be scheduled, one in each server. Moreover, even for some server specifications where the DRF allocation is feasible, the mechanism only gives the total amount of resources each user should receive. It remains unclear how many resources a user should be allocated in each server. These problems significantly limit the application of the DRF mechanism. In general, the allocation is valid only when the system contains a single server or multiple homogeneous servers, which is rarely the case under the prevalent datacenter infrastructure.

Fig. 1. An example of a system consisting of two heterogeneous servers, server 1 (1 CPU, 14 GB) and server 2 (8 CPUs, 4 GB), in which user 1 can schedule at most two tasks, each demanding 1 CPU and 4 GB memory. The resources required to execute the two tasks are also highlighted in the figure.

Despite the limitation of the all-in-one resource model, DRF is shown to possess a set of highly desirable allocation properties for cloud computing systems [11], [15]. A natural question is: how should the DRF intuition be generalized to a heterogeneous environment to achieve similar properties? Note that this is not an easy question to answer. In fact, with heterogeneous servers, even the definition of dominant resource is not so clear. Depending on the server specifications, a resource most demanded in one server (in terms of the fraction of the server's availability) might be the least demanded in another. For instance, in the example of Fig. 1, user 1 demands CPU the most in server 1. But in server 2, it demands memory the most. Should the dominant resource be defined separately for each server, or should it be defined for the entire resource pool? How should the allocation be conducted? And what properties does the resulting allocation preserve? We shall answer these questions in the following sections.

3 SYSTEM MODEL AND ALLOCATION PROPERTIES

In this section, we model multi-resource allocation in a cloud computing system with heterogeneous servers.
We formalize a number of desirable properties that are deemed the most important for allocation mechanisms in cloud computing environments.

3.1 Basic Settings

Let S = {1, ..., k} be the set of heterogeneous servers a cloud computing system has in its resource pool. Let R = {1, ..., m} be the set of m hardware resources provided by each server, e.g., CPU, memory, storage, etc. Let c_l = (c_l1, ..., c_lm)^T be the resource capacity vector of server l ∈ S, where each component c_lr denotes the total amount of resource r available in this server. Without loss of generality, we normalize the total availability of every resource to 1, i.e.,

    Σ_{l∈S} c_lr = 1, ∀r ∈ R.

Let U = {1, ..., n} be the set of cloud users sharing the entire system. For every user i, let D_i = (D_i1, ..., D_im)^T be its resource demand vector, where D_ir is the amount of resource r required by each instance of the task of user i. For simplicity, we assume positive demands, i.e., D_ir > 0 for all users i and resources r. We say resource r*_i is the global dominant resource of user i if

    r*_i ∈ arg max_{r∈R} D_ir.

In other words, resource r*_i is the most heavily demanded resource required by each instance of the task of user i, over the entire resource pool. For each user i and resource r, we define d_ir = D_ir / D_i,r*_i as the normalized demand and denote by d_i = (d_i1, ..., d_im)^T the normalized demand vector of user i.

Fig. 2. An example of a system containing two heterogeneous servers shared by two users. Each computing task of user 1 requires 0.2 CPU time and 1 GB memory, while the computing task of user 2 requires 1 CPU time and 0.2 GB memory.

As a concrete example, consider Fig. 2, where the system consists of two heterogeneous servers. Server 1 is high-memory with 2 CPUs and 12 GB memory, while server 2 is high-CPU with 12 CPUs and 2 GB memory. Since the system has 14 CPUs and 14 GB memory in total, the normalized capacity vectors of servers 1 and 2 are c_1 = (CPU share, memory share)^T = (1/7, 6/7)^T and c_2 = (6/7, 1/7)^T, respectively. Now suppose that there are two users.
User 1 has memory-intensive tasks, each requiring 0.2 CPU time and 1 GB memory, while user 2 has CPU-heavy tasks, each requiring 1 CPU time and 0.2 GB memory. The demand vector of user 1 is D_1 = (1/70, 1/14)^T and the normalized vector is d_1 = (1/5, 1)^T, where memory is the global dominant resource. Similarly, user 2 has D_2 = (1/14, 1/70)^T and d_2 = (1, 1/5)^T, and CPU is its global dominant resource. For now, we assume users have an infinite number of tasks to be scheduled, and all tasks are divisible [11], [13]-[16]. We shall discuss how these assumptions can be relaxed in Sec. 5.

3.2 Resource Allocation

For every user i and server l, let A_il = (A_il1, ..., A_ilm)^T be the resource allocation vector, where A_ilr is the amount of resource r allocated to user i in server l. Let A_i = (A_i1, ..., A_ik) be the allocation matrix of user i, and A = (A_1, ..., A_n) the overall allocation for all users. We say an allocation A is feasible if no server is required to use more than any of its total resources, i.e.,

    Σ_{i∈U} A_ilr ≤ c_lr, ∀l ∈ S, r ∈ R.
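The normalized demand vectors of the Fig. 2 example can be reproduced with a short script. This is an illustrative sketch (the function and variable names are ours, not the paper's); the resource order is (CPU, memory), and exact fractions are used to avoid rounding:

```python
from fractions import Fraction as F

# Total capacities across both servers in Fig. 2: 14 CPUs, 14 GB memory.
TOTALS = (F(14), F(14))

def demand_vectors(task):
    """Return (D_i, d_i, dominant resource index) for a per-task demand."""
    D = tuple(t / c for t, c in zip(task, TOTALS))   # D_ir, fractions of the total pool
    dom = max(range(len(D)), key=lambda r: D[r])     # global dominant resource r*_i
    d = tuple(Dr / D[dom] for Dr in D)               # normalized demands d_ir
    return D, d, dom

D1, d1, dom1 = demand_vectors((F(1, 5), F(1)))   # user 1: 0.2 CPU, 1 GB per task
D2, d2, dom2 = demand_vectors((F(1), F(1, 5)))   # user 2: 1 CPU, 0.2 GB per task
print(D1, d1, dom1)  # D_1 = (1/70, 1/14), d_1 = (1/5, 1), dominant resource: memory
print(D2, d2, dom2)  # D_2 = (1/14, 1/70), d_2 = (1, 1/5), dominant resource: CPU
```

The output matches the vectors derived above, including D_1 = (1/70, 1/14)^T and D_2 = (1/14, 1/70)^T.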

For each user i, given allocation A_il in server l, let N_il(A_il) be the maximum number of tasks (possibly fractional) it can schedule. We have

    N_il(A_il) · D_ir ≤ A_ilr, ∀r ∈ R.

As a result,

    N_il(A_il) = min_{r∈R} {A_ilr / D_ir}.

The total number of tasks user i can schedule under allocation A_i is hence

    N_i(A_i) = Σ_{l∈S} N_il(A_il).    (1)

Intuitively, a user prefers an allocation that allows it to schedule more tasks. A well-justified allocation should never give a user more resources than it can actually use in a server. Following the terminology used in the economics literature [17], we call such an allocation non-wasteful:

Definition 1: For user i and server l, an allocation A_il is non-wasteful if reducing any resource decreases the number of tasks scheduled, i.e., for all A'_il ≺ A_il,² we have

    N_il(A'_il) < N_il(A_il).

Further, user i's allocation A_i = (A_il) is non-wasteful if A_il is non-wasteful for every server l, and allocation A = (A_i) is non-wasteful if A_i is non-wasteful for every user i.

Note that one can always convert an allocation to a non-wasteful one by revoking those resources that are allocated but never actually used, without changing the number of tasks scheduled for any user. Unless otherwise specified, we limit the discussion to non-wasteful allocations.

3.3 Allocation Mechanism and Desirable Properties

A resource allocation mechanism takes user demands as input and outputs the allocation result. In general, an allocation mechanism should provide the following essential properties that are widely recognized as the most important fairness and efficiency measures in both cloud computing systems [11], [12], [18] and the economics literature [17], [19].

Envy-freeness: An allocation mechanism is envy-free if no user prefers another's allocation to its own, i.e.,

    N_i(A_i) ≥ N_i(A_j), ∀i, j ∈ U.

This property essentially embodies the notion of fairness.

Pareto optimality: An allocation mechanism is Pareto optimal if it returns an allocation A such that for all feasible allocations A', if N_i(A'_i) > N_i(A_i) for some user i, then there exists a user j ≠ i such that N_j(A'_j) < N_j(A_j).
In other words, allocation A cannot be further improved such that all users are at least as well off and at least one user is strictly better off. This property ensures allocation efficiency and is critical to achieving high resource utilization.

Group strategyproofness: An allocation mechanism is group strategyproof if whenever a coalition of users misreports their resource demands (assuming a user's demand is its private information), there is a member of the coalition who would schedule fewer tasks and hence has no incentive to join the coalition. Specifically, let M ⊆ U be the coalition of manipulators, in which each user i ∈ M misreports its demand as D'_i ≠ D_i. Let A' be the allocation returned. Also, let A be the allocation returned when all users truthfully report their demands. The allocation mechanism is group strategyproof if there exists a manipulator i ∈ M who cannot schedule more tasks than by being truthful, i.e.,

    N_i(A'_i) ≤ N_i(A_i).

In other words, user i is better off quitting the coalition. Group strategyproofness is of special importance for a cloud computing system, as it is common to observe in a real-world system that users try to manipulate the scheduler for more allocations by lying about their resource demands [11], [18].

Sharing incentive is another critical property that has been frequently mentioned in the literature [11]-[13], [15]. It ensures that every user's allocation is not worse off than that obtained by evenly dividing the entire resource pool. While this property is well defined for a single server, it is not for a system containing multiple heterogeneous servers, as there is an infinite number of ways to evenly divide the resource pool among users, and it is unclear which one should be selected as a benchmark to compare with. We shall give a specific discussion in Sec. 4.5, where we justify between two reasonable alternatives.

2. For any two vectors x and y, we say x ≺ y if x_i ≤ y_i, ∀i, and for some j we have strict inequality: x_j < y_j.
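The task-counting formula (1) and the envy-freeness test above are easy to make concrete. The following sketch (illustrative names, not from the paper) computes N_il and N_i and checks envy-freeness for a toy allocation on the Fig. 2 system, in absolute units with resource order (CPU, memory):

```python
def tasks_in_server(alloc, demand):
    """N_il(A_il) = min_r A_ilr / D_ir, tasks user i can fit in one server."""
    return min(a / d for a, d in zip(alloc, demand))

def total_tasks(allocs, demand):
    """N_i(A_i) = sum over servers of N_il(A_il), per Eq. (1)."""
    return sum(tasks_in_server(a, demand) for a in allocs)

def is_envy_free(allocations, demands):
    """N_i(A_i) >= N_i(A_j) for all users i, j."""
    n = [total_tasks(allocations[i], demands[i]) for i in range(len(demands))]
    return all(
        n[i] >= total_tasks(allocations[j], demands[i])
        for i in range(len(demands)) for j in range(len(demands))
    )

# Fig. 2 demands and an exclusive (non-wasteful) allocation of each server.
demands = [(0.2, 1.0), (1.0, 0.2)]      # per-task demands of users 1 and 2
allocations = [                          # A_i = one (CPU, GB) vector per server
    [(2.0, 10.0), (0.0, 0.0)],           # user 1: all of server 1 it can use
    [(0.0, 0.0), (10.0, 2.0)],           # user 2: all of server 2 it can use
]
print(total_tasks(allocations[0], demands[0]))   # 10 tasks for user 1
print(is_envy_free(allocations, demands))        # True
```

Each user schedules 10 tasks and neither could schedule more with the other's bundle, so this particular allocation is envy-free.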
In addition to the four essential allocation properties above, we also consider four other important properties as follows:

Single-server DRF: If the system contains only one server, then the resulting allocation should reduce to the DRF allocation.

Single-resource fairness: If there is a single resource in the system, then the resulting allocation should reduce to a max-min fair allocation.

Bottleneck fairness: If all users bottleneck on the same resource (i.e., have the same global dominant resource), then the resulting allocation should reduce to a max-min fair allocation for that resource.

Population monotonicity: If a user leaves the system and relinquishes all its allocations, then the remaining users will not see any reduction in the number of tasks scheduled.

To summarize, our objective is to design an allocation mechanism that guarantees all the properties defined above.

3.4 Naive DRF Extension and Its Inefficiency

It has been shown in [11], [15] that the DRF allocation satisfies all the desirable properties mentioned above when the entire resource pool is modeled as one server. When resources are distributed to multiple heterogeneous servers, a naive generalization is to separately apply the DRF allocation per server. For instance, consider the example of Fig. 2. We first apply DRF in server 1. Because CPU is the dominant resource of both users, it is equally divided between them, each receiving 1 CPU. As a result, user 1 schedules 5 tasks on server 1, while user 2 schedules one. Similarly, in server 2, memory is the dominant resource of both users and is evenly allocated, leading to one task scheduled for user 1 and five for user 2. The resulting allocations in the two servers are illustrated in Fig. 3, where both users schedule 6 tasks.

Fig. 3. DRF allocation for the example shown in Fig. 2, where user 1 is allocated 5 tasks in server 1 and 1 in server 2, while user 2 is allocated 1 task in server 1 and 5 in server 2.

Unfortunately, this allocation violates Pareto optimality and is highly inefficient. If we instead allocate server 1 exclusively to user 1, and server 2 exclusively to user 2, then both users schedule 10 tasks, almost twice the number of tasks scheduled under the DRF allocation. In fact, a similar example can be constructed to show that per-server DRF may lead to arbitrarily low resource utilization. The failure of the naive DRF extension to the heterogeneous environment necessitates an alternative allocation mechanism, which is the main theme of the next section.

4 DRFH ALLOCATION AND ITS PROPERTIES

In this section, we present DRFH, a generalization of DRF in a heterogeneous cloud computing system where resources are distributed across a number of heterogeneous servers. We analyze DRFH and show that it provides all the desirable properties defined in Sec. 3.3.

4.1 DRFH Allocation

Instead of allocating separately in each server, DRFH jointly considers resource allocation across all heterogeneous servers. The key intuition is to achieve the max-min fair allocation for the global dominant resources. Specifically, given allocation A_il, let

    G_il(A_il) = N_il(A_il) · D_i,r*_i = min_{r∈R} {A_ilr / d_ir}    (2)

be the amount of global dominant resource user i is allocated in server l. Since the total availability of resources is normalized to 1, we also refer to G_il(A_il) as the global dominant share user i receives in server l. Simply adding up G_il(A_il) over all servers gives the global dominant share user i receives under allocation A_i, i.e.,

    G_i(A_i) = Σ_{l∈S} G_il(A_il) = Σ_{l∈S} min_{r∈R} {A_ilr / d_ir}.    (3)
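Using Eqs. (2) and (3), the global dominant shares under the naive per-server DRF allocation of Fig. 3 can be compared against the exclusive assignment described above (server 1 to user 1, server 2 to user 2). This is an illustrative sketch with our own names; capacities are normalized so each resource totals 1, and allocations are derived from task counts via A_il = N_il · D_i:

```python
from fractions import Fraction as F

def dominant_share(allocs, d):
    """G_i(A_i) = sum_l min_r A_ilr / d_ir, per Eqs. (2) and (3)."""
    return sum(min(a_r / d_r for a_r, d_r in zip(a, d)) for a in allocs)

d1, d2 = (F(1, 5), F(1)), (F(1), F(1, 5))            # normalized demands (Sec. 3.1)
D1, D2 = (F(1, 70), F(1, 14)), (F(1, 14), F(1, 70))  # demand vectors (Sec. 3.1)

# Per-server DRF (Fig. 3): user 1 runs 5 tasks in server 1 and 1 in server 2,
# user 2 runs 1 task in server 1 and 5 in server 2.
per_server = [
    [tuple(5 * x for x in D1), tuple(1 * x for x in D1)],  # user 1
    [tuple(1 * x for x in D2), tuple(5 * x for x in D2)],  # user 2
]
print([dominant_share(a, d) for a, d in zip(per_server, (d1, d2))])  # [3/7, 3/7]

# Exclusive assignment: 10 tasks each, all in the user's own server.
exclusive = [
    [tuple(10 * x for x in D1), (F(0), F(0))],
    [(F(0), F(0)), tuple(10 * x for x in D2)],
]
print([dominant_share(a, d) for a, d in zip(exclusive, (d1, d2))])   # [5/7, 5/7]
```

Both users' global dominant shares rise from 3/7 to 5/7 under the exclusive assignment, which is the inefficiency gap illustrated in Sec. 3.4.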
The DRFH allocation aims to maximize the minimum global dominant share among all users, subject to the resource constraints per server, i.e.,

    max_A  min_{i∈U} G_i(A_i)
    s.t.   Σ_{i∈U} A_ilr ≤ c_lr, ∀l ∈ S, r ∈ R.    (4)

Recall that, without loss of generality, we assume a non-wasteful allocation A (see Sec. 3.2). We have the following structural result. Its proof is deferred to the appendix.³

Lemma 1: For user i and server l, an allocation A_il is non-wasteful if and only if there exists some g_il such that A_il = g_il · d_i. In particular, g_il is the global dominant share user i receives in server l under allocation A_il, i.e., g_il = G_il(A_il).

Intuitively, Lemma 1 indicates that under a non-wasteful allocation, resources are allocated in proportion to the user's demand. Lemma 1 immediately suggests the following relationship for every user i and its non-wasteful allocation A_i:

    G_i(A_i) = Σ_{l∈S} G_il(A_il) = Σ_{l∈S} g_il.

Problem (4) can hence be equivalently written as

    max_{g_il}  min_{i∈U} Σ_{l∈S} g_il
    s.t.        Σ_{i∈U} g_il · d_ir ≤ c_lr, ∀l ∈ S, r ∈ R,    (5)

where the constraints are derived from Lemma 1. Now let g = min_{i∈U} Σ_{l∈S} g_il. Via straightforward algebraic operations, we see that (5) is equivalent to the following problem:

    max_{{g_il}, g}  g
    s.t.  Σ_{i∈U} g_il · d_ir ≤ c_lr, ∀l ∈ S, r ∈ R,
          Σ_{l∈S} g_il = g, ∀i ∈ U.    (6)

Note that the second constraint embodies the fairness in terms of the equalized global dominant share g. By solving (6), DRFH allocates each user the maximum global dominant share g, under the constraints of both server capacity and fairness. The allocation received by each user i in server l is simply A_il = g_il · d_i. For example, Fig. 4 illustrates the resulting DRFH allocation in the example of Fig. 2. By solving (6), DRFH allocates server 1 exclusively to user 1 and server 2 exclusively to user 2, allowing each user to schedule 10 tasks with the maximum global dominant share g = 5/7.

We next analyze the properties of the DRFH allocation obtained by solving (6). Our analyses of DRFH start with the four essential resource allocation properties, namely, envy-freeness, Pareto optimality, group strategyproofness, and sharing incentive.

3. The appendix is given in a supplementary document as per the TPDS submission guidelines.
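Problem (6) is a linear program in the variables {g_il} and g, so for small instances it can be solved with an off-the-shelf LP solver. The sketch below (our own illustrative code, not the scheduler the paper implements, and assuming SciPy is available) solves (6) for the two-server example of Fig. 2:

```python
import numpy as np
from scipy.optimize import linprog

# Fig. 2 example: normalized capacities and demands, resource order (CPU, memory).
c = np.array([[1/7, 6/7],    # server 1
              [6/7, 1/7]])   # server 2
d = np.array([[1/5, 1.0],    # user 1
              [1.0, 1/5]])   # user 2
n_users, n_servers, n_res = 2, 2, 2

# Variables: x = [g_11, g_12, g_21, g_22, g]; maximizing g means minimizing -g.
obj = np.zeros(n_users * n_servers + 1)
obj[-1] = -1.0

# Capacity constraints: sum_i g_il * d_ir <= c_lr for every server l, resource r.
A_ub, b_ub = [], []
for l in range(n_servers):
    for r in range(n_res):
        row = np.zeros(n_users * n_servers + 1)
        for i in range(n_users):
            row[i * n_servers + l] = d[i, r]
        A_ub.append(row)
        b_ub.append(c[l, r])

# Fairness constraints: sum_l g_il = g for every user i.
A_eq, b_eq = [], []
for i in range(n_users):
    row = np.zeros(n_users * n_servers + 1)
    row[i * n_servers: (i + 1) * n_servers] = 1.0
    row[-1] = -1.0
    A_eq.append(row)
    b_eq.append(0.0)

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x[-1])  # maximum equalized global dominant share g = 5/7
```

The solver recovers g = 5/7, matching the allocation in Fig. 4 (server 1 to user 1, server 2 to user 2, 10 tasks each).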

Fig. 4. An alternative allocation with higher system utilization for the example of Fig. 2. Servers 1 and 2 are exclusively assigned to users 1 and 2, respectively. Both users schedule 10 tasks.

4.2 Envy-Freeness

We first show by the following proposition that under the DRFH allocation, no user prefers another's allocation to its own.

Proposition 1 (Envy-freeness): The DRFH allocation obtained by solving (6) is envy-free.

Proof: Let {g_il}, g be the solution to problem (6). For each user i, its DRFH allocation in server l is A_il = g_il · d_i. To show N_i(A_j) ≤ N_i(A_i) for any two users i and j, it is equivalent to prove G_i(A_j) ≤ G_i(A_i). We have

    G_i(A_j) = Σ_l G_il(A_jl)
             = Σ_l min_r {g_jl · d_jr / d_ir}
             ≤ Σ_l g_jl
             = g = G_i(A_i),

where the inequality holds because

    min_r {d_jr / d_ir} ≤ d_j,r*_i / d_i,r*_i = d_j,r*_i ≤ 1,

where r*_i is user i's global dominant resource.

4.3 Pareto Optimality

We next show that DRFH leads to an efficient allocation under which no user can improve its allocation without decreasing that of the others.

Proposition 2 (Pareto optimality): The DRFH allocation obtained by solving (6) is Pareto optimal.

Proof: Let {g_il}, g be the solution to problem (6). For each user i, its DRFH allocation in server l is A_il = g_il · d_i. Since (5) and (6) are equivalent, {g_il} is also the solution to (5), and g is the maximum value of the objective of (5). Assume, by way of contradiction, that allocation A is not Pareto optimal, i.e., there exists some allocation A' such that N_i(A'_i) ≥ N_i(A_i) for all users i, and for some user j we have strict inequality: N_j(A'_j) > N_j(A_j). Equivalently, this implies G_i(A'_i) ≥ G_i(A_i) for all users i, and G_j(A'_j) > G_j(A_j) for user j. Without loss of generality, let A' be non-wasteful. By Lemma 1, for every user i and server l, there exists some g'_il such that A'_il = g'_il · d_i. We show that based on {g'_il}, one can construct some {ĝ_il} such that {ĝ_il} is a feasible solution to (5), yet leads to a higher objective value than g, contradicting the fact that {g_il}, g optimally solve (5).
To see this, consider user j. We have

    G_j(A_j) = Σ_l g_jl = g < G_j(A'_j) = Σ_l g'_jl.

For user j, there exist a server l₀ and some δ > 0 such that after reducing g'_jl₀ to g'_jl₀ − δ, the resulting global dominant share remains no less than g, i.e., Σ_l g'_jl − δ ≥ g. This leaves at least δ·d_j idle resources in server l₀. We construct {ĝ_il} by redistributing these idle resources to all users to increase their global dominant shares, therefore strictly improving the objective of (5). Denote by {g̃_il} the dominant shares after reducing g'_jl₀ to g'_jl₀ − δ, i.e.,

    g̃_il = g'_jl₀ − δ,  if i = j and l = l₀;
    g̃_il = g'_il,       otherwise.

The corresponding non-wasteful allocation is Ã_il = g̃_il · d_i for every user i and server l. Note that allocation à is weakly preferred to the original allocation A by all users, i.e., for every user i, we have

    G_i(Ã_i) = Σ_l g̃_il = Σ_l g'_jl − δ ≥ g = G_j(A_j),  if i = j;
    G_i(Ã_i) = Σ_l g'_il = G_i(A'_i) ≥ G_i(A_i),          otherwise.

We now construct {ĝ_il} by redistributing the δ·d_j idle resources in server l₀ to all users, each increasing its global dominant share g̃_il₀ by ε = min_r {δ·d_jr / Σ_i d_ir}, i.e.,

    ĝ_il = g̃_il + ε,  if l = l₀;
    ĝ_il = g̃_il,      otherwise.

It is easy to check that {ĝ_il} remains a feasible allocation. To see this, it suffices to check server l₀. For each of its resources r, we have

    Σ_i ĝ_il₀ · d_ir = Σ_i (g̃_il₀ + ε) · d_ir
                     = Σ_i g'_il₀ · d_ir − δ·d_jr + ε·Σ_i d_ir
                     ≤ c_l₀r − (δ·d_jr − ε·Σ_i d_ir)
                     ≤ c_l₀r,

where the first inequality holds because A' is a feasible allocation, and the second follows from the definition of ε. On the other hand, for every user i ∈ U, we have

    Σ_l ĝ_il = Σ_l g̃_il + ε ≥ G_i(A_i) + ε > g.

This contradicts the premise that g is optimal for (5).

4.4 Group Strategyproofness

So far, all our discussions have been based on a critical assumption that all users truthfully report their resource demands. However, in a real-world system, it is common to observe users attempting to manipulate the scheduler by misreporting their resource demands, so as to receive more allocation [11], [18]. More often than not, these strategic behaviors would significantly hurt honest users and reduce the number of their tasks scheduled, inevitably leading to a fairly inefficient allocation outcome. Fortunately, we show by the following proposition that DRFH is immune to these strategic behaviors, as reporting the true demand is always the dominant strategy for all users, even if they form a coalition to misreport together with others.

Proposition 3 (Group strategyproofness): The DRFH allocation obtained by solving (6) is group strategyproof, in that the coalition behavior of misreporting demands cannot strictly benefit every member.

Proof: Let M ⊆ U be the set of strategic users forming a coalition to misreport their normalized demand vectors as d'_M = (d'_i)_{i∈M}, where d'_i ≠ d_i for all i ∈ M. Let d' be the collection of normalized demand vectors submitted by all users, where d'_i = d_i for all i ∈ U \ M. Let A' be the resulting allocation obtained by solving (6). In particular, A'_il = g'_il · d'_i for each user i and server l, and g' = Σ_l g'_il, where {g'_il}, g' solve (6). On the other hand, let A be the allocation returned when all users truthfully report their demands, and {g_il}, g the solution to (6) with the truthful d. Similarly, for each user i and server l, we have A_il = g_il · d_i, and g = Σ_l g_il. We check the following two cases and show that there exists a user i ∈ M such that G_i(A'_i) ≤ G_i(A_i), which is equivalent to N_i(A'_i) ≤ N_i(A_i).

Case 1: g' ≤ g. In this case, let β_i = min_r {d'_ir / d_ir} be defined for every user i ∈ M. Clearly,

    β_i = min_r {d'_ir / d_ir} ≤ d'_i,r*_i / d_i,r*_i = d'_i,r*_i ≤ 1,

where r*_i is the dominant resource of user i. We then have

    G_i(A'_i) = Σ_l G_il(A'_il)
              = Σ_l G_il(g'_il · d'_i)
              = Σ_l min_r {g'_il · d'_ir / d_ir}
              = g'·β_i ≤ g' ≤ g = G_i(A_i).

Case 2: g' > g. We first consider users that are not manipulators. Since they truthfully report their demands, we have

    G_j(A'_j) = g' > g = G_j(A_j), ∀j ∈ U \ M.    (7)

Now for the manipulators, there must be a user i ∈ M such that G_i(A'_i) < G_i(A_i). Otherwise, allocation A' would be preferred to allocation A by all users. This contradicts the facts that A is a Pareto optimal allocation and A' is a feasible allocation.

4.5 Sharing Incentive

In addition to the aforementioned three properties, sharing incentive is another critical allocation property that has been frequently mentioned in the literature, e.g., [11]-[13], [15], [18].
The property ensures that every user can execute at least the number of tasks it schedules when the entire resource pool is evenly partitioned. The property hence provides service isolation among the users. While the sharing incentive property is well defined in the all-in-one resource model, it is not for a system with multiple heterogeneous servers. In the former case, since the entire resource pool is abstracted as a single server, evenly dividing every resource of this big server leads to a unique allocation. However, when the system consists of multiple heterogeneous servers, there are many different ways to evenly divide these servers, and it is unclear which one should be used as a benchmark for comparison. For instance, in the example of Fig. 2, two users share a system with 14 CPUs and 14 GB memory in total. The following two allocations both allocate each user 7 CPUs and 7 GB memory: (a) user 1 is allocated 1/2 of the resources of server 1 and 1/2 of the resources of server 2, while user 2 is allocated the rest; (b) user 1 is allocated (1.5 CPUs, 5.5 GB) in server 1 and (5.5 CPUs, 1.5 GB) in server 2, while user 2 is allocated the rest. It is easy to verify that the two allocations lead to different numbers of tasks scheduled for the same user, and both can be used as allocation benchmarks. In fact, one can construct many other allocations that evenly divide all resources among the users. Despite the general ambiguity explained above, in the next two subsections, we consider two definitions of the sharing incentive property, strong and weak, depending on the choice of the benchmark for the equal partitioning of resources.

4.5.1 Strong Sharing Incentive

Among the various allocations that evenly divide all servers, perhaps the most straightforward approach is to evenly partition each server's availability c_l among all n users. The strong sharing incentive property is defined by using this per-server partitioning as a benchmark.
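The per-server benchmark task counts Σ_l N_il(c_l/n) are easy to compute directly. The sketch below (illustrative code with our own names) evaluates the benchmark for both users of the Fig. 2 system, in absolute units with resource order (CPU, GB):

```python
def tasks(alloc, demand):
    """N_il = min_r alloc_r / demand_r, tasks schedulable in one server."""
    return min(a / d for a, d in zip(alloc, demand))

def per_server_benchmark(servers, demand, n_users):
    """Sum of N_il(c_l / n): tasks a user schedules on an equal slice of every server."""
    return sum(tasks([c / n_users for c in cl], demand) for cl in servers)

# Fig. 2 system shared by n = 2 users.
servers = [(2.0, 12.0), (12.0, 2.0)]
print(per_server_benchmark(servers, (0.2, 1.0), 2))  # user 1: 5 + 1 = 6 tasks
print(per_server_benchmark(servers, (1.0, 0.2), 2))  # user 2: 1 + 5 = 6 tasks
```

Here the per-server benchmark happens to coincide with the per-server DRF outcome of Fig. 3 (6 tasks each), both of which DRFH improves upon with 10 tasks each.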
Definition 2 (Strong sharing incentive): Allocation A satisfies the strong sharing incentive property if each user schedules at least as many tasks as it would by evenly partitioning each server, i.e.,

    N_i(A_i) = Σ_l N_i(A_il) ≥ Σ_l N_i(c_l / n),  ∀i ∈ U.

Before we proceed, it is worth mentioning that the per-server partitioning above cannot be directly implemented in practice. With a large number of users, everyone would be allocated a very small fraction of each server's availability, and in practice such a small slice of resources usually cannot run any computing task. However, per-server partitioning may be interpreted as follows. Since a cloud system is constructed by pooling hundreds of thousands of servers [2], [3], the number of users is typically far smaller than the number of servers [11], [18], i.e., k ≫ n. An equal partition could randomly allocate k/n servers to each user, which is equivalent to randomly allocating each server to each user with probability 1/n. It is easy to see that the mean number of tasks scheduled for user i under this random allocation is Σ_l N_i(c_l / n), the same as that obtained under per-server partitioning.

Unfortunately, the following proposition shows that DRFH may violate the sharing incentive property in the strong sense. The proof gives a counterexample.

Proposition 4: DRFH does not satisfy the property of strong sharing incentive.

Proof: Consider a system consisting of two servers. Server 1 has 1 CPU and 2 GB of memory; server 2 has 4 CPUs and 3 GB of memory. There are two users. Each instance of user 1's task demands 1 CPU and 1 GB of memory; each of user 2's

tasks demands 3 CPUs and 2 GB of memory. In this case, we have c_1 = (1/5, 2/5)^T, c_2 = (4/5, 3/5)^T, D_1 = (1/5, 1/5)^T, D_2 = (3/5, 2/5)^T, d_1 = (1, 1)^T, and d_2 = (1, 2/3)^T. It is easy to verify that under DRFH, the global dominant share both users receive is 12/25. On the other hand, under per-server partitioning, the global dominant share user 2 receives is 1/2, higher than what it receives under DRFH. ∎

While DRFH may violate the strong sharing incentive property, we shall show via trace-driven simulations in Sec. 6 that this happens only in rare cases.

4.5.2 Weak Sharing Incentive

The strong sharing incentive property is defined by choosing per-server partitioning as the benchmark, which is only one of many ways to evenly divide the total availability. In general, any partition that allocates every user an equal share of every resource can serve as a benchmark. This allows us to relax the sharing incentive definition. We first define an equal partition as follows.

Definition 3 (Equal partition): Allocation A is an equal partition if it divides every resource evenly among all users, i.e.,

    Σ_l A_ilr = 1/n,  ∀r ∈ R, i ∈ U.

It is easy to verify that the aforementioned per-server partition is an equal partition. We are now ready to define the weak sharing incentive property.

Definition 4 (Weak sharing incentive): Allocation A satisfies the weak sharing incentive property if there exists an equal partition A' under which each user schedules no more tasks than under A, i.e.,

    N_i(A_i) ≥ N_i(A'_i),  ∀i ∈ U.

In other words, weak sharing incentive only requires the allocation to be no worse than some equal partition, without specifying its form. It is hence a more relaxed requirement than the strong sharing incentive property. The following proposition shows that DRFH satisfies the sharing incentive property in the weak sense. The proof is constructive.

Proposition 5 (Weak sharing incentive): DRFH satisfies the property of weak sharing incentive.
Proof: Let g be the global dominant share each user receives under a DRFH allocation A, and g_il the global dominant share user i receives in server l. We construct an equal partition A' under which users schedule no more tasks than under A.

Case 1: g ≥ 1/n. In this case, let A' be any equal partition. We show that each user schedules no more tasks under A' than under A. To see this, consider the DRFH allocation A. Since it is non-wasteful, the number of tasks user i schedules is

    N_i(A_i) = g / D_ir*_i ≥ 1 / (n D_ir*_i).

On the other hand, the number of tasks user i schedules under A' is at most

    N_i(A'_i) = Σ_l min_r {A'_ilr / D_ir} ≤ Σ_l A'_ilr*_i / D_ir*_i = 1 / (n D_ir*_i) ≤ N_i(A_i).

Case 2: g < 1/n. In this case, no resource has been fully allocated under A, i.e.,

    Σ_{i∈U} Σ_l A_ilr = Σ_{i∈U} Σ_l g_il d_ir ≤ Σ_{i∈U} Σ_l g_il = Σ_{i∈U} g = ng < 1

for every resource r ∈ R. Let L_lr = c_lr − Σ_{i∈U} A_ilr be the amount of resource r left unallocated in server l. Further, let L_r = Σ_l L_lr = 1 − Σ_{i∈U} Σ_l A_ilr be the total amount of resource r left unallocated. We are now ready to construct an equal partition A' based on A. Since A' should allocate each user 1/n of the total availability of every resource r, the additional amount of resource r user i needs to obtain is u_ir = 1/n − Σ_l A_ilr. It is easy to see that u_ir > 0 for all i ∈ U, r ∈ R. The fraction of the unallocated resource r demanded by user i is f_ir = u_ir / L_r. As a result, we can construct A' by reallocating the leftover resources in each server to the users, in proportion to their demands, i.e.,

    A'_ilr = A_ilr + L_lr f_ir,  ∀i ∈ U, l ∈ S, r ∈ R.

It is easy to verify that A' is an equal partition:

    Σ_l A'_ilr = Σ_l A_ilr + f_ir Σ_l L_lr = Σ_l A_ilr + (u_ir / L_r) L_r = Σ_l A_ilr + u_ir = 1/n,  ∀i ∈ U, r ∈ R.

We now compare the number of tasks scheduled for each user under the two allocations A and A'. Because A' allocates to each user at least as many resources as A does, we have N_i(A'_i) ≥ N_i(A_i) for all i. On the other hand, by the Pareto optimality of allocation A, no user can schedule more tasks without decreasing the number of tasks scheduled for others; since A' decreases no one's task count, we must have N_i(A'_i) = N_i(A_i) for all i. ∎
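The Case 2 construction above can be traced numerically on the two-server counterexample from the proof of Proposition 4, where the common dominant share is g = 12/25 < 1/2 = 1/n, so Case 2 applies. The sketch below is our own illustration (not from the paper): the particular per-server split g_il is one feasible choice we picked by hand. It checks the split against the server capacities, then builds the equal partition A' by granting each user its shortfall u_ir out of the leftover L_lr in proportion f_ir = u_ir / L_r.

```python
from fractions import Fraction as F

# Two-server counterexample (Prop. 4): normalized capacities and demands.
c = [[F(1, 5), F(2, 5)],            # server 1: (CPU, memory)
     [F(4, 5), F(3, 5)]]            # server 2
d = [[F(1), F(1)],                  # user 1: d_1 = (1, 1)
     [F(1), F(2, 3)]]               # user 2: d_2 = (1, 2/3)
n, g = 2, F(12, 25)                 # g < 1/n, so Case 2 of the proof applies

# One feasible per-server split g_il with g_i1 + g_i2 = 12/25 for both users
# (hand-picked for this example; any feasible split would do).
g_il = [[F(1, 5), F(7, 25)],        # user 1
        [F(0),    F(12, 25)]]      # user 2
A = [[[g_il[i][l] * d[i][r] for r in range(2)] for l in range(2)]
     for i in range(2)]

# Per-server feasibility: sum_i A_ilr <= c_lr.
assert all(A[0][l][r] + A[1][l][r] <= c[l][r]
           for l in range(2) for r in range(2))

# Leftovers L_lr per server and totals L_r (capacities sum to 1 per resource).
L = [[c[l][r] - A[0][l][r] - A[1][l][r] for r in range(2)] for l in range(2)]
L_tot = [L[0][r] + L[1][r] for r in range(2)]

# A'_ilr = A_ilr + L_lr * f_ir, where f_ir = u_ir / L_r and
# u_ir = 1/n - sum_l A_ilr is user i's shortfall in resource r.
Ap = [[[A[i][l][r] + L[l][r] * (F(1, n) - A[i][0][r] - A[i][1][r]) / L_tot[r]
        for r in range(2)] for l in range(2)] for i in range(2)]

# A' is an equal partition: each user holds exactly 1/n of every resource.
assert all(Ap[i][0][r] + Ap[i][1][r] == F(1, n)
           for i in range(2) for r in range(2))
```

Exact rational arithmetic (`fractions.Fraction`) is used so the equal-partition check holds with equality rather than within a floating-point tolerance.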

Discussion. Strong sharing incentive provides more predictable service isolation than weak sharing incentive does. It assures a user a priori that it can schedule at least the number of tasks it would if every server were evenly allocated. This gives users a concrete idea of the worst quality of service (QoS) they may receive, allowing them to accurately predict their computing performance. While weak sharing incentive also provides some degree of service isolation, a user cannot infer a priori the guaranteed number of tasks it can schedule from this weaker property, and therefore cannot predict its computing performance.

We note that the root cause of this degradation of service isolation is the heterogeneity among servers. When all servers have the same hardware specification, DRFH reduces to DRF, and strong sharing incentive is guaranteed. This is also the case for schedulers adopting the single-resource abstraction. For example, in Hadoop, each server is divided into several slots (e.g., reducers and mappers). The Hadoop Fair Scheduler [20] allocates these slots evenly to all users. We see that predictable service isolation is achieved: each user receives at least k_s/n slots, where k_s is the number of slots and n is the number of users.

In general, one can view weak sharing incentive as the price DRFH pays to achieve high resource utilization. In fact, naively applying the DRF allocation separately to each server retains strong sharing incentive: in each server, the DRF allocation ensures that a user can schedule at least the number of tasks it would if the server's resources were evenly allocated [11], [15]. However, as we have seen in Sec. 3.4, such a naive DRF extension may lead to extremely low resource utilization that is unacceptable. A similar problem exists for traditional schedulers adopting single-resource abstractions: by artificially dividing servers into slots, these schedulers cannot match computing demands to available resources at a fine granularity, resulting in poor resource utilization in practice [11].
For these reasons, we believe that slightly trading off the degree of service isolation for much higher resource utilization is well justified. We shall use trace-driven simulation in Sec. 6.3 to show that DRFH violates strong sharing incentive only in rare cases in the Google cluster.

4.6 Other Important Properties

In addition to the four essential properties shown in the previous subsections, DRFH provides a number of other important properties. First, since DRFH generalizes DRF to heterogeneous environments, it naturally reduces to the DRF allocation when the system contains only one server, in which case the global dominant resource defined in DRFH is exactly the dominant resource defined in DRF.

Proposition 6 (Single-server DRF): DRFH leads to the same allocation as DRF when all resources are concentrated in one server.

Next, by definition, both single-resource fairness and bottleneck fairness trivially hold for the DRFH allocation. We hence omit the proofs of the following two propositions.

Proposition 7 (Single-resource fairness): The DRFH allocation satisfies single-resource fairness.

Proposition 8 (Bottleneck fairness): The DRFH allocation satisfies bottleneck fairness.

Finally, when a user leaves the system and relinquishes all its allocations, the remaining users will not see any reduction in the number of tasks scheduled. Formally,

Proposition 9 (Population monotonicity): The DRFH allocation satisfies population monotonicity.

Proof: Let A be the resulting DRFH allocation; then for every user i and server l, A_il = g_il d_i and G_i(A_i) = g, where {g_il} and g solve (6). Suppose that user j leaves the system, changing the resulting DRFH allocation to A'. By DRFH, for every user i ≠ j and server l, we have A'_il = g'_il d_i and G_i(A'_i) = g', where {g'_il}_{i≠j} and g' solve the following optimization problem:

    max_{g'_il, i≠j}  g'
    s.t.  Σ_{i≠j} g'_il d_ir ≤ c_lr,  ∀l ∈ S, r ∈ R,        (8)
          Σ_l g'_il = g',  ∀i ≠ j.

To show N_i(A'_i) ≥ N_i(A_i) for every user i ≠ j, it is equivalent to prove G_i(A'_i) ≥ G_i(A_i).
It is easy to verify that g and {g_il}_{i≠j} satisfy all the constraints of (8) and are hence feasible for (8). As a result, g' ≥ g, which is exactly G_i(A'_i) ≥ G_i(A_i). ∎

5 PRACTICAL CONSIDERATIONS

So far, our discussion has relied on several assumptions that may not hold in a real-world system. In this section, we relax these assumptions and discuss how DRFH can be implemented in practice.

5.1 Weighted Users with a Finite Number of Tasks

In the previous sections, users are assumed to have equal weights and infinite computing demands. Both assumptions can easily be removed with minor modifications of DRFH. When users are assigned uneven weights, let w_i be the weight associated with user i. DRFH seeks an allocation that achieves weighted max-min fairness across users. Specifically, we maximize the minimum normalized global dominant share (with respect to the weights) of all users under the same resource constraints as in (4), i.e.,

    max_A  min_{i∈U}  G_i(A_i) / w_i
    s.t.   Σ_{i∈U} A_ilr ≤ c_lr,  ∀l ∈ S, r ∈ R.

When users have a finite number of tasks, the DRFH allocation is computed iteratively. In each round, DRFH increases the global dominant share allocated to all active users, until one of them has all of its tasks scheduled, after which that user becomes inactive and is no longer considered in subsequent allocation rounds. DRFH then starts a new iteration

and repeats the allocation process above, until no user is active or no more resources can be allocated. Because each iteration saturates at least one user's resource demand, the allocation completes in at most n rounds, where n is the number of users (see footnote 4). Our analysis presented in Sec. 4 also extends to weighted users with a finite number of tasks.

5.2 Scheduling Tasks as Entities

Until now, we have assumed that all tasks are divisible. In a real-world system, however, fractional tasks may not be accepted. To schedule tasks as entities, one can apply progressive filling as a simple implementation of DRFH (see footnote 5). That is, whenever there is a scheduling opportunity, the scheduler accommodates the user with the lowest global dominant share, placing its task on the first server whose remaining resources are sufficient to accommodate it. While this First-Fit algorithm offers a fairly good approximation to DRFH, we propose another simple heuristic that can lead to a better allocation with higher resource utilization.

Similar to First-Fit, the heuristic also chooses the user with the lowest global dominant share to serve. However, instead of picking the first fitting server, the heuristic chooses the best one, whose remaining resources most suitably match the demand of the user's tasks; it is hence referred to as Best-Fit DRFH. Specifically, for user i with resource demand vector D_i = (D_i1, ..., D_im)^T and a server l with available resource vector c̄_l = (c̄_l1, ..., c̄_lm)^T, where c̄_lr is the share of resource r remaining available in server l, we define the following heuristic function to quantitatively measure the fitness of user i's task for server l:

    H(i, l) = ‖ D_i / D_i1 − c̄_l / c̄_l1 ‖₁,    (9)

where ‖·‖₁ is the L1-norm. Intuitively, the smaller H(i, l) is, the more similar the resource demand vector D_i is to the server's available resource vector c̄_l, and the better fit user i's task is for server l. Best-Fit DRFH schedules user i's tasks onto the server l with the least H(i, l).
As an illustrative example, suppose that only two types of resources are concerned, CPU and memory. A CPU-heavy task of user i with resource demand vector D_i = (1/10, 1/30)^T is to be scheduled, meaning that the task requires 1/10 of the total CPU availability and 1/30 of the total memory availability of the system. Only two servers have sufficient remaining resources to accommodate this task. Server 1 has the available resource vector c̄_1 = (1/5, 1/15)^T; Server 2 has the available resource vector c̄_2 = (1/8, 1/4)^T. Intuitively, because the task is CPU-bound, it is a better fit for Server 1, which is CPU-abundant. This is indeed the case, as H(i, 1) = 0 < H(i, 2) = 5/3, and Best-Fit DRFH places the task on Server 1.

Both First-Fit and Best-Fit DRFH can easily be implemented by searching all k servers in O(k) time, which is fast enough for small- and medium-sized clusters. For a large cluster containing tens of thousands of servers, this computation can be quickly approximated by adapting the power-of-two-choices load balancing technique [21]: instead of scanning through all servers, the scheduler randomly probes two servers and places the task on the one that fits it better.

It is worth mentioning that the definition of the heuristic function (9) is not unique. In fact, one can use a more complex heuristic function than (9) to measure the fitness of a task for a server, e.g., cosine similarity [22]. However, as we shall show in the next section, Best-Fit DRFH with (9) as its heuristic function already improves utilization to a level where the system capacity is almost saturated. Therefore, the benefit of using a more complex fitness measure is very limited, at least for the Google cluster traces [8].

Footnote 4: For medium- and large-sized cloud clusters, n is in the order of thousands [3], [8].
Footnote 5: Progressive filling has also been used to implement the DRF allocation [11]. However, when the system consists of multiple heterogeneous servers, progressive filling will lead to a DRFH allocation.
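The fitness rule (9) and the worked example above can be expressed in a few lines. The sketch below is our own illustration (the function names are hypothetical): it normalizes the demand and availability vectors by their first (CPU) component, takes the L1 distance, and picks the server with the smallest H. It reproduces H(i, 1) = 0 and H(i, 2) = 5/3.

```python
from fractions import Fraction as F

def fitness(D, c_avail):
    """H(i, l) = || D_i / D_i1  -  c_l / c_l1 ||_1, as in Eq. (9)."""
    return sum(abs(D[r] / D[0] - c_avail[r] / c_avail[0])
               for r in range(len(D)))

def best_fit(D, servers):
    """Return the id of the server with the smallest H(i, l)."""
    return min(servers, key=lambda l: fitness(D, servers[l]))

D_i = (F(1, 10), F(1, 30))             # CPU-heavy task: 1/10 CPU, 1/30 memory
servers = {1: (F(1, 5), F(1, 15)),     # server 1: CPU-abundant
           2: (F(1, 8), F(1, 4))}      # server 2: memory-abundant

assert fitness(D_i, servers[1]) == 0   # demand profile matches server 1 exactly
assert fitness(D_i, servers[2]) == F(5, 3)
assert best_fit(D_i, servers) == 1     # the task is placed on server 1
```

The power-of-two-choices variant mentioned above would simply apply `fitness` to two randomly probed servers instead of scanning the whole `servers` map.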
6 TRACE-DRIVEN SIMULATION

In this section, we evaluate the performance of DRFH via extensive simulations driven by Google cluster-usage traces [8]. The traces contain resource demand/usage information of over 900 users (i.e., Google services and engineers) on a cluster of 12K servers. The server configurations are summarized in Table 1, where the CPUs and memory of each server are normalized so that the maximum server is 1. Each user submits computing jobs, divided into a number of tasks, each requiring a set of resources (i.e., CPU and memory). From the traces, we extract the computing demand information (the required amount of resources and the task running time) and use it as the demand input of the allocation algorithms under evaluation.

6.1 Dynamic Allocation

Our first evaluation focuses on the allocation fairness of the proposed Best-Fit DRFH when users dynamically join and depart the system. We simulate 3 users submitting tasks with different resource requirements to a small cluster of 100 servers, whose configurations are randomly drawn from the distribution of Google cluster servers in Table 1. User 1 joins the system at the beginning, requiring 0.2 CPU and 0.3 memory for each of its tasks. As shown in Fig. 5, since only user 1 is active at the beginning, it is allocated a 40% CPU share and a 62% memory share. This allocation continues until 200 s, at which time user 2 joins and submits CPU-heavy tasks, each requiring 0.5 CPU and 0.1 memory. Both users now compete for computing resources, leading to a DRFH allocation in which both users receive a 44% global dominant share. At 500 s, user 3 starts to submit memory-intensive tasks, each requiring 0.1 CPU and 0.3 memory. The algorithm now allocates the same global dominant share of 26% to all three users, until user 1 finishes its tasks and departs at 1080 s. After that, only users 2 and 3 share the system, each receiving the same share of their global dominant resources. A similar process repeats until all users finish their tasks.
Throughout the simulation, we see that the Best-Fit DRFH algorithm precisely achieves the DRFH allocation at all times.
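To give a feel for how these equalized dominant shares arise, the sketch below runs a DRF-style water-filling on a single aggregated pool. This is our own simplification, not the paper's algorithm: it ignores per-server placement (so it computes DRF rather than DRFH) and uses hypothetical pool capacities, so the percentage it prints is not the 44% or 26% of the actual simulation.

```python
# Hypothetical aggregate pool (CPU units, memory units); the real totals
# depend on the randomly drawn server configurations.
POOL = (30.0, 36.0)

# Per-task demands of the three users from the simulation above.
TASKS = {1: (0.2, 0.3), 2: (0.5, 0.1), 3: (0.1, 0.3)}

def equal_dominant_share(pool, tasks):
    """Largest common dominant share g such that, when every active user
    holds exactly g of its dominant resource, no pooled resource overflows."""
    load = [0.0] * len(pool)
    for task in tasks.values():
        frac = [task[r] / pool[r] for r in range(len(pool))]  # normalized demand
        dom = max(frac)                                       # dominant fraction
        for r in range(len(pool)):
            load[r] += frac[r] / dom  # resource r used per unit dominant share
    return 1.0 / max(load)

g = equal_dominant_share(POOL, TASKS)
print(f"common dominant share: {g:.1%}")  # ~45.5% for these made-up capacities
```

When a user departs, rerunning `equal_dominant_share` on the remaining users raises the common share, mirroring the step changes visible in Fig. 5.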

Fig. 5. CPU, memory, and global dominant share of the three users on the 100-server system.

TABLE 2
Resource utilization of the Slots scheduler with different slot sizes.

    Number of Slots          CPU Utilization    Memory Utilization
    10 per maximum server    35.1%              23.4%
    12 per maximum server    42.2%              27.4%
    14 per maximum server    43.9%              28.0%
    16 per maximum server    45.4%              24.2%
    20 per maximum server    40.6%              20.0%

6.2 Resource Utilization

We next evaluate the resource utilization of the proposed Best-Fit DRFH algorithm. We take the 24-hour computing demand data from the Google traces and simulate it on a smaller cloud computing system of 2,000 servers, so that fairness becomes relevant. The server configurations are randomly drawn from the distribution of Google cluster servers in Table 1. We compare Best-Fit DRFH against two benchmarks: the traditional Slots scheduler, which schedules tasks onto slots of servers (e.g., the Hadoop Fair Scheduler [20]), and First-Fit DRFH, which chooses the first server that fits the task. For the former, we try different slot sizes and choose the one with the highest CPU and memory utilization. Table 2 summarizes our observations, where dividing the maximum server (1 CPU and 1 memory in Table 1) into 14 slots leads to the highest overall utilization.

Fig. 6 depicts the time series of CPU and memory utilization of the three algorithms. We see that the two DRFH implementations significantly outperform the traditional Slots scheduler with much higher resource utilization, mainly because the latter ignores the heterogeneity of both servers and workloads.

Fig. 6. Time series of CPU and memory utilization.

(a) CDF of job completion times. (b) Job completion time reduction, by job size (tasks). Fig. 7.
DRFH improvements in job completion times over the Slots scheduler.

This observation is consistent with findings in the homogeneous environment, where all servers have the same hardware configuration [11]. As for the two DRFH implementations, we see that Best-Fit DRFH leads to uniformly higher resource utilization than the First-Fit alternative at all times. The high resource utilization of Best-Fit DRFH naturally translates into the shorter job completion times shown in Fig. 7a, where the CDFs of job completion times for both Best-Fit DRFH and the Slots scheduler are depicted. Fig. 7b offers a more detailed breakdown, where jobs are classified into 5 categories based on the number of their computing tasks, and the mean completion time reduction is computed for each category.

arXiv preprint version: Dominant Resource Fairness in Cloud Computing Systems with Heterogeneous Servers. Wei Wang, Baochun Li, Ben Liang. Department of Electrical and Computer Engineering, University of Toronto. arXiv:1308.0083v1 [cs.DC], 1 Aug 2013.


More information

Staff Paper. Farm Savings Accounts: Examining Income Variability, Eligibility, and Benefits. Brent Gloy, Eddy LaDue, and Charles Cuykendall

Staff Paper. Farm Savings Accounts: Examining Income Variability, Eligibility, and Benefits. Brent Gloy, Eddy LaDue, and Charles Cuykendall SP 2005-02 August 2005 Staff Paper Department of Appled Economcs and Management Cornell Unversty, Ithaca, New York 14853-7801 USA Farm Savngs Accounts: Examnng Income Varablty, Elgblty, and Benefts Brent

More information

Politecnico di Torino. Porto Institutional Repository

Politecnico di Torino. Porto Institutional Repository Poltecnco d Torno Porto Insttutonal Repostory [Artcle] A cost-effectve cloud computng framework for acceleratng multmeda communcaton smulatons Orgnal Ctaton: D. Angel, E. Masala (2012). A cost-effectve

More information

Minimal Coding Network With Combinatorial Structure For Instantaneous Recovery From Edge Failures

Minimal Coding Network With Combinatorial Structure For Instantaneous Recovery From Edge Failures Mnmal Codng Network Wth Combnatoral Structure For Instantaneous Recovery From Edge Falures Ashly Joseph 1, Mr.M.Sadsh Sendl 2, Dr.S.Karthk 3 1 Fnal Year ME CSE Student Department of Computer Scence Engneerng

More information

When Network Effect Meets Congestion Effect: Leveraging Social Services for Wireless Services

When Network Effect Meets Congestion Effect: Leveraging Social Services for Wireless Services When Network Effect Meets Congeston Effect: Leveragng Socal Servces for Wreless Servces aowen Gong School of Electrcal, Computer and Energy Engeerng Arzona State Unversty Tempe, AZ 8587, USA xgong9@asuedu

More information

Schedulability Bound of Weighted Round Robin Schedulers for Hard Real-Time Systems

Schedulability Bound of Weighted Round Robin Schedulers for Hard Real-Time Systems Schedulablty Bound of Weghted Round Robn Schedulers for Hard Real-Tme Systems Janja Wu, Jyh-Charn Lu, and We Zhao Department of Computer Scence, Texas A&M Unversty {janjaw, lu, zhao}@cs.tamu.edu Abstract

More information

Cloud-based Social Application Deployment using Local Processing and Global Distribution

Cloud-based Social Application Deployment using Local Processing and Global Distribution Cloud-based Socal Applcaton Deployment usng Local Processng and Global Dstrbuton Zh Wang *, Baochun L, Lfeng Sun *, and Shqang Yang * * Bejng Key Laboratory of Networked Multmeda Department of Computer

More information

A FASTER EXTERNAL SORTING ALGORITHM USING NO ADDITIONAL DISK SPACE

A FASTER EXTERNAL SORTING ALGORITHM USING NO ADDITIONAL DISK SPACE 47 A FASTER EXTERAL SORTIG ALGORITHM USIG O ADDITIOAL DISK SPACE Md. Rafqul Islam +, Mohd. oor Md. Sap ++, Md. Sumon Sarker +, Sk. Razbul Islam + + Computer Scence and Engneerng Dscplne, Khulna Unversty,

More information

Fair and Efficient User-Network Association Algorithm for Multi-Technology Wireless Networks

Fair and Efficient User-Network Association Algorithm for Multi-Technology Wireless Networks Far and Effcent User-Network Assocaton Algorthm for Mult-Technology Wreless Networks Perre Coucheney, Cornne Touat and Bruno Gaujal INRIA Rhône-Alpes and LIG, MESCAL project, Grenoble France, {perre.coucheney,

More information

An MILP model for planning of batch plants operating in a campaign-mode

An MILP model for planning of batch plants operating in a campaign-mode An MILP model for plannng of batch plants operatng n a campagn-mode Yanna Fumero Insttuto de Desarrollo y Dseño CONICET UTN yfumero@santafe-concet.gov.ar Gabrela Corsano Insttuto de Desarrollo y Dseño

More information

Graph Theory and Cayley s Formula

Graph Theory and Cayley s Formula Graph Theory and Cayley s Formula Chad Casarotto August 10, 2006 Contents 1 Introducton 1 2 Bascs and Defntons 1 Cayley s Formula 4 4 Prüfer Encodng A Forest of Trees 7 1 Introducton In ths paper, I wll

More information

Survey on Virtual Machine Placement Techniques in Cloud Computing Environment

Survey on Virtual Machine Placement Techniques in Cloud Computing Environment Survey on Vrtual Machne Placement Technques n Cloud Computng Envronment Rajeev Kumar Gupta and R. K. Paterya Department of Computer Scence & Engneerng, MANIT, Bhopal, Inda ABSTRACT In tradtonal data center

More information

Distributed Optimal Contention Window Control for Elastic Traffic in Wireless LANs

Distributed Optimal Contention Window Control for Elastic Traffic in Wireless LANs Dstrbuted Optmal Contenton Wndow Control for Elastc Traffc n Wreless LANs Yalng Yang, Jun Wang and Robn Kravets Unversty of Illnos at Urbana-Champagn { yyang8, junwang3, rhk@cs.uuc.edu} Abstract Ths paper

More information

Logical Development Of Vogel s Approximation Method (LD-VAM): An Approach To Find Basic Feasible Solution Of Transportation Problem

Logical Development Of Vogel s Approximation Method (LD-VAM): An Approach To Find Basic Feasible Solution Of Transportation Problem INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH VOLUME, ISSUE, FEBRUARY ISSN 77-866 Logcal Development Of Vogel s Approxmaton Method (LD- An Approach To Fnd Basc Feasble Soluton Of Transportaton

More information

Availability-Based Path Selection and Network Vulnerability Assessment

Availability-Based Path Selection and Network Vulnerability Assessment Avalablty-Based Path Selecton and Network Vulnerablty Assessment Song Yang, Stojan Trajanovsk and Fernando A. Kupers Delft Unversty of Technology, The Netherlands {S.Yang, S.Trajanovsk, F.A.Kupers}@tudelft.nl

More information

Feasibility of Using Discriminate Pricing Schemes for Energy Trading in Smart Grid

Feasibility of Using Discriminate Pricing Schemes for Energy Trading in Smart Grid Feasblty of Usng Dscrmnate Prcng Schemes for Energy Tradng n Smart Grd Wayes Tushar, Chau Yuen, Bo Cha, Davd B. Smth, and H. Vncent Poor Sngapore Unversty of Technology and Desgn, Sngapore 138682. Emal:

More information

IWFMS: An Internal Workflow Management System/Optimizer for Hadoop

IWFMS: An Internal Workflow Management System/Optimizer for Hadoop IWFMS: An Internal Workflow Management System/Optmzer for Hadoop Lan Lu, Yao Shen Department of Computer Scence and Engneerng Shangha JaoTong Unversty Shangha, Chna lustrve@gmal.com, yshen@cs.sjtu.edu.cn

More information

Period and Deadline Selection for Schedulability in Real-Time Systems

Period and Deadline Selection for Schedulability in Real-Time Systems Perod and Deadlne Selecton for Schedulablty n Real-Tme Systems Thdapat Chantem, Xaofeng Wang, M.D. Lemmon, and X. Sharon Hu Department of Computer Scence and Engneerng, Department of Electrcal Engneerng

More information

1. Fundamentals of probability theory 2. Emergence of communication traffic 3. Stochastic & Markovian Processes (SP & MP)

1. Fundamentals of probability theory 2. Emergence of communication traffic 3. Stochastic & Markovian Processes (SP & MP) 6.3 / -- Communcaton Networks II (Görg) SS20 -- www.comnets.un-bremen.de Communcaton Networks II Contents. Fundamentals of probablty theory 2. Emergence of communcaton traffc 3. Stochastc & Markovan Processes

More information

On the Interaction between Load Balancing and Speed Scaling

On the Interaction between Load Balancing and Speed Scaling On the Interacton between Load Balancng and Speed Scalng Ljun Chen and Na L Abstract Speed scalng has been wdely adopted n computer and communcaton systems, n partcular, to reduce energy consumpton. An

More information

2008/8. An integrated model for warehouse and inventory planning. Géraldine Strack and Yves Pochet

2008/8. An integrated model for warehouse and inventory planning. Géraldine Strack and Yves Pochet 2008/8 An ntegrated model for warehouse and nventory plannng Géraldne Strack and Yves Pochet CORE Voe du Roman Pays 34 B-1348 Louvan-la-Neuve, Belgum. Tel (32 10) 47 43 04 Fax (32 10) 47 43 01 E-mal: corestat-lbrary@uclouvan.be

More information

Approximation algorithms for allocation problems: Improving the factor of 1 1/e

Approximation algorithms for allocation problems: Improving the factor of 1 1/e Approxmaton algorthms for allocaton problems: Improvng the factor of 1 1/e Urel Fege Mcrosoft Research Redmond, WA 98052 urfege@mcrosoft.com Jan Vondrák Prnceton Unversty Prnceton, NJ 08540 jvondrak@gmal.com

More information

In some supply chains, materials are ordered periodically according to local information. This paper investigates

In some supply chains, materials are ordered periodically according to local information. This paper investigates MANUFACTURING & SRVIC OPRATIONS MANAGMNT Vol. 12, No. 3, Summer 2010, pp. 430 448 ssn 1523-4614 essn 1526-5498 10 1203 0430 nforms do 10.1287/msom.1090.0277 2010 INFORMS Improvng Supply Chan Performance:

More information

Lecture 7 March 20, 2002

Lecture 7 March 20, 2002 MIT 8.996: Topc n TCS: Internet Research Problems Sprng 2002 Lecture 7 March 20, 2002 Lecturer: Bran Dean Global Load Balancng Scrbe: John Kogel, Ben Leong In today s lecture, we dscuss global load balancng

More information

Learning the Best K-th Channel for QoS Provisioning in Cognitive Networks

Learning the Best K-th Channel for QoS Provisioning in Cognitive Networks 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

What should (public) health insurance cover?

What should (public) health insurance cover? Journal of Health Economcs 26 (27) 251 262 What should (publc) health nsurance cover? Mchael Hoel Department of Economcs, Unversty of Oslo, P.O. Box 195 Blndern, N-317 Oslo, Norway Receved 29 Aprl 25;

More information

PAS: A Packet Accounting System to Limit the Effects of DoS & DDoS. Debish Fesehaye & Klara Naherstedt University of Illinois-Urbana Champaign

PAS: A Packet Accounting System to Limit the Effects of DoS & DDoS. Debish Fesehaye & Klara Naherstedt University of Illinois-Urbana Champaign PAS: A Packet Accountng System to Lmt the Effects of DoS & DDoS Debsh Fesehaye & Klara Naherstedt Unversty of Illnos-Urbana Champagn DoS and DDoS DDoS attacks are ncreasng threats to our dgtal world. Exstng

More information

8.5 UNITARY AND HERMITIAN MATRICES. The conjugate transpose of a complex matrix A, denoted by A*, is given by

8.5 UNITARY AND HERMITIAN MATRICES. The conjugate transpose of a complex matrix A, denoted by A*, is given by 6 CHAPTER 8 COMPLEX VECTOR SPACES 5. Fnd the kernel of the lnear transformaton gven n Exercse 5. In Exercses 55 and 56, fnd the mage of v, for the ndcated composton, where and are gven by the followng

More information

greatest common divisor

greatest common divisor 4. GCD 1 The greatest common dvsor of two ntegers a and b (not both zero) s the largest nteger whch s a common factor of both a and b. We denote ths number by gcd(a, b), or smply (a, b) when there s no

More information

A Lyapunov Optimization Approach to Repeated Stochastic Games

A Lyapunov Optimization Approach to Repeated Stochastic Games PROC. ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING, OCT. 2013 1 A Lyapunov Optmzaton Approach to Repeated Stochastc Games Mchael J. Neely Unversty of Southern Calforna http://www-bcf.usc.edu/

More information

Answer: A). There is a flatter IS curve in the high MPC economy. Original LM LM after increase in M. IS curve for low MPC economy

Answer: A). There is a flatter IS curve in the high MPC economy. Original LM LM after increase in M. IS curve for low MPC economy 4.02 Quz Solutons Fall 2004 Multple-Choce Questons (30/00 ponts) Please, crcle the correct answer for each of the followng 0 multple-choce questons. For each queston, only one of the answers s correct.

More information

) of the Cell class is created containing information about events associated with the cell. Events are added to the Cell instance

) of the Cell class is created containing information about events associated with the cell. Events are added to the Cell instance Calbraton Method Instances of the Cell class (one nstance for each FMS cell) contan ADC raw data and methods assocated wth each partcular FMS cell. The calbraton method ncludes event selecton (Class Cell

More information

INSTITUT FÜR INFORMATIK

INSTITUT FÜR INFORMATIK INSTITUT FÜR INFORMATIK Schedulng jobs on unform processors revsted Klaus Jansen Chrstna Robene Bercht Nr. 1109 November 2011 ISSN 2192-6247 CHRISTIAN-ALBRECHTS-UNIVERSITÄT ZU KIEL Insttut für Informat

More information

Energy Efficient Routing in Ad Hoc Disaster Recovery Networks

Energy Efficient Routing in Ad Hoc Disaster Recovery Networks Energy Effcent Routng n Ad Hoc Dsaster Recovery Networks Gl Zussman and Adran Segall Department of Electrcal Engneerng Technon Israel Insttute of Technology Hafa 32000, Israel {glz@tx, segall@ee}.technon.ac.l

More information

Open Access A Load Balancing Strategy with Bandwidth Constraint in Cloud Computing. Jing Deng 1,*, Ping Guo 2, Qi Li 3, Haizhu Chen 1

Open Access A Load Balancing Strategy with Bandwidth Constraint in Cloud Computing. Jing Deng 1,*, Ping Guo 2, Qi Li 3, Haizhu Chen 1 Send Orders for Reprnts to reprnts@benthamscence.ae The Open Cybernetcs & Systemcs Journal, 2014, 8, 115-121 115 Open Access A Load Balancng Strategy wth Bandwdth Constrant n Cloud Computng Jng Deng 1,*,

More information

Second-Best Combinatorial Auctions The Case of the Pricing-Per-Column Mechanism

Second-Best Combinatorial Auctions The Case of the Pricing-Per-Column Mechanism Proceedngs of the 4th Hawa Internatonal Conference on System Scences - 27 Second-Best Combnatoral Auctons The Case of the Prcng-Per-Column Mechansm Drk Neumann, Börn Schnzler, Ilka Weber, Chrstof Wenhardt

More information

Chapter 4 ECONOMIC DISPATCH AND UNIT COMMITMENT

Chapter 4 ECONOMIC DISPATCH AND UNIT COMMITMENT Chapter 4 ECOOMIC DISATCH AD UIT COMMITMET ITRODUCTIO A power system has several power plants. Each power plant has several generatng unts. At any pont of tme, the total load n the system s met by the

More information

行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告

行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告 行 政 院 國 家 科 學 委 員 會 補 助 專 題 研 究 計 畫 成 果 報 告 期 中 進 度 報 告 畫 類 別 : 個 別 型 計 畫 半 導 體 產 業 大 型 廠 房 之 設 施 規 劃 計 畫 編 號 :NSC 96-2628-E-009-026-MY3 執 行 期 間 : 2007 年 8 月 1 日 至 2010 年 7 月 31 日 計 畫 主 持 人 : 巫 木 誠 共 同

More information

J. Parallel Distrib. Comput. Environment-conscious scheduling of HPC applications on distributed Cloud-oriented data centers

J. Parallel Distrib. Comput. Environment-conscious scheduling of HPC applications on distributed Cloud-oriented data centers J. Parallel Dstrb. Comput. 71 (2011) 732 749 Contents lsts avalable at ScenceDrect J. Parallel Dstrb. Comput. ournal homepage: www.elsever.com/locate/pdc Envronment-conscous schedulng of HPC applcatons

More information

Power-of-Two Policies for Single- Warehouse Multi-Retailer Inventory Systems with Order Frequency Discounts

Power-of-Two Policies for Single- Warehouse Multi-Retailer Inventory Systems with Order Frequency Discounts Power-of-wo Polces for Sngle- Warehouse Mult-Retaler Inventory Systems wth Order Frequency Dscounts José A. Ventura Pennsylvana State Unversty (USA) Yale. Herer echnon Israel Insttute of echnology (Israel)

More information

Dynamic Online-Advertising Auctions as Stochastic Scheduling

Dynamic Online-Advertising Auctions as Stochastic Scheduling Dynamc Onlne-Advertsng Auctons as Stochastc Schedulng Isha Menache and Asuman Ozdaglar Massachusetts Insttute of Technology {sha,asuman}@mt.edu R. Srkant Unversty of Illnos at Urbana-Champagn rsrkant@llnos.edu

More information

Enterprise Master Patient Index

Enterprise Master Patient Index Enterprse Master Patent Index Healthcare data are captured n many dfferent settngs such as hosptals, clncs, labs, and physcan offces. Accordng to a report by the CDC, patents n the Unted States made an

More information

FORMAL ANALYSIS FOR REAL-TIME SCHEDULING

FORMAL ANALYSIS FOR REAL-TIME SCHEDULING FORMAL ANALYSIS FOR REAL-TIME SCHEDULING Bruno Dutertre and Vctora Stavrdou, SRI Internatonal, Menlo Park, CA Introducton In modern avoncs archtectures, applcaton software ncreasngly reles on servces provded

More information

Multi-class Multi-Server Threshold-based Systems: a. Study of Non-instantaneous Server Activation

Multi-class Multi-Server Threshold-based Systems: a. Study of Non-instantaneous Server Activation Mult-class Mult-Server Threshold-based Systems: a Study of Non-nstantaneous Server Actvaton 1 Cheng-Fu Chou, Leana Golubchk, and John C. S. Lu Abstract In ths paper, we consder performance evaluaton of

More information

Efficient Striping Techniques for Variable Bit Rate Continuous Media File Servers æ

Efficient Striping Techniques for Variable Bit Rate Continuous Media File Servers æ Effcent Strpng Technques for Varable Bt Rate Contnuous Meda Fle Servers æ Prashant J. Shenoy Harrck M. Vn Department of Computer Scence, Department of Computer Scences, Unversty of Massachusetts at Amherst

More information

AD-SHARE: AN ADVERTISING METHOD IN P2P SYSTEMS BASED ON REPUTATION MANAGEMENT

AD-SHARE: AN ADVERTISING METHOD IN P2P SYSTEMS BASED ON REPUTATION MANAGEMENT 1 AD-SHARE: AN ADVERTISING METHOD IN P2P SYSTEMS BASED ON REPUTATION MANAGEMENT Nkos Salamanos, Ev Alexogann, Mchals Vazrganns Department of Informatcs, Athens Unversty of Economcs and Busness salaman@aueb.gr,

More information

Extending Probabilistic Dynamic Epistemic Logic

Extending Probabilistic Dynamic Epistemic Logic Extendng Probablstc Dynamc Epstemc Logc Joshua Sack May 29, 2008 Probablty Space Defnton A probablty space s a tuple (S, A, µ), where 1 S s a set called the sample space. 2 A P(S) s a σ-algebra: a set

More information

9.1 The Cumulative Sum Control Chart

9.1 The Cumulative Sum Control Chart Learnng Objectves 9.1 The Cumulatve Sum Control Chart 9.1.1 Basc Prncples: Cusum Control Chart for Montorng the Process Mean If s the target for the process mean, then the cumulatve sum control chart s

More information

VoIP over Multiple IEEE 802.11 Wireless LANs

VoIP over Multiple IEEE 802.11 Wireless LANs SUBMITTED TO IEEE TRANSACTIONS ON MOBILE COMPUTING 1 VoIP over Multple IEEE 80.11 Wreless LANs An Chan, Graduate Student Member, IEEE, Soung Chang Lew, Senor Member, IEEE Abstract IEEE 80.11 WLAN has hgh

More information

CHOLESTEROL REFERENCE METHOD LABORATORY NETWORK. Sample Stability Protocol

CHOLESTEROL REFERENCE METHOD LABORATORY NETWORK. Sample Stability Protocol CHOLESTEROL REFERENCE METHOD LABORATORY NETWORK Sample Stablty Protocol Background The Cholesterol Reference Method Laboratory Network (CRMLN) developed certfcaton protocols for total cholesterol, HDL

More information

Dynamic Pricing for Smart Grid with Reinforcement Learning

Dynamic Pricing for Smart Grid with Reinforcement Learning Dynamc Prcng for Smart Grd wth Renforcement Learnng Byung-Gook Km, Yu Zhang, Mhaela van der Schaar, and Jang-Won Lee Samsung Electroncs, Suwon, Korea Department of Electrcal Engneerng, UCLA, Los Angeles,

More information

Real-Time Process Scheduling

Real-Time Process Scheduling Real-Tme Process Schedulng ktw@cse.ntu.edu.tw (Real-Tme and Embedded Systems Laboratory) Independent Process Schedulng Processes share nothng but CPU Papers for dscussons: C.L. Lu and James. W. Layland,

More information

Logistic Regression. Lecture 4: More classifiers and classes. Logistic regression. Adaboost. Optimization. Multiple class classification

Logistic Regression. Lecture 4: More classifiers and classes. Logistic regression. Adaboost. Optimization. Multiple class classification Lecture 4: More classfers and classes C4B Machne Learnng Hlary 20 A. Zsserman Logstc regresson Loss functons revsted Adaboost Loss functons revsted Optmzaton Multple class classfcaton Logstc Regresson

More information

Economic-Robust Transmission Opportunity Auction in Multi-hop Wireless Networks

Economic-Robust Transmission Opportunity Auction in Multi-hop Wireless Networks Economc-Robust Transmsson Opportunty Aucton n Mult-hop Wreless Networks Mng L, Pan L, Mao Pan, and Jnyuan Sun Department of Electrcal and Computer Engneerng, Msssspp State Unversty, Msssspp State, MS 39762

More information

Optimal resource capacity management for stochastic networks

Optimal resource capacity management for stochastic networks Submtted for publcaton. Optmal resource capacty management for stochastc networks A.B. Deker H. Mlton Stewart School of ISyE, Georga Insttute of Technology, Atlanta, GA 30332, ton.deker@sye.gatech.edu

More information

Network-Wide Load Balancing Routing With Performance Guarantees

Network-Wide Load Balancing Routing With Performance Guarantees Network-Wde Load Balancng Routng Wth Performance Guarantees Kartk Gopalan Tz-cker Chueh Yow-Jan Ln Florda State Unversty Stony Brook Unversty Telcorda Research kartk@cs.fsu.edu chueh@cs.sunysb.edu yjln@research.telcorda.com

More information

On File Delay Minimization for Content Uploading to Media Cloud via Collaborative Wireless Network

On File Delay Minimization for Content Uploading to Media Cloud via Collaborative Wireless Network On Fle Delay Mnmzaton for Content Uploadng to Meda Cloud va Collaboratve Wreless Network Ge Zhang and Yonggang Wen School of Computer Engneerng Nanyang Technologcal Unversty Sngapore Emal: {zh0001ge, ygwen}@ntu.edu.sg

More information

A Programming Model for the Cloud Platform

A Programming Model for the Cloud Platform Internatonal Journal of Advanced Scence and Technology A Programmng Model for the Cloud Platform Xaodong Lu School of Computer Engneerng and Scence Shangha Unversty, Shangha 200072, Chna luxaodongxht@qq.com

More information