The Packing Server for Real-Time Scheduling of MapReduce Workflows


The Packing Server for Real-Time Scheduling of MapReduce Workflows

Shen Li, Shaohan Hu, Tarek Abdelzaher
University of Illinois at Urbana-Champaign
{shenli3, shu7,

Abstract—This paper develops new schedulability bounds for a simplified MapReduce workflow model. MapReduce is a distributed computing paradigm, deployed in industry for over a decade. Different from conventional multiprocessor platforms, MapReduce deployments usually span thousands of machines, and a MapReduce job may contain as many as tens of thousands of parallel segments. State-of-the-art MapReduce workflow schedulers operate in a best-effort fashion, but the need for real-time operation has grown with the emergence of real-time analytic applications. MapReduce workflow details can be captured by the generalized parallel task model from recent real-time literature. Under this model, the best-known result guarantees schedulability if the task set utilization stays below 50% of total capacity, and the deadline to critical path length ratio, which we call the stretch, surpasses 2. This paper improves this bound further by introducing a hierarchical scheduling scheme based on the novel notion of a Packing Server, inspired by servers for aperiodic tasks. The Packing Server consists of multiple periodically replenished budgets that can execute in parallel and that appear as independent tasks to the underlying scheduler. Hence, the original problem of scheduling MapReduce workflows reduces to that of scheduling independent tasks. We prove that the utilization bound for schedulability of MapReduce workflows is U_B (1 − 1/(βλ)), where U_B is the utilization bound of the underlying independent task scheduling policy, λ is the minimum stretch among the workflow tasks, and β is a tunable parameter that controls the maximum individual budget utilization. By leveraging past schedulability results for independent tasks on multiprocessors, we prove schedulable utilization of DAG workflows above 50% of total capacity, when the number of processors is large and the largest server budget is (sufficiently) smaller than its deadline. This surpasses the best known bounds for the generalized parallel task model.
Our evaluation using a Yahoo! MapReduce trace as well as a physical cluster of 46 machines confirms the validity of the new utilization bound for MapReduce workflows.

I. INTRODUCTION

The past decade has seen MapReduce [1–5] become the dominant distributed computing paradigm in industry. The importance of meeting deadlines of MapReduce workflows has grown in recent years as well, driven by the advent of real-time analytics [6–16]. The success of the MapReduce community in addressing real-time constraints, however, remains limited due to the inherent difficulty of the workflow scheduling problem on parallel resources. On the other hand, in the real-time scheduling literature, recent results on schedulability of generalized parallel tasks do not offer high platform utilization. The prospect of improving schedulability bounds of generalized parallel tasks on multiprocessors in the subcases relevant to MapReduce workflows motivates the work reported in this paper. The MapReduce distributed computing paradigm splits source data into independent chunks and processes them using two phases: the map phase applies the map function onto each chunk, generating intermediate key-value pairs, while the reduce phase aggregates and summarizes those key-value pairs based on their keys. The reduce phase cannot start until the map phase finishes, and both phases can be parallelized to run on a large number of slots, where a slot is a resource unit in MapReduce clusters. MapReduce deployments usually span thousands of machines, connected by a high-speed and high-bandwidth intranet. Assuming a bounded network delay (that we can subtract from the end-to-end deadline), the platform acts as a very large multiprocessor. One MapReduce job may contain tens of thousands of parallel segments [17, 18]. Due to input/output dependencies that are often required to carry out complex algorithms, MapReduce jobs usually form Directed Acyclic Graphs (DAGs), called MapReduce workflows. Many data processing algorithms used in the context of MapReduce workflows are bulk algorithms (as opposed to incremental-update algorithms).
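To make the two phases concrete, here is a minimal, purely in-memory sketch of the paradigm on a hypothetical word-count job (no Hadoop involved; all names are illustrative). Mappers emit intermediate key-value pairs per chunk; the reduce phase starts only after every mapper has finished, and summarizes the values grouped by key.

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    # map: emit an intermediate (key, value) pair per word in the chunk
    return [(word, 1) for word in chunk.split()]

def reduce_phase(key, values):
    # reduce: summarize all intermediate values that share a key
    return key, sum(values)

chunks = ["real time analytics", "real time scheduling"]

# Mappers run (conceptually in parallel), one per chunk; the reduce phase
# cannot start before all mappers finish.
intermediate = list(chain.from_iterable(map_phase(c) for c in chunks))
groups = defaultdict(list)
for key, value in intermediate:
    groups[key].append(value)
counts = dict(reduce_phase(k, vs) for k, vs in groups.items())
# counts == {"real": 2, "time": 2, "analytics": 1, "scheduling": 1}
```

In a real deployment, each mapper and reducer call above would be a segment occupying one slot, and the grouping step would be the shuffle over the interconnect.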
They require a bulk of data to be present at once. This leads to a periodic invocation model, where a volume of data is first collected within the current time-slice, and then the MapReduce workflow is invoked. Being able to meet workflow deadlines is often of crucial importance to businesses, because applications supported by production MapReduce workflows, such as advertisement placement optimizations, user graph storage partitions, and personalized content recommendations, usually directly affect site performance and company revenue. State-of-the-art MapReduce workflow schedulers, such as Oozie [19] and WOHA [18], operate in a best-effort fashion, offering little guarantee on workflow completion times. With the surge of interest in real-time workflow execution, recent work addressed scheduling extensions that offer resource partitioning [20, 21], reduce preemption cost to support prioritization [22], or take deadlines as input to the scheduler [11, 12]. However, these attempts fall short of offering timing guarantees. Administrators cannot tell, quantitatively, what maximum utilization their MapReduce clusters can bear before jobs start lagging too far behind. This calls for an analytically well-motivated scheduling and admission control policy, which is the topic of this paper. This paper makes two contributions. From the perspective of MapReduce applications, we offer an analytic result and a run-time mechanism to guarantee schedulability of MapReduce workflows as long as a schedulability bound is met.

From the perspective of real-time foundations, we prove the best known bound for schedulability of the generalized parallel task model in a special subcase of relevance to MapReduce applications. The contributions rest on the novel idea of the Packing Server. It constitutes a run-time mechanism that makes concurrent precedence-constrained workflows look like independent periodic tasks to the underlying MapReduce scheduler. We then derive a conversion factor that expresses a bound for schedulability of MapReduce workflows as a function of the underlying utilization bound for schedulability of independent periodic tasks. Details of a MapReduce workflow can be captured by a generalized parallel task model [23]. Among the metrics used in existing work, utilization bounds and capacity augmentation bounds give rise to efficient admission control policies, as they only require simple information about the task set and the platform. The best-known result for schedulability of workflow tasks that is amenable to a simple admission control policy is a capacity augmentation bound of 2 for implicit-deadline task models using a federated scheduling strategy [24]. In order to guarantee schedulability, this bound requires the task set utilization to be less than 50% of total capacity, and the deadline to critical path length ratio, which we call the stretch (λ), to be larger than 2. This constraint (only 50% of capacity) may be too restrictive for some systems. Fortunately, more advances have been made on schedulability of independent task sets, achieving much higher utilization bounds, especially when the number of processors is large and the individual units of work are small. For example, López et al. [25] proved that first-fit partitioned EDF (EDF-FF) can schedule any system of independent periodic tasks on m processors, given that the total utilization stays below U_B = (βm + 1)/(β + 1), where β is the inverse of the maximum individual task utilization (i.e., β = ⌊1/u_max⌋). As another example, global EDF scheduling [26] guarantees to meet all deadlines if the total task set utilization is less than U_B = m − (m − 1) u_max.
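The two cited independent-task bounds can be evaluated directly. Below is a small sketch (hypothetical function names; the formulas are the ones quoted above from [25] and [26]) showing how both bounds approach full capacity as the maximum per-task utilization shrinks.

```python
from math import floor

def edf_ff_bound(m, u_max):
    """Lopez et al. [25]: EDF-FF schedules any independent implicit-deadline
    task set on m processors if total utilization stays below
    (beta*m + 1)/(beta + 1), where beta = floor(1/u_max)."""
    beta = floor(1 / u_max)
    return (beta * m + 1) / (beta + 1)

def gedf_bound(m, u_max):
    """Global EDF [26]: all deadlines are met if total utilization is below
    m - (m - 1)*u_max."""
    return m - (m - 1) * u_max

# Both bounds approach 100% of capacity as u_max shrinks:
m = 1000
for u_max in (0.5, 0.1, 0.01):
    print(u_max, edf_ff_bound(m, u_max) / m, gedf_bound(m, u_max) / m)
```

For u_max = 0.5 the EDF-FF bound is already about two thirds of capacity, and for u_max = 0.01 both bounds exceed 98% of capacity, which is the observation that motivates capping individual budget utilization in this paper.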
Note that, for sufficiently small u_max, the total schedulable utilization is larger than m/2 (i.e., larger than 50% of total capacity). In fact, the bound approaches 100% of capacity as 1/u_max increases. These observations suggest that task set abstractions or transformations that make workflows look like independent tasks may improve schedulability. The Packing Server mechanism, presented in this paper, is inspired by the above observation. Each Packing Server consists of a number of budgets dedicated to a given MapReduce workflow. The MapReduce system schedules these budgets as independent tasks. When invoked, the budget runs MapReduce segments in a manner that respects workflow precedence constraints. Hence, the original problem of analyzing schedulability of MapReduce workflows on multiprocessors is translated into the well-known problem of analyzing schedulability of independent tasks. The utilization bounds for the latter are well known both for EDF and fixed-priority scheduling, as well as for both partitioned and global schedulers. We prove a conversion factor between the utilization bound for schedulability of MapReduce workflows achieved by our scheme and the utilization bound of the underlying scheduler for independent tasks. Namely, the MapReduce workflow set is schedulable if its utilization is below U_B (1 − 1/(βλ)). In the following, we shall use the convention of expressing utilization as a percentage of total cluster capacity. Hence, for example, for a cluster of m machines, we shall say 50% when we mean m/2, and will refer to it as the cluster utilization. Since the deadlines for MapReduce workflows are typically large (e.g., hours) and the clusters are big, it is common that MapReduce workflows enjoy a large stretch, λ, leading to a high utilization bound. Hence, we prove the best known results for scheduling MapReduce DAGs on multiprocessors. The paper describes how to size Packing Servers, derives the schedulability bounds attained, and presents the policies used inside a server in handling MapReduce workflows. Evaluation results confirm the improved schedulability. The remainder of this paper is organized as follows.
Section II briefly introduces the MapReduce job model. Sections III and IV develop the conversion factor for individual MapReduce jobs and workflows, respectively. We describe the application-level scheduling algorithm for packing work inside servers in Section V. Section VI presents evaluation results. We survey related work in Section VII. Section VIII concludes the paper.

II. MAP-REDUCE WORKFLOW MODEL

We refer by workflow task sets to those task sets that contain inter-dependent sequential jobs. This is in contrast to independent task sets, in which no job dependencies are present. The high-level idea of our technique is to transform a MapReduce workflow task set τ into an independent task set τ', implemented as Packing Server budgets, such that the schedulability of τ' is a sufficient condition for the schedulability of τ. The transformation is done at the cost of introducing increased (virtual) computation times in τ', leaving τ' at a higher utilization than τ. We show that the utilization of τ' is at most βλ/(βλ − 1) times that of τ, where β is a tunable parameter that controls the maximum individual server budget size. Hence, if an independent task set scheduler A offers a utilization bound U_B, the MapReduce workflow set τ can meet all deadlines provided that its utilization stays below U_B (1 − 1/(βλ)). The MapReduce literature uses terminology differently from the real-time literature, leading to potential confusion over what is meant by such terms as jobs and tasks. In this paper, we follow the definitions common in the real-time literature as much as possible. We say that a MapReduce job consists of a map phase and/or a reduce phase. It may be

that a MapReduce job contains only a single map or reduce phase, although commonly it contains both. Each phase contains multiple segments that may execute in parallel. We call a segment a mapper or a reducer depending on whether it belongs to a map phase or a reduce phase. The execution of a mapper or a reducer occupies a resource slot (or just slot), which could, for example, be a core in a multi-core platform. Within one job, no reducer may start before all mappers finish. A MapReduce pipeline chains multiple MapReduce jobs together, resulting in a sequence of phases. In a general MapReduce workflow, MapReduce jobs collectively form a Directed Acyclic Graph (DAG), where each node represents a MapReduce phase and each edge pointing from node a to node b represents the dependency that phase b cannot start before phase a finishes. A MapReduce workflow task τ_i is a periodic task that generates a MapReduce workflow every T_i time units with relative deadline D_i. We denote the number of segments (mappers or reducers) at the j-th phase of the workflow of task τ_i by m_ij, and the worst-case computation time of an individual segment by c_ij. A MapReduce workflow task set τ contains multiple MapReduce workflow tasks. Usually, the input of a MapReduce workflow task invocation depends on the output of the previous workflow invocation from the same task, resulting in an implicit-deadline task model (D_i = T_i). We define the stretch of workflow task τ_i, denoted by λ_i, as the ratio of the relative deadline D_i over the critical path length, denoted by L_i. Let λ denote the minimum stretch of all workflow tasks, λ = min_i{λ_i}. Note that, if the workflow contains a single path (i.e., it is a pipeline), L_i = Σ_j c_ij, summed over the path. The critical path in a DAG workflow is the longest execution path in the DAG. Hence, L_i = max_{path_k ∈ DAG} Σ_{j ∈ path_k} c_ij. Please note that the workflow model enjoys the same expressiveness as the generalized parallel task model [23], as any instance of the latter model can be transformed into a workflow model by constructing a single-mapper MapReduce job for each node in the generalized parallel task model.
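The critical path definition above can be sketched for an arbitrary DAG workflow. The example below is a hedged sketch on a hypothetical four-phase diamond workflow; `wcet` and `preds` are assumed input encodings (phase WCETs and predecessor lists), not part of the paper's notation.

```python
from functools import lru_cache

def critical_path_length(wcet, preds):
    """Critical path length L_i of a DAG workflow: the maximum, over all
    execution paths, of the summed phase WCETs.
    wcet:  dict mapping phase -> worst-case computation time c_ij
    preds: dict mapping phase -> list of predecessor phases"""
    @lru_cache(maxsize=None)
    def finish(node):
        # longest-path finish time: own WCET plus latest predecessor finish
        return wcet[node] + max((finish(p) for p in preds.get(node, [])),
                                default=0)
    return max(finish(n) for n in wcet)

# Hypothetical diamond DAG: a -> {b, c} -> d
wcet = {"a": 3, "b": 5, "c": 2, "d": 4}
preds = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
L = critical_path_length(wcet, preds)  # longest path a-b-d: 3 + 5 + 4 = 12
```

Given a relative deadline D_i, the stretch is then simply λ_i = D_i / L.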
Hence, the above terminology is introduced merely for semantic convenience of mapping results to the MapReduce application world. Different from the typical multiprocessor scenario, the MapReduce platform usually spans thousands of machines [27], and a single phase may contain as many as 30 thousand segments [17, 18]. This encourages us to pay special attention to cases where m_ij is large. Moreover, since deadlines and parallelism are large, we are interested in scenarios of large stretch, λ.

III. THE PACKING SERVER UTILIZATION BOUND

In this section, we restrict each workflow to contain a single MapReduce job. The analysis is generalized to pipelines and DAGs in Section IV.

[Figure 1: Packing Server Architecture. Packing Servers 1 through n each maintain a set of budgets; the underlying independent task scheduler (e.g., EDF-FF) schedules the budgets and delivers scheduling signals back to the servers.]

A. The Packing Server

For each task τ_i in the MapReduce task set τ, we propose to create a Packing Server, τ'_i, given by several parallel budgets, where the size of each budget of server τ'_i is denoted c'_i and the number of budgets is m'_i (also called the server concurrency). The name, Packing Server, was chosen because the server packs segments of the original workflow task into a smaller set of budgets. The set of Packing Servers is collectively called set τ'. As we show later in this paper, a property of how segments are packed into budgets is that these budgets can be scheduled by the underlying scheduler as if they were independent tasks. They can be migrated among cores, preempted, and prioritized as the underlying scheduling policy requires, without impacting the ability of the Packing Server to respect workflow synchronization (i.e., segment precedence) constraints. Clearly, the budgets have to be sized such that: (i) collectively, they fit all workflow segments and (ii) individually, they fit the critical path of the workflow.
Below, we describe how budget size is chosen, then prove a conversion factor bound that expresses the utilization bound for schedulability of workflows in terms of the utilization bound of the underlying scheduler. Figure 1 shows the high-level architecture of how Packing Servers work. Note that all budgets in the same Packing Server have the same budget size. A Packing Server is valid if its budget size is smaller than the relative deadline of the original MapReduce job (c'_i ≤ D_i).

B. The Case of a Single Job

We first consider the special case where the workflow task, τ_i, is composed of a single path represented by a succession of one map phase and one reduce phase. We give a smaller index to the phase with the larger number of segments. Hence, m_i1 ≥ m_i2. Without loss of generality, we assume that phase 1 is a map phase. (The discussion applies equally if it were a reduce phase.) In choosing the server concurrency, m'_i, we note that some independent task scheduling algorithms on multiprocessors

are sensitive to the maximum individual task utilization u_max [25, 26]. The smaller the maximum individual task utilization, the better the schedulability bound. Hence, we introduce a tunable parameter, β_i, to curb the u_max of the converted independent server budgets. Intuitively, the Packing Server treats β_i D_i as the worst-case allowable budget size, guaranteeing individual budget utilization to be upper bounded by β_i (i.e., u_max ≤ β_i D_i / D_i = β_i). In order to pack a MapReduce job into an interval of β_i D_i time units, we derive two lower bounds on Packing Server concurrency, m'_i, that stem from the following conditions:

The total WCET condition: In order to fit all original computation of the workflow task into budgets of length no longer than β_i D_i, we should satisfy:

  m'_i ≥ ⌈(m_i1 c_i1 + m_i2 c_i2) / (β_i D_i)⌉    (1)

The critical path condition: In order to allow the MapReduce job to finish in β_i D_i time units, the phase with more segments (phase 1 according to our indexing) has to finish in β_i D_i − c_i2 time units. Therefore:

  m'_i ≥ ⌈m_i1 c_i1 / (β_i D_i − c_i2)⌉    (2)

[Figure 2: Improved Packing Server Construction Scheme: (a) MR Job a, (b) Packing Server for Job a, (c) MR Job b, (d) Packing Server for Job b. Budgets are composed from the map phase, the reduce phase, and, where needed, virtual segments.]

Please note that these two lower bounds do not dominate each other. For example, in Figure 2(a), the MapReduce job contains m_i1 = 7 mappers of WCET c_i1 = 3, and m_i2 = 5 reducers of WCET c_i2 = 2. Deadline D_i was 10, and β_i is set to 1 (i.e., β_i D_i = D_i). In this example, the first lower bound wins, as it results in ⌈(m_i1 c_i1 + m_i2 c_i2)/(β_i D_i)⌉ = 4, whereas the second lower bound leads to ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉ = 3. Figure 2(c) depicts another example, where the original MapReduce job consists of m_i1 = 6 mappers of WCET c_i1 = 3, and m_i2 = 2 reducers of WCET c_i2 = 2. Under this configuration, the first lower bound results in 2 budgets, whereas the second yields 3 budgets. Hence, we have the following two cases:

Case 1: When ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉ < ⌈(m_i1 c_i1 + m_i2 c_i2)/(β_i D_i)⌉, the MapReduce job concentrates into m'_i = ⌈C_i/(β_i D_i)⌉ budgets of budget size c'_i = C_i / m'_i ≤ β_i D_i, where C_i = m_i1 c_i1 + m_i2 c_i2. Figures 2(a)-(b) depict an example.
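The two lower bounds and the resulting case split can be sketched as follows. This is a hedged sketch: `size_packing_server` is a hypothetical helper (not code from the paper), and the random loop at the end empirically checks the utilization inflation against the penalty bound β_i λ_i/(β_i λ_i − 1) established in the conversion-penalty analysis of this section.

```python
import random
from math import ceil

def size_packing_server(m1, c1, m2, c2, D, beta):
    """Sketch of the single-job construction, where phase 1 is the phase with
    more segments (m1 >= m2). Returns (concurrency, budget size, #virtual)."""
    assert beta * D > c2, "a budget must fit one reducer after the map phase"
    total = ceil((m1 * c1 + m2 * c2) / (beta * D))  # lower bound (1): total WCET
    path = ceil(m1 * c1 / (beta * D - c2))          # lower bound (2): critical path
    if path < total:                                # Case 1: concentrate only
        return total, (m1 * c1 + m2 * c2) / total, 0
    virtual = max(path - m2, 0)                     # Case 2: pad the reduce phase
    if virtual == 0:                                # enough real reducers already
        return path, (m1 * c1 + m2 * c2) / path, 0
    return path, m1 * c1 / path + c2, virtual

# Figure 2(a): 7 mappers (c=3), 5 reducers (c=2), D=10, beta=1:
# bound (1) gives 4 budgets, bound (2) gives 3, so Case 1 yields 4 budgets.
m, c, v = size_packing_server(7, 3, 5, 2, 10, 1.0)  # -> (4, 7.75, 0)

# Empirical check: utilization inflation stays below beta*lam/(beta*lam - 1),
# where lam = D / (c1 + c2) is the stretch of the job.
random.seed(0)
for _ in range(10000):
    c1, c2 = random.uniform(0.1, 10), random.uniform(0.1, 10)
    m2 = random.randint(1, 30)
    m1 = m2 + random.randint(0, 70)
    lam = random.uniform(1.5, 40)
    D = lam * (c1 + c2)
    beta = random.uniform(1.01 / lam, 1.0)          # beta must exceed 1/lam
    _, _, virtual = size_packing_server(m1, c1, m2, c2, D, beta)
    inflation = (m1 * c1 + (m2 + virtual) * c2) / (m1 * c1 + m2 * c2)
    assert inflation <= beta * lam / (beta * lam - 1) + 1e-9
```

The sketch clamps the number of virtual reducers at zero; when the reduce phase already has at least as many segments as budgets, its work can be concentrated like the map phase and no virtual computation is needed.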
As this construction strategy introduces no extra computation, the resulting Packing Server τ'_i shares the same utilization as its original MapReduce task τ_i (u'_i = u_i).

Case 2: When ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉ ≥ ⌈(m_i1 c_i1 + m_i2 c_i2)/(β_i D_i)⌉, the reduce phase originally has fewer segments than m'_i = ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉. Therefore, we add ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉ − m_i2 virtual reducers, as shown in Figures 2(c)-(d). Together with the virtual reducers, the original MapReduce job converts to m'_i = ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉ budgets. In each budget, the map phase and the reduce phase (with virtual reducers) contribute m_i1 c_i1/m'_i and c_i2 execution time respectively, resulting in the budget size c'_i = m_i1 c_i1/m'_i + c_i2.

C. Conversion Penalty

It is crucial that we bound the utilization penalty introduced during Packing Server construction, which directly affects the conversion factor when bridging the schedulability utilization bound from independent tasks to MapReduce tasks.

Lemma 1. The utilization (u'_i) of the Packing Server τ'_i is at most (β_i λ_i/(β_i λ_i − 1)) u_i, and the maximum individual independent task utilization is at most β_i, where λ_i is the stretch of MapReduce task τ_i, and β_i ∈ [1/λ_i, 1] is a tunable parameter.

Proof. We prove the lemma holds for the two construction cases separately:

Case 1: ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉ < ⌈(m_i1 c_i1 + m_i2 c_i2)/(β_i D_i)⌉. As the Packing Server construction procedure introduces no utilization penalty in this case, the utilization of the Packing Server τ'_i equals the utilization of its original MapReduce task τ_i (i.e., u'_i = u_i < (β_i λ_i/(β_i λ_i − 1)) u_i). Therefore, the lemma holds for Case 1.

Case 2: ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉ ≥ ⌈(m_i1 c_i1 + m_i2 c_i2)/(β_i D_i)⌉. The concurrency is m'_i = ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉. Hence, the number of virtual

reduce segments is m'_i − m_i2 = ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉ − m_i2, each of length c_i2. Define η_i = ⌈m_i1 c_i1/(β_i D_i − c_i2)⌉. Then, for η_i ≥ 2 (the case η_i = 1 is immediate, since then u'_i ≤ u_i), we have:

  u'_i / u_i = (m_i1 c_i1 + η_i c_i2) / (m_i1 c_i1 + m_i2 c_i2)
            ≤ (m_i1 c_i1 + η_i c_i2) / (m_i1 c_i1 + c_i2)            (as m_i2 ≥ 1)
            ≤ ((η_i − 1) β_i D_i + c_i2) / ((η_i − 1)(β_i D_i − c_i2) + c_i2)
                                          (as m_i1 c_i1 ≥ (η_i − 1)(β_i D_i − c_i2))
            ≤ 1 + c_i2 / (β_i D_i − c_i2)
            = β_i D_i / (β_i D_i − c_i2)    (3)

Reorganizing the result from Inequality (3), and noting that c_i2 < c_i1 + c_i2 = L_i = D_i/λ_i, we have:

  u'_i ≤ (β_i D_i / (β_i D_i − c_i1 − c_i2)) u_i = (β_i λ_i / (β_i λ_i − 1)) u_i.    (4)

Hence, Lemma 1 holds for both Packing Server construction cases.

Lemma 1 directly leads to a conversion factor bound of βλ/(βλ − 1), where λ = min_i{λ_i}.

IV. MAPREDUCE WORKFLOW BOUNDS

Real-world MapReduce applications usually call for multiple MapReduce jobs that form a pipeline or a DAG to accomplish complex missions. In the pipeline model, MapReduce jobs are chained together one after another, resulting in a sequence of phases. The DAG model is more general in that the dependencies among MapReduce jobs may form a directed acyclic graph. In this section, we first discuss how to generalize the utilization penalty bound in Lemma 1 to MapReduce pipelines. Then we show that any MapReduce DAG can be transformed into a pipeline with the same critical path length L_i and utilization u_i, implying that the utilization penalty bound for MapReduce pipelines also applies to MapReduce DAGs.

A. MapReduce Pipelines

A MapReduce pipeline connects n_i MapReduce jobs one after another, resulting in no more than 2n_i map/reduce phases. Given a number x, if m_ij > x, phase j is called an x-large phase. Otherwise, it is an x-small phase. Using a strategy similar to that described in Section III-B, the Packing Server concentrates the total WCET of each x-large phase into x identical segments, and adds (x − m_ij) virtual segments to each x-small phase. After that, the Packing Server concatenates each (virtual) segment with another (virtual) segment in the next phase, resulting in x budgets. Binary search can be used to find the minimum x = m'_i such that the budget size c'_i does not exceed β_i D_i. That is to say, using only x − 1 budgets would violate the budget cap β_i D_i. Then, we have:

  Σ_{j: m_ij ≥ x} m_ij c_ij / (x − 1) + Σ_{j: m_ij < x} c_ij > β_i D_i.    (5)

The definitions of the deadline D_i and the stretch λ_i further lead to the following:

  β_i D_i = β_i λ_i L_i = β_i λ_i Σ_j c_ij.
(6)

Combining (5) and (6), we have:

  Σ_{j: m_ij ≥ x} m_ij c_ij / (x − 1) + Σ_{j: m_ij < x} c_ij > β_i λ_i ( Σ_{j: m_ij ≥ x} c_ij + Σ_{j: m_ij < x} c_ij ).    (7)

Subtracting Σ_{j: m_ij < x} c_ij from both sides, and dropping the nonnegative term β_i λ_i Σ_{j: m_ij ≥ x} c_ij, we obtain:

  Σ_{j: m_ij ≥ x} m_ij c_ij / (x − 1) > (β_i λ_i − 1) Σ_{j: m_ij < x} c_ij.    (8)

Based on Inequality (8), the total amount of the computation requirement can be bounded from below:

  C_i = Σ_j m_ij c_ij ≥ Σ_{j: m_ij ≥ x} m_ij c_ij > (x − 1)(β_i λ_i − 1) Σ_{j: m_ij < x} c_ij.    (9)

According to Inequalities (8) and (9), we have:

  u'_i / u_i = 1 + ( Σ_{j: m_ij < x} (x − m_ij) c_ij ) / C_i
            ≤ 1 + ( (x − 1) Σ_{j: m_ij < x} c_ij ) / C_i        (as m_ij ≥ 1)
            < 1 + 1/(β_i λ_i − 1)                               (by Inequality (9))
            = β_i λ_i / (β_i λ_i − 1).    (10)

Therefore, the same utilization penalty bound βλ/(βλ − 1) holds for MapReduce pipelines.

B. Transforming DAGs into Pipelines

This section further generalizes the same utilization penalty bound to MapReduce DAGs by transforming a MapReduce DAG into a MapReduce pipeline. There are many different ways to transform a MapReduce DAG into a MapReduce pipeline. One naive solution would perform a topological sort on the DAG, and execute phases one after another according to the sorted order. However, this solution enlarges the critical path length L_i, leading to a smaller λ_i after conversion, and hence a larger utilization penalty bound. Our goal is to develop a strategy that leads to the lowest utilization penalty bound. As shown above, the utilization penalty bound for a MapReduce pipeline is β_i λ_i/(β_i λ_i − 1), which decreases as λ_i = D_i/L_i increases. This inspires us to design an algorithm that minimizes the pipeline critical path length L_i during Packing Server construction. The resulting pipeline length is bounded from below by the critical path length L_i of its original MapReduce DAG, where the utilization penalty introduced by Packing Servers is minimized. This can be achieved by allowing each node in the DAG to start execution as soon as all its prerequisite nodes finish. To keep the result a valid pipeline, a synchronization point is inserted when each DAG node finishes. Figure 3 shows an example, where the original DAG job is depicted in (a) and the resulting pipeline is shown in (b). As phases are of different lengths, a node in the original DAG may break into multiple phases in the pipeline. For example, node 2 and node 5 both follow phase 1. Hence, these two nodes may start at the same time. However, as node 5 finishes sooner than node 2, its ending synchronization point breaks the workflow of node 2 into two pipeline phases. The pseudo code is described in Algorithm 1.
Algorithm 1 Transform MapReduce Workflows to MapReduce Pipelines
Input: MapReduce DAG task set τ
Output: MapReduce pipeline task set τ'
1:  procedure TRANS-D(τ)
2:    τ' ← ∅
3:    for τ_i ∈ τ do
4:      Sync ← {0}
5:      for node n ∈ τ_i do
6:        l ← the earliest possible time that the node could start
7:        Lay out all segments in node n in time interval [l, l + c_n]
8:        Sync ← Sync ∪ {l + c_n}
9:      end for
10:     sort Sync in increasing order
11:     τ'_i ← ∅
12:     for j ← 2 to |Sync| do
13:       P ← create a phase encapsulating all segments (portions) falling in time interval [Sync[j−1], Sync[j])
14:       τ'_i ← τ'_i ∪ {P}
15:     end for
16:     τ' ← τ' ∪ τ'_i
17:   end for
18:   schedule τ' using the MapReduce pipeline scheduling algorithm
19: end procedure

Algorithm 1 loops over all workflow tasks on lines 2–17. For each task τ_i, the algorithm first computes its synchronization points on lines 4–10. As shown on lines 6–8, each node n in task τ_i is associated with a synchronization point l + c_n, where l is the length of its longest preceding WCET path. Lines 12–15 divide the workflow into a pipeline of phases using the synchronization time points in set Sync. The result can be viewed as a single pipeline task set, τ'. Note that Algorithm 1 transforms a workflow task set into a pipeline task set without increasing its utilization. Therefore, the same utilization bound U_B (1 − 1/(βλ)) applies to MapReduce workflows, where λ = min{λ_i : τ_i ∈ τ}.

V. SCHEDULING MAPREDUCE WORKFLOWS

The previous sections introduced the strategy to convert a set of MapReduce workflow tasks into budgets that belong to a set of Packing Servers, one per task. Those budgets can then be scheduled as independent sequential jobs by the underlying scheduler. Each budget is used to execute workflow segments. We now describe how the execution order of segments is determined. Before proceeding with our description, it is good to understand the differences between traditional operating system scheduling and MapReduce scheduling, which in our system is based on Hadoop [2, 3], an open-source MapReduce implementation. These differences are important to the understanding of our workflow scheduler implementation.
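Before turning to the scheduler internals, Algorithm 1's synchronization-point construction can be sketched in executable form. This is a hedged sketch under assumed inputs (per-node WCETs and predecessor lists; function and variable names are illustrative): nodes start as early as possible, every node finish time becomes a synchronization point, and consecutive synchronization points delimit the pipeline phases.

```python
def dag_to_pipeline(wcet, preds):
    """Sketch of Algorithm 1 for one workflow task.
    wcet:  dict mapping node -> worst-case computation time c_n
    preds: dict mapping node -> list of predecessor nodes
    Returns (sync, phases): the sorted synchronization points and, for each
    phase interval [sync[k-1], sync[k]), the portion of each node's execution
    that falls inside it."""
    start, finish = {}, {}

    def f(n):
        if n not in finish:
            s = max((f(p) for p in preds.get(n, [])), default=0.0)
            start[n], finish[n] = s, s + wcet[n]   # ASAP start, line 6 of Alg. 1
        return finish[n]

    for n in wcet:
        f(n)
    sync = sorted({0.0} | set(finish.values()))     # lines 4-10 of Alg. 1
    phases = []
    for lo, hi in zip(sync, sync[1:]):              # lines 12-15 of Alg. 1
        phases.append({n: min(finish[n], hi) - max(start[n], lo)
                       for n in wcet
                       if start[n] < hi and finish[n] > lo})
    return sync, phases

# Hypothetical diamond DAG: a -> {b, c} -> d; node b spans two phases because
# node c's finish inserts a synchronization point at time 5.
wcet = {"a": 3, "b": 5, "c": 2, "d": 4}
preds = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
sync, phases = dag_to_pipeline(wcet, preds)
```

Note that the last synchronization point equals the DAG's critical path length, which is exactly the property the transform is designed to preserve.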
In an operating systems context, which has been the traditional scheduling context in the real-time systems literature, servers typically refer to application tasks. The underlying scheduler typically refers to an OS kernel scheduler [28]. This OS kernel scheduler has the power to allocate physical resources to tasks. When it invokes a server, a second-level scheduler (implemented in user space) inside the server

decides on the order in which the server budget is allocated to different computations. In a MapReduce context, the picture is slightly different. First, all scheduling is done in user space, since Hadoop is an application-level implementation. Hence, the underlying scheduler refers to a user-level resource manager that performs coarse-grained resource assignments. In Hadoop v2 (YARN) [3], the resource manager assigns resources to the so-called application masters. In this case, an application master acts as a Packing Server. The system can be configured to have a single master per workflow task. The application master implements the second-level scheduler that decides on the order of execution of MapReduce segments of the corresponding workflow task within the server's budgets. In an important departure from OS scheduling, in Hadoop, the application masters are assumed to be cooperative. Hence, the scheduling policy used by the Hadoop resource manager (i.e., the underlying scheduler) is expressed to the application masters as an exact timeline showing when the corresponding workflow task is allowed to run and on which resources. Application masters are therefore clairvoyant about their exact future schedules. It is this clairvoyance that allows us to implement the abstraction of independent budgets, such that the underlying scheduler (the resource manager) does not need to know anything about workflow topologies and precedence constraints.

[Figure 3: Transforming a DAG into a pipeline: (a) shows the original DAG, and (b) shows the resulting pipeline. The pipeline consists of 6 phases containing 12, 9, 3, 4, 3, 2 segments, of lengths 5, 5, 2, 3, 3, 2 respectively.]
More specifically, once the resource manager informs application masters of their budget schedules, since each application master (i.e., Packing Server) of a workflow task knows when its budgets are scheduled, it can determine a sequence of synchronization time points within its budgets, Sync = {t_0, t_1, ..., t_n}, such that t_0 = 0, and the total size of the scheduled budgets falling in [t_{j−1}, t_j) equals the total WCET of the j-th phase including potential virtual segments (c_ij · max{m_ij, m'_i}). The time instant t_j thus becomes the time when phase j should end and phase j+1 begin. The application master (i.e., Packing Server) then packs segments of phase j into budget portions falling in [t_{j−1}, t_j). The result of such packing is shown in Figure 4 for an example server composed of four budgets. Packing is done in a best-fit manner (i.e., the smallest budget portion is filled first). Note that segments within the same phase have no precedence constraints and hence can be packed in any order. Furthermore, since the execution of a segment is arbitrarily divisible, it turns out that it is always possible to pack segments such that all budgets running at time t_j finish the execution of segments of phase j simultaneously at that time. The only constraint to consider is that portions of the same segment cannot run in parallel on multiple processors at the same time. Hence, when scheduling a segment, the best-fit policy skips time intervals where portions of the same segment have already been scheduled. The exact pseudocode for the best-fit segment packing algorithm, and the proof that it always succeeds at finding a valid schedule between successive synchronization points, are delegated to the appendix.

[Figure 4: An example of segment packing. Segments of phase j are mapped by the application master (Packing Server) to budget slots between t_{j−1} and t_j, while respecting synchronization points, within the budget schedule computed by the resource manager (the underlying scheduler).]

A. Limitations

The above discussion has been a simplified treatment of MapReduce applications. MapReduce is a complex system. A faithful analytic treatment goes beyond the scope of a single paper.
It is therefore useful to outline the approximations and simplifications we made in this work. First and foremost, we do not explicitly address data allocation. Segments of task workflows operate on data. Such data must be available on the local machine. If not, the computation time of the segment will increase. In principle,

it is possible to plan ahead of time, such that data are distributed to machines as segments are allocated, so that each segment finds its data locally. Challenges arise in the presence of preemption and migration, when a segment might find itself resumed on a different machine. In general, moving data around is a bad idea. Hence, in practice, the underlying scheduler should consider migration and data movement costs. For example, partitioned scheduling would be highly preferable to global scheduling. The underlying scheduling policy is an orthogonal issue to our contribution and hence is not addressed in this paper. Second, the cost of preemption in MapReduce systems is higher than that in a multi-core platform. The Natjam system [22] enables MapReduce preemption by inserting checkpoints between two key-value pairs, and writing those checkpoints into the shared distributed file system, which introduces a few milliseconds to a few seconds of delay. In real-world scenarios, this is not a big problem, as MapReduce segments also take much longer to finish compared to multi-core tasks, leading to a small relative preemption cost. For example, Figure 5 plots the distribution of mapper and reducer WCETs in a Yahoo! Hadoop cluster [17]. The WCETs of most segments are larger than 10 seconds, and more than 50% of segments take more than 1 minute to finish.

[Figure 5: CDF of mapper and reducer worst-case execution times for Yahoo! Hadoop jobs.]

Finally, there is a data movement phase following each computational phase in a MapReduce workflow, where outputs of one set of segments are sent to the next set of segments. This movement takes place over a high-speed network interconnect. If network bandwidth is not sufficient, data movement may introduce delays that need to be explicitly accounted for. One possibility is simply to subtract those delays from end-to-end deadlines, such that the deadlines used reflect the time available for the computational part only. Better solutions will be explored in subsequent work.

VI. EVALUATION

In this section, we compare our solution to two baselines.
The first one is the state-of-the-art generalized parallel task scheduling algorithm [24], called federated scheduling. Among past results, federated scheduling achieves the highest-known utilization bound of 50% for generalized parallel tasks, if the stretch surpasses 2. The second baseline for the parallel task model is the GEDF scheduling algorithm, with its best-known utilization bound [26].

A. Computing the Optimal Budget Size

We begin by computing the optimal Packing Server budget size (or equivalently, the optimal value of β). Packing Servers may use any implicit-deadline independent task scheduling policy A as the underlying scheduler to schedule their budgets, while achieving a utilization bound U_B (1 − 1/(βλ)) for schedulability of MapReduce workflows. In the following experiments, we set A to EDF-FF [25] and (independent task) GEDF [26], respectively, as they lead to high utilization bounds when the stretch, λ, is large. We compute the optimal β as follows:

EDF-FF: When EDF-FF is used as the underlying policy, A, the utilization bound guaranteed by Packing Servers for MapReduce workflows becomes:

  U_B = ((⌊1/β⌋ m + 1) / (⌊1/β⌋ + 1)) (1 − 1/(βλ)).    (11)

By taking the derivative with respect to 1/β (treated as a continuous variable), and setting the derivative to 0, the highest utilization bound is achieved when:

  1/β = √(λ + 1) − 1.    (12)

GEDF: When the underlying scheduling policy, A, is set to GEDF, the schedulability bound becomes:

  U_B = (m − (m − 1) β) (1 − 1/(βλ)).    (13)

Similarly, by equating the derivative to zero, the optimal β, that achieves the highest bound, satisfies:

  1/β = √λ.    (14)

The following experiments use these two optimal formulas to configure Packing Servers when they run above EDF-FF and GEDF schedulers.

B. Schedulable Utilization

The first question we answer in the evaluation section is to empirically determine the average schedulable utilization of a MapReduce cluster, due to tasks that meet deadlines, under different scheduling policies. Four policies are compared: (i) Packing Servers on top of EDF-FF, (ii) Packing Servers on top of GEDF, (iii) the federated scheduling policy [24] (with no Packing Servers), and (iv) GEDF.
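The optimal-budget formulas of Section VI-A can be evaluated numerically. The sketch below treats 1/β as a continuous variable and assumes a large cluster (many slots), which is the regime the derivation above targets; function names are illustrative.

```python
from math import sqrt

def opt_inv_beta_edf_ff(lam):
    """Sketch of Equation (12): optimal 1/beta when EDF-FF is the underlying
    scheduler, sqrt(lam + 1) - 1."""
    return sqrt(lam + 1) - 1

def opt_inv_beta_gedf(lam):
    """Sketch of Equation (14): optimal 1/beta when GEDF is the underlying
    scheduler, sqrt(lam)."""
    return sqrt(lam)

# Stretch values used in Section VI-B, lam = 20 and lam = 30, give roughly:
#   EDF-FF: 1/beta of about 3.58 and 4.57
#   GEDF:   1/beta of about 4.47 and 5.48
for lam in (20, 30):
    print(lam, opt_inv_beta_edf_ff(lam), opt_inv_beta_gedf(lam))
```

These values match the configurations reported for the trace-driven experiments below.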
In industry, outputs of MapReduce workflows power a variety of services, where tardiness is usually allowed, but at the cost of diminishing monetary benefits. In our experiments, we inherit the same configuration, allowing tasks to continue execution after their deadlines, which may adversely affect the schedulability of subsequent jobs. We do not use admission control. Rather, we vary the total input workload utilization (as a percentage of total platform capacity) on the x-axis and count on the y-axis only the utilization of tasks whose deadlines were met.

Figure 6: Schedulable Utilization, stretch 20. Figure 7: Schedulable Utilization, stretch 30. (Both plot accepted versus submitted utilization for Packing & EDF-FF, Packing & GEDF, Federated, and GEDF.) Figure 8: Admission Control (PDF of workflow response time / deadline).

While we do not explicitly plot deadline misses, note that the difference between each curve and the diagonal y = x is the utilization attributed to tasks that missed deadlines. In these experiments, workflows are generated based on Yahoo! MapReduce cluster trace data [17, 18]. The Yahoo! dataset does not specify the deadlines of jobs or workflows. We therefore calculate the critical path length of each workflow, and set its deadline to control the value of the stretch. More specifically, the number of slots is set to 500, and the stretch is tuned to 20 (and 30) during our simulations, resulting in optimal parameter values of 3.58 (and 4.56) for the EDF-FF scheduler, and 4.47 (and 5.47) for the GEDF scheduler. Figures 6-7 depict the experiment results. The horizontal lines in these figures indicate the theoretical utilization bounds for the schedulability of each of the four schemes compared. When GEDF acts as the underlying scheduler, the theoretical schedulability bounds of Packing Servers are 60.3% for stretch 20, and 66.9% for stretch 30. Packing Servers on top of EDF-FF have a 64% and 70% utilization bound, which are the highest under the stretch-20 and stretch-30 configurations. We also plot the empirically determined schedulable utilization curves for each of the four policies. Empirically, when GEDF is the underlying scheduling policy, both GEDF (alone) and Packing Servers on GEDF meet the deadlines of almost all tasks when the task set utilization is below 90%. However, the GEDF-based algorithms fail seriously when the task set utilization surpasses 95%, exhibiting a domino effect. Federated scheduling leads to a theoretical utilization bound of 50%, which is the highest-known bound in previous work.
Under MapReduce workloads, it is empirically shown to schedule tasks without deadline misses up to about 70% utilization. Above that utilization, deadlines are missed, although the domino effect seen by GEDF is not experienced. Packing Servers on an EDF-FF scheduler appear to be the most successful policy. The policy offers a high theoretical schedulability utilization bound, and performs very well above the bound, rising up to almost 90% utilization without deadline misses when serving MapReduce workloads. No domino effect is experienced above 90%. Hence, we implement this scheduler on a 46-server Hadoop cluster to verify its feasibility and validity.

C. Meeting Deadlines in a Real Hadoop Cluster

Next, we test the efficacy of admission control schemes based on our new bounds at eliminating all deadline misses. We implement a prototype of Packing Servers on WOHA [18], a workflow-enabled variant of Hadoop v1. The experiment runs on a cluster of 40 Dell PowerEdge R620 servers and 6 Dell PowerEdge R610 servers. The 40 R620 servers form a Hadoop cluster, providing 60 reduce slots. The 6 R610 servers execute the resource planner, and 5 client nodes that submit workflow invocations to the Hadoop cluster. The stretch parameter is set to 20. It corresponds to a utilization bound of 64.3%, which we set as the threshold for admission control. Hence, we prepare a set of workflows with a total utilization above 100%. An admission controller is used that denies a workflow if it would bring the cluster utilization above 64.3%. All segments are computationally intensive. Figure 8 shows the resulting probability density distribution of the response time-to-deadline ratio of workflow invocations during a 4-hour experiment. All ratios are below 1, suggesting the validity of the utilization bound.

VII. RELATED WORK

Workflow scheduling attracts increasing attention from both real-time and MapReduce researchers. The widespread MapReduce deployments stimulate the MapReduce community to design and improve scheduling policies for MapReduce implementations, such as Hadoop.
The default scheduler executes jobs in FIFO order, leading to poor fairness under multi-tenant scenarios. Yahoo! developed a Capacity Scheduler [20] to offer each Hadoop cluster tenant a guaranteed resource share. Facebook's Fair Scheduler [21] organizes Hadoop jobs into pools, and fairly divides resources among these pools. Verma et al. [11] evaluate an EDF-based scheduling algorithm on MapReduce. Their simulation results confirm that simple deadline-based scheduling heuristics allow more jobs to meet their deadlines. All of the above solutions target job scheduling rather than workflow scheduling. Yahoo! later developed Oozie [19, 29] as a generic Hadoop workflow management tool that submits each workflow job at the right time. WOHA [18] introduces deadline-aware scheduling of Hadoop workflows. However, these schedulers make no guarantees on whether workflow deadlines are met or not. In the real-time literature, workflow scheduling (called generalized parallel task scheduling) has recently been studied on multiprocessor platforms. Baruah et al. [30] prove that EDF can achieve a 2X speedup bound for a single recurrent workflow. Saifullah et al. [31] propose to arrange a workflow into stages; the workflow's deadline is then split and assigned to each stage. If some optimal algorithm can successfully schedule the original workflow, their solution is guaranteed to satisfy the same deadline with 4X-speed processors (a 4X speedup bound). When the workflow is restricted to a fork-join model [32, 33], Lakshmanan et al. [34] improve the speedup bound to 3.42. Li et al. [35] develop a capacity augmentation bound of 4 - 2/m for workflows, which immediately leads to a simple and effective schedulability test. More recently, Li et al. [24] improve the capacity augmentation bound to 2 using the federated scheduling algorithm. Nevertheless, a utilization below 50% may be pessimistic for industry MapReduce clusters. Independent tasks on multiprocessors have been studied more extensively during the past decades. Some algorithms push the schedulability bound much higher than 50%. The EDF-FF (first-fit) algorithm is able to schedule all tasks if their total utilization stays below U_B = (βm + 1)/(β + 1), where β = ⌊1/u_max⌋ is the inverse of the maximum individual task utilization u_max.
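This EDF-FF bound is easy to check numerically. A short sketch (the slot count m = 500 is illustrative, chosen to match the scale of the simulations in Section VI):

```python
from math import floor

def edf_ff_bound(m, u_max):
    """Worst-case utilization bound of EDF with First-Fit partitioning
    (Lopez et al. [25]): (beta*m + 1) / (beta + 1), beta = floor(1/u_max)."""
    beta = floor(1 / u_max)
    return (beta * m + 1) / (beta + 1)

# The bound approaches the full capacity m as individual task utilizations
# shrink, which is exactly the regime of MapReduce segments on large clusters.
m = 500  # illustrative slot count
for u_max in (0.5, 0.1, 0.01):
    print(u_max, edf_ff_bound(m, u_max) / m)
```

For u_max = 0.01 the bound already exceeds 98% of total capacity, illustrating why small per-budget utilization (small λ) makes the underlying bound favorable.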
The global EDF guarantees schedulability if the total utilization is less than U_B = m(1 - u_max) + u_max. Both algorithms approach a 100% utilization bound when m and β are large, which is common on MapReduce platforms. These observations motivate us to develop the Packing Server technique to apply those higher bounds from independent task scheduling to MapReduce workflows.

VIII. CONCLUSION

This paper introduces the technique of the Packing Server to convert independent task set schedulability bounds into MapReduce workflow schedulability bounds. If an independent task set scheduler A guarantees schedulability up to total utilization U_B, Packing Servers can achieve a schedulability bound proportional to U_B using A as the underlying scheduler, where the stretch α is the minimum deadline to critical path ratio, and λ ∈ (0, 1] is a tunable parameter that curbs the maximum converted individual independent task utilization (u_max). MapReduce workflows usually yield large α, allowing the new bound to achieve a much higher value than the best-known bound of 50%. Our evaluations using Yahoo! data on a 46-server Hadoop cluster confirm the validity of the new bound and the feasibility of the Packing Server system design.

ACKNOWLEDGEMENT

This work was sponsored in part by the National Science Foundation under grants CNS , CNS , CNS and CNS . We are also thankful to Yahoo! Inc. for sharing access to Webscope data.

REFERENCES

[1] J. Dean and S. Ghemawat, MapReduce: Simplified data processing on large clusters, in USENIX OSDI, 2004.
[2] Apache Hadoop, October 2014.
[3] Apache Hadoop V2 (YARN): Yet Another Resource Negotiator, October 2014.
[4] Apache Spark, May 2014.
[5] S. Li, T. Abdelzaher, and M. Yuan, Tapa: Temperature aware power allocation in data center with map-reduce, in IEEE IGCC, 2011.
[6] H. Kim, D. de Niz, B. Andersson, M. Klein, O. Mutlu, and R. Rajkumar, Bounding memory interference delay in COTS-based multi-core systems, in IEEE RTAS, 2014.
[7] J. Lee, A. Easwaran, and I. Shin, Maximizing contention-free executions in multiprocessor scheduling, in IEEE RTAS, 2011.
[8] C. Liu and J. H. Anderson, Suspension-aware analysis for hard real-time multiprocessor scheduling, in ECRTS, 2013.
[9] M. A. Haque, H. Aydin, and D. Zhu, Real-time scheduling under fault bursts with multiple recovery strategy, in IEEE RTAS, 2014.
[10] D. Logothetis, C. Trezzo, K. C. Webb, and K. Yocum, In-situ MapReduce for log processing, in USENIX ATC, 2011.
[11] A. Verma, L. Cherkasova, V. S. Kumar, and R. H. Campbell, Deadline-based workload management for MapReduce environments: Pieces of the performance puzzle, in IEEE NOMS, 2012.
[12] K. Kc and K. Anyanwu, Scheduling Hadoop jobs to meet deadlines, in IEEE CLOUDCOM, 2010.
[13] K. Agrawal, C. Gill, J. Li, M. Mahadevan, D. Ferry, and C. Lu, A real-time scheduling service for parallel tasks, in IEEE RTAS, 2014.
[14] R. M. Pathan, P. Stenström, L.-G. Green, T. Hult, and P. Sandin, Overhead-aware temporal partitioning on multicore processors, in IEEE RTAS, 2014.
[15] J. Forget, F. Boniol, E. Grolleau, D. Lesens, and C. Pagetti, Scheduling dependent periodic tasks without synchronization mechanisms, in IEEE RTAS, 2010.
[16] C. J. Kenna, J. L. Herman, B. C. Ward, and J. H. Anderson, Making shared caches more predictable on multicore platforms, in ECRTS, 2013.
[17] Yahoo! Webscope.
[18] S. Li, S. Hu, S. Wang, L. Su, T. Abdelzaher, I. Gupta, and R. Pace, WOHA: Deadline-aware map-reduce workflow scheduling framework over Hadoop cluster, in IEEE ICDCS, 2014.

[19] Apache Oozie, May 2014.
[20] Capacity Scheduler, capacity scheduler.html, May 2014.
[21] Fair Scheduler, scheduler.html, May 2014.
[22] B. Cho, M. Rahman, T. Chajed, I. Gupta, C. Abad, N. Roberts, and P. Lin, Natjam: Design and evaluation of eviction policies for supporting priorities and deadlines in mapreduce clusters, in ACM SoCC, 2013.
[23] S. K. Baruah, V. Bonifaci, A. Marchetti-Spaccamela, L. Stougie, and A. Wiese, A generalized parallel task model for recurrent real-time processes, in RTSS, 2012.
[24] J. Li, J.-J. Chen, K. Agrawal, C. Lu, C. Gill, and A. Saifullah, Analysis of federated and global scheduling for parallel real-time tasks, in ECRTS, 2014.
[25] J. M. López, M. García, J. L. Díaz, and D. F. García, Worst-case utilization bound for EDF scheduling on real-time multiprocessor systems, in ECRTS, 2000.
[26] T. P. Baker, A comparison of global and partitioned EDF schedulability tests for multiprocessors, International Conf. on Real-Time and Network Systems, Tech. Rep.
[27] J. Wong, Which big data company has the world's biggest Hadoop cluster?, September.
[28] G. Lipari and E. Bini, A framework for hierarchical scheduling on multiprocessors: From application requirements to run-time allocation, in IEEE RTSS, 2010.
[29] M. Islam, A. K. Huang, M. Battisha, M. Chiang, S. Srinivasan, C. Peters, A. Neumann, and A. Abdelnur, Oozie: towards a scalable workflow management system for Hadoop, in ACM SWEET, 2012.
[30] S. Baruah, V. Bonifaci, A. Marchetti-Spaccamela, L. Stougie, and A. Wiese, A generalized parallel task model for recurrent real-time processes, in IEEE RTSS, 2012.
[31] A. Saifullah, K. Agrawal, C. Lu, and C. Gill, Multi-core real-time scheduling for generalized parallel task models, in IEEE RTSS, 2011.
[32] C. Maia, L. Nogueira, L. M. Pinho, and M. Bertogna, Response-time analysis of fork/join tasks in multiprocessor systems, in ECRTS, 2013.
[33] C. Maia, M. Bertogna, L. Nogueira, and L. M. Pinho, Response-time analysis of synchronous parallel tasks in multiprocessor systems, in Proceedings of the 22nd International Conference on Real-Time Networks and Systems (RTNS), 2014.
[34] K. Lakshmanan, S. Kato, and R. Rajkumar, Scheduling parallel real-time tasks on multi-core processors, in IEEE RTSS, 2010.
[35] J. Li, K. Agrawal, C. Lu, and C. Gill, Analysis of global EDF for parallel tasks, in IEEE ECRTS, 2013.

APPENDIX

A. Scheduling Segments on Budgets

As Algorithm 1 converts MapReduce workflows into MapReduce pipelines without introducing any utilization penalty, we discuss the scheduling algorithm in the context of a MapReduce pipeline for the sake of simplicity. In order to further simplify the notation, we focus on a single MapReduce pipeline invocation from a MapReduce task τ, omitting the subscripts for task ID and pipeline ID. The algorithm schedules each phase into its budget portions in a First-Fit manner. Starting from the first phase, let π = {π_1, π_2, ..., π_z} denote the set of budget portions for the first phase. Please note, some budgets may fall completely in a later time interval, leaving z smaller than the total number of budgets. A budget portion π_j is a set of reserved non-overlapping time intervals on some resource slots. Hence, each budget portion associates with two affinities, time and slot. Let N(π) represent the number of budget portions in π (i.e., N(π) = z), and L(π) the total size of the budget portions in π (i.e., L(π) = Σ_j L(π_j)). Without loss of generality, assume the set π is ordered such that L(π_j) ≤ L(π_{j+1}). As two budgets cannot execute on the same slot at the same time, we have L(π_u \ π_v) = L(π_u) and L(π_u ∪ π_v) = L(π_u) + L(π_v), for u ≠ v. Algorithm 2 schedules x segments of WCET y on the budget portion set π.

Algorithm 2 Schedule a phase on its budget portions
Input: π the set of budget portions, x the number of segments, y the length of each segment
Output: S the schedule of the input segments using the input budget portions
1: procedure SCHEDSEG(π, x, y)
2:   S ← ∅
3:   for i ← 1 to x do
4:     l ← y
5:     for j ← 1 to N(π) do
6:       Schedule the length-l segment on π_j following the increasing order of time, skipping all conflicting time instances.
7:       Store the scheduled part in π'_j
8:       l ← l − L(π'_j)
9:       S ← S ∪ {π'_j}
10:    end for
11:    if l > 0 then
12:      return null
13:    end if
14:  end for
15:  return S
16: end procedure

The algorithm schedules all segments one by one (Lines 3-14).
For each segment, the algorithm always tries to use smaller budget portions first (Lines 5-10). A segment fills budget portion π_j following the increasing order of time. Remaining parts of π_j will be filled by the following segments. In order to prevent segment-level parallelism, the algorithm skips all conflicting time intervals when scheduling segments. As Algorithm 2 only requires a budget portion set and the segments' properties, it also applies to each subsequent phase in the pipeline, by setting x to the number of segments of that phase, y to the WCET c of its segments, and π to the budget portions in that phase's time interval.

B. Algorithm Correctness

We now prove that Algorithm 2 is guaranteed to successfully schedule a phase on the corresponding budget portions. Again, in order to simplify the notation, we focus on one single MapReduce pipeline from a MapReduce task τ, and use the first phase as the proof subject, omitting the notations for task ID and phase ID. The proof can be divided into two cases depending on whether the size of the smallest budget portion L(π_1) is larger than the segment WCET y.

Case 1: L(π_1) > y. Algorithm 2 always tries to fill up budget portion π_j before it starts to use π_{j+1}, unless segment-level parallelism conflicts prevent it from achieving that. In the case of L(π_1) > y, a segment α can either completely fit into the current budget portion π_j, or exhaust the remaining parts of π_j and start to use π_{j+1}. As L(π_{j+1}) ≥ L(π_1) > y, the unscheduled part of α can always fit into π_{j+1} while avoiding parallelism conflicts with π_j. Therefore, budget portions are filled up one-by-one following their index order, leaving no gap in the middle. Due to L(π) ≥ xy, all segments can be scheduled in π.

Case 2: L(π_1) ≤ y. We apply induction on z. Basis: When z = 1, all segments are scheduled sequentially into a single budget portion, and the induction hypothesis trivially holds. Inductive Step: Assume the lemma holds for z = k − 1; we now prove it also holds for z = k. As Algorithm 2 is deterministic, it is easy to figure out which parts of π are assigned to the first segment. Remove those parts from π, and denote the resulting budget portion set as π′. Now, in order to prove the lemma, we only need to show that Algorithm 2 is able to fit the remaining x − 1 length-y segments into π′. Given N(π) ≥ x, L(π_1) ≤ y, and L(π) ≥ xy, we have N(π′) ≥ x − 1 and L(π′) = L(π) − y ≥ (x − 1)y. Therefore, according to the induction hypothesis, Algorithm 2 can fit x − 1 length-y segments into π′, implying that the lemma also holds for z = k.
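A minimal Python sketch of Algorithm 2's packing loop may clarify the argument above. It abstracts each budget portion to a plain capacity and drops the time/slot-affinity bookkeeping, so the conflict-skipping of Line 6 is not modeled; the first-fit, smallest-portion-first structure is the part it preserves.

```python
def sched_seg(portions, x, y):
    """Sketch of Algorithm 2: pack x segments of WCET y into budget
    portions, smallest portion first. Portions are abstracted to plain
    capacities; the paper's version also tracks time intervals and skips
    slots that would run one segment in parallel with itself.
    Returns per-segment assignments, or None if the phase does not fit."""
    remaining = sorted(portions)          # L(pi_1) <= L(pi_2) <= ...
    schedule = []
    for _ in range(x):                    # Lines 3-14: one segment at a time
        l, parts = y, []
        for j in range(len(remaining)):   # Lines 5-10: first-fit over portions
            if l == 0:
                break
            used = min(l, remaining[j])   # fill the smaller portions first
            if used > 0:
                parts.append((j, used))
                remaining[j] -= used
                l -= used
        if l > 0:                         # Lines 11-13: out of budget
            return None
        schedule.append(parts)
    return schedule

# Case 1 of the correctness argument: every portion exceeds y and the
# total budget is at least x*y, so packing succeeds.
assert sched_seg([3.0, 4.0, 5.0], x=4, y=2.5) is not None
assert sched_seg([1.0, 1.0], x=3, y=1.0) is None   # total budget < x*y
```

In this simplified model the packing always succeeds whenever L(π) ≥ xy; the two cases in the proof exist precisely to show that the extra parallelism constraint, which this sketch omits, does not break that guarantee.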


More information

Project Networks With Mixed-Time Constraints

Project Networks With Mixed-Time Constraints Project Networs Wth Mxed-Tme Constrants L Caccetta and B Wattananon Western Australan Centre of Excellence n Industral Optmsaton (WACEIO) Curtn Unversty of Technology GPO Box U1987 Perth Western Australa

More information

Support Vector Machines

Support Vector Machines Support Vector Machnes Max Wellng Department of Computer Scence Unversty of Toronto 10 Kng s College Road Toronto, M5S 3G5 Canada wellng@cs.toronto.edu Abstract Ths s a note to explan support vector machnes.

More information

Dynamic Resource Allocation in Clouds: Smart Placement with Live Migration

Dynamic Resource Allocation in Clouds: Smart Placement with Live Migration Dynac Resource Allocaton n Clouds: Sart Placeent wth Lve Mgraton Mahlouf Had Ingéneur de Recherche ahlouf.had@rt-systex.fr Avec : Daal Zeghlache (TSP) daal.zeghlache@teleco-sudpars.eu FONDATION DE COOPERATION

More information

ANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING

ANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING ANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING Matthew J. Lberatore, Department of Management and Operatons, Vllanova Unversty, Vllanova, PA 19085, 610-519-4390,

More information

Technical Report, SFB 475: Komplexitätsreduktion in Multivariaten Datenstrukturen, Universität Dortmund, No. 1998,04

Technical Report, SFB 475: Komplexitätsreduktion in Multivariaten Datenstrukturen, Universität Dortmund, No. 1998,04 econstor www.econstor.eu Der Open-Access-Publkatonsserver der ZBW Lebnz-Inforatonszentru Wrtschaft The Open Access Publcaton Server of the ZBW Lebnz Inforaton Centre for Econocs Becka, Mchael Workng Paper

More information

Web Service-based Business Process Automation Using Matching Algorithms

Web Service-based Business Process Automation Using Matching Algorithms Web Servce-based Busness Process Autoaton Usng Matchng Algorths Yanggon K and Juhnyoung Lee 2 Coputer and Inforaton Scences, Towson Uversty, Towson, MD 2252, USA, yk@towson.edu 2 IBM T. J. Watson Research

More information

8.5 UNITARY AND HERMITIAN MATRICES. The conjugate transpose of a complex matrix A, denoted by A*, is given by

8.5 UNITARY AND HERMITIAN MATRICES. The conjugate transpose of a complex matrix A, denoted by A*, is given by 6 CHAPTER 8 COMPLEX VECTOR SPACES 5. Fnd the kernel of the lnear transformaton gven n Exercse 5. In Exercses 55 and 56, fnd the mage of v, for the ndcated composton, where and are gven by the followng

More information

Yixin Jiang and Chuang Lin. Minghui Shi and Xuemin Sherman Shen*

Yixin Jiang and Chuang Lin. Minghui Shi and Xuemin Sherman Shen* 198 Int J Securty Networks Vol 1 Nos 3/4 2006 A self-encrypton authentcaton protocol for teleconference servces Yxn Jang huang Ln Departent of oputer Scence Technology Tsnghua Unversty Beng hna E-al: yxang@csnet1cstsnghuaeducn

More information

PAS: A Packet Accounting System to Limit the Effects of DoS & DDoS. Debish Fesehaye & Klara Naherstedt University of Illinois-Urbana Champaign

PAS: A Packet Accounting System to Limit the Effects of DoS & DDoS. Debish Fesehaye & Klara Naherstedt University of Illinois-Urbana Champaign PAS: A Packet Accountng System to Lmt the Effects of DoS & DDoS Debsh Fesehaye & Klara Naherstedt Unversty of Illnos-Urbana Champagn DoS and DDoS DDoS attacks are ncreasng threats to our dgtal world. Exstng

More information

Allocating Collaborative Profit in Less-than-Truckload Carrier Alliance

Allocating Collaborative Profit in Less-than-Truckload Carrier Alliance J. Servce Scence & Management, 2010, 3: 143-149 do:10.4236/jssm.2010.31018 Publshed Onlne March 2010 (http://www.scrp.org/journal/jssm) 143 Allocatng Collaboratve Proft n Less-than-Truckload Carrer Allance

More information

PRIOR ROBUST OPTIMIZATION. Balasubramanian Sivan. A dissertation submitted in partial fulfillment of the requirements for the degree of

PRIOR ROBUST OPTIMIZATION. Balasubramanian Sivan. A dissertation submitted in partial fulfillment of the requirements for the degree of PRIOR ROBUST OPTIMIZATION By Balasubraanan Svan A dssertaton subtted n partal fulfllent of the requreents for the degree of Doctor of Phlosophy (Coputer Scences) at the UNIVERSITY OF WISCONSIN MADISON

More information

Conferencing protocols and Petri net analysis

Conferencing protocols and Petri net analysis Conferencng protocols and Petr net analyss E. ANTONIDAKIS Department of Electroncs, Technologcal Educatonal Insttute of Crete, GREECE ena@chana.tecrete.gr Abstract: Durng a computer conference, users desre

More information

The Greedy Method. Introduction. 0/1 Knapsack Problem

The Greedy Method. Introduction. 0/1 Knapsack Problem The Greedy Method Introducton We have completed data structures. We now are gong to look at algorthm desgn methods. Often we are lookng at optmzaton problems whose performance s exponental. For an optmzaton

More information

Can Auto Liability Insurance Purchases Signal Risk Attitude?

Can Auto Liability Insurance Purchases Signal Risk Attitude? Internatonal Journal of Busness and Economcs, 2011, Vol. 10, No. 2, 159-164 Can Auto Lablty Insurance Purchases Sgnal Rsk Atttude? Chu-Shu L Department of Internatonal Busness, Asa Unversty, Tawan Sheng-Chang

More information

Loop Parallelization

Loop Parallelization - - Loop Parallelzaton C-52 Complaton steps: nested loops operatng on arrays, sequentell executon of teraton space DECLARE B[..,..+] FOR I :=.. FOR J :=.. I B[I,J] := B[I-,J]+B[I-,J-] ED FOR ED FOR analyze

More information

Schedulability Bound of Weighted Round Robin Schedulers for Hard Real-Time Systems

Schedulability Bound of Weighted Round Robin Schedulers for Hard Real-Time Systems Schedulablty Bound of Weghted Round Robn Schedulers for Hard Real-Tme Systems Janja Wu, Jyh-Charn Lu, and We Zhao Department of Computer Scence, Texas A&M Unversty {janjaw, lu, zhao}@cs.tamu.edu Abstract

More information

Section 5.4 Annuities, Present Value, and Amortization

Section 5.4 Annuities, Present Value, and Amortization Secton 5.4 Annutes, Present Value, and Amortzaton Present Value In Secton 5.2, we saw that the present value of A dollars at nterest rate per perod for n perods s the amount that must be deposted today

More information

+ + + - - This circuit than can be reduced to a planar circuit

+ + + - - This circuit than can be reduced to a planar circuit MeshCurrent Method The meshcurrent s analog of the nodeoltage method. We sole for a new set of arables, mesh currents, that automatcally satsfy KCLs. As such, meshcurrent method reduces crcut soluton to

More information

Checkng and Testng in Nokia RMS Process

Checkng and Testng in Nokia RMS Process An Integrated Schedulng Mechansm for Fault-Tolerant Modular Avoncs Systems Yann-Hang Lee Mohamed Youns Jeff Zhou CISE Department Unversty of Florda Ganesvlle, FL 326 yhlee@cse.ufl.edu Advanced System Technology

More information

1 Example 1: Axis-aligned rectangles

1 Example 1: Axis-aligned rectangles COS 511: Theoretcal Machne Learnng Lecturer: Rob Schapre Lecture # 6 Scrbe: Aaron Schld February 21, 2013 Last class, we dscussed an analogue for Occam s Razor for nfnte hypothess spaces that, n conjuncton

More information

A Probabilistic Theory of Coherence

A Probabilistic Theory of Coherence A Probablstc Theory of Coherence BRANDEN FITELSON. The Coherence Measure C Let E be a set of n propostons E,..., E n. We seek a probablstc measure C(E) of the degree of coherence of E. Intutvely, we want

More information

Secure Cloud Storage Service with An Efficient DOKS Protocol

Secure Cloud Storage Service with An Efficient DOKS Protocol Secure Cloud Storage Servce wth An Effcent DOKS Protocol ZhengTao Jang Councaton Unversty of Chna z.t.ang@163.co Abstract Storage servces based on publc clouds provde custoers wth elastc storage and on-deand

More information

Survey on Virtual Machine Placement Techniques in Cloud Computing Environment

Survey on Virtual Machine Placement Techniques in Cloud Computing Environment Survey on Vrtual Machne Placement Technques n Cloud Computng Envronment Rajeev Kumar Gupta and R. K. Paterya Department of Computer Scence & Engneerng, MANIT, Bhopal, Inda ABSTRACT In tradtonal data center

More information

Period and Deadline Selection for Schedulability in Real-Time Systems

Period and Deadline Selection for Schedulability in Real-Time Systems Perod and Deadlne Selecton for Schedulablty n Real-Tme Systems Thdapat Chantem, Xaofeng Wang, M.D. Lemmon, and X. Sharon Hu Department of Computer Scence and Engneerng, Department of Electrcal Engneerng

More information

A Secure Password-Authenticated Key Agreement Using Smart Cards

A Secure Password-Authenticated Key Agreement Using Smart Cards A Secure Password-Authentcated Key Agreement Usng Smart Cards Ka Chan 1, Wen-Chung Kuo 2 and Jn-Chou Cheng 3 1 Department of Computer and Informaton Scence, R.O.C. Mltary Academy, Kaohsung 83059, Tawan,

More information

Two-Phase Traceback of DDoS Attacks with Overlay Network

Two-Phase Traceback of DDoS Attacks with Overlay Network 4th Internatonal Conference on Sensors, Measureent and Intellgent Materals (ICSMIM 205) Two-Phase Traceback of DDoS Attacks wth Overlay Network Zahong Zhou, a, Jang Wang2, b and X Chen3, c -2 School of

More information

2008/8. An integrated model for warehouse and inventory planning. Géraldine Strack and Yves Pochet

2008/8. An integrated model for warehouse and inventory planning. Géraldine Strack and Yves Pochet 2008/8 An ntegrated model for warehouse and nventory plannng Géraldne Strack and Yves Pochet CORE Voe du Roman Pays 34 B-1348 Louvan-la-Neuve, Belgum. Tel (32 10) 47 43 04 Fax (32 10) 47 43 01 E-mal: corestat-lbrary@uclouvan.be

More information

Performance Analysis of Energy Consumption of Smartphone Running Mobile Hotspot Application

Performance Analysis of Energy Consumption of Smartphone Running Mobile Hotspot Application Internatonal Journal of mart Grd and lean Energy Performance Analyss of Energy onsumpton of martphone Runnng Moble Hotspot Applcaton Yun on hung a chool of Electronc Engneerng, oongsl Unversty, 511 angdo-dong,

More information

International Journal of Industrial Engineering Computations

International Journal of Industrial Engineering Computations Internatonal Journal of Industral ngneerng Coputatons 3 (2012) 393 402 Contents lsts avalable at GrowngScence Internatonal Journal of Industral ngneerng Coputatons hoepage: www.growngscence.co/jec Suppler

More information

Analysis of Energy-Conserving Access Protocols for Wireless Identification Networks

Analysis of Energy-Conserving Access Protocols for Wireless Identification Networks From the Proceedngs of Internatonal Conference on Telecommuncaton Systems (ITC-97), March 2-23, 1997. 1 Analyss of Energy-Conservng Access Protocols for Wreless Identfcaton etworks Imrch Chlamtac a, Chara

More information

Joint Scheduling of Processing and Shuffle Phases in MapReduce Systems

Joint Scheduling of Processing and Shuffle Phases in MapReduce Systems Jont Schedulng of Processng and Shuffle Phases n MapReduce Systems Fangfe Chen, Mural Kodalam, T. V. Lakshman Department of Computer Scence and Engneerng, The Penn State Unversty Bell Laboratores, Alcatel-Lucent

More information

A Cryptographic Key Binding Method Based on Fingerprint Features and the Threshold Scheme

A Cryptographic Key Binding Method Based on Fingerprint Features and the Threshold Scheme A Cryptographc Key ndng Method Based on Fngerprnt Features and the Threshold Schee 1 Ln You, 2 Guowe Zhang, 3 Fan Zhang 1,3 College of Councaton Engneerng, Hangzhou Danz Unv., Hangzhou 310018, Chna, ryouln@gal.co

More information

Rapid Estimation Method for Data Capacity and Spectrum Efficiency in Cellular Networks

Rapid Estimation Method for Data Capacity and Spectrum Efficiency in Cellular Networks Rapd Estmaton ethod for Data Capacty and Spectrum Effcency n Cellular Networs C.F. Ball, E. Humburg, K. Ivanov, R. üllner Semens AG, Communcatons oble Networs unch, Germany carsten.ball@semens.com Abstract

More information

Multi-Source Video Multicast in Peer-to-Peer Networks

Multi-Source Video Multicast in Peer-to-Peer Networks ult-source Vdeo ultcast n Peer-to-Peer Networks Francsco de Asís López-Fuentes*, Eckehard Stenbach Technsche Unverstät ünchen Insttute of Communcaton Networks, eda Technology Group 80333 ünchen, Germany

More information

PSYCHOLOGICAL RESEARCH (PYC 304-C) Lecture 12

PSYCHOLOGICAL RESEARCH (PYC 304-C) Lecture 12 14 The Ch-squared dstrbuton PSYCHOLOGICAL RESEARCH (PYC 304-C) Lecture 1 If a normal varable X, havng mean µ and varance σ, s standardsed, the new varable Z has a mean 0 and varance 1. When ths standardsed

More information

Extending Probabilistic Dynamic Epistemic Logic

Extending Probabilistic Dynamic Epistemic Logic Extendng Probablstc Dynamc Epstemc Logc Joshua Sack May 29, 2008 Probablty Space Defnton A probablty space s a tuple (S, A, µ), where 1 S s a set called the sample space. 2 A P(S) s a σ-algebra: a set

More information

Brigid Mullany, Ph.D University of North Carolina, Charlotte

Brigid Mullany, Ph.D University of North Carolina, Charlotte Evaluaton And Comparson Of The Dfferent Standards Used To Defne The Postonal Accuracy And Repeatablty Of Numercally Controlled Machnng Center Axes Brgd Mullany, Ph.D Unversty of North Carolna, Charlotte

More information

Calculation of Sampling Weights

Calculation of Sampling Weights Perre Foy Statstcs Canada 4 Calculaton of Samplng Weghts 4.1 OVERVIEW The basc sample desgn used n TIMSS Populatons 1 and 2 was a two-stage stratfed cluster desgn. 1 The frst stage conssted of a sample

More information

APPLICATION OF PROBE DATA COLLECTED VIA INFRARED BEACONS TO TRAFFIC MANEGEMENT

APPLICATION OF PROBE DATA COLLECTED VIA INFRARED BEACONS TO TRAFFIC MANEGEMENT APPLICATION OF PROBE DATA COLLECTED VIA INFRARED BEACONS TO TRAFFIC MANEGEMENT Toshhko Oda (1), Kochro Iwaoka (2) (1), (2) Infrastructure Systems Busness Unt, Panasonc System Networks Co., Ltd. Saedo-cho

More information

Open Access A Load Balancing Strategy with Bandwidth Constraint in Cloud Computing. Jing Deng 1,*, Ping Guo 2, Qi Li 3, Haizhu Chen 1

Open Access A Load Balancing Strategy with Bandwidth Constraint in Cloud Computing. Jing Deng 1,*, Ping Guo 2, Qi Li 3, Haizhu Chen 1 Send Orders for Reprnts to reprnts@benthamscence.ae The Open Cybernetcs & Systemcs Journal, 2014, 8, 115-121 115 Open Access A Load Balancng Strategy wth Bandwdth Constrant n Cloud Computng Jng Deng 1,*,

More information

International Journal of Information Management

International Journal of Information Management Internatonal Journal of Inforaton Manageent 32 (2012) 409 418 Contents lsts avalable at ScVerse ScenceDrect Internatonal Journal of Inforaton Manageent j our nal ho e p age: www.elsever.co/locate/jnfogt

More information

Multiple-Period Attribution: Residuals and Compounding

Multiple-Period Attribution: Residuals and Compounding Multple-Perod Attrbuton: Resduals and Compoundng Our revewer gave these authors full marks for dealng wth an ssue that performance measurers and vendors often regard as propretary nformaton. In 1994, Dens

More information

An Interest-Oriented Network Evolution Mechanism for Online Communities

An Interest-Oriented Network Evolution Mechanism for Online Communities An Interest-Orented Network Evoluton Mechansm for Onlne Communtes Cahong Sun and Xaopng Yang School of Informaton, Renmn Unversty of Chna, Bejng 100872, P.R. Chna {chsun,yang}@ruc.edu.cn Abstract. Onlne

More information

Dominant Resource Fairness in Cloud Computing Systems with Heterogeneous Servers

Dominant Resource Fairness in Cloud Computing Systems with Heterogeneous Servers 1 Domnant Resource Farness n Cloud Computng Systems wth Heterogeneous Servers We Wang, Baochun L, Ben Lang Department of Electrcal and Computer Engneerng Unversty of Toronto arxv:138.83v1 [cs.dc] 1 Aug

More information

A Hybrid Approach to Evaluate the Performance of Engineering Schools

A Hybrid Approach to Evaluate the Performance of Engineering Schools A Hybrd Approach to Evaluate the Perforance of Engneerng Schools School of Engneerng Unversty of Brdgeport Brdgeport, CT 06604 ABSTRACT Scence and engneerng (S&E) are two dscplnes that are hghly receptve

More information

Efficient Striping Techniques for Variable Bit Rate Continuous Media File Servers æ

Efficient Striping Techniques for Variable Bit Rate Continuous Media File Servers æ Effcent Strpng Technques for Varable Bt Rate Contnuous Meda Fle Servers æ Prashant J. Shenoy Harrck M. Vn Department of Computer Scence, Department of Computer Scences, Unversty of Massachusetts at Amherst

More information

How To Calculate The Accountng Perod Of Nequalty

How To Calculate The Accountng Perod Of Nequalty Inequalty and The Accountng Perod Quentn Wodon and Shlomo Ytzha World Ban and Hebrew Unversty September Abstract Income nequalty typcally declnes wth the length of tme taen nto account for measurement.

More information

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) 2127472, Fax: (370-5) 276 1380, Email: info@teltonika.

VRT012 User s guide V0.1. Address: Žirmūnų g. 27, Vilnius LT-09105, Phone: (370-5) 2127472, Fax: (370-5) 276 1380, Email: info@teltonika. VRT012 User s gude V0.1 Thank you for purchasng our product. We hope ths user-frendly devce wll be helpful n realsng your deas and brngng comfort to your lfe. Please take few mnutes to read ths manual

More information

Efficient Bandwidth Management in Broadband Wireless Access Systems Using CAC-based Dynamic Pricing

Efficient Bandwidth Management in Broadband Wireless Access Systems Using CAC-based Dynamic Pricing Effcent Bandwdth Management n Broadband Wreless Access Systems Usng CAC-based Dynamc Prcng Bader Al-Manthar, Ndal Nasser 2, Najah Abu Al 3, Hossam Hassanen Telecommuncatons Research Laboratory School of

More information