Network Aware Load-Balancing via Parallel VM Migration for Data Centers

Kun-Ting Chen (2), Chen Chen (1,2), Po-Hsiang Wang (2)
(1) Information Technology Service Center, (2) Department of Computer Science
National Chiao Tung University, Hsin-Chu, Taiwan
quentn2007.cs96g@g2.nctu.edu.tw, chenchen@cs.nctu.edu.tw, btc cs98g@g2.nctu.edu.tw

Abstract - It has become a challenge to design an efficient load-balancing method via live virtual machine (VM) migration without degrading application performance. Two major performance impacts on applications hosted on VMs are the degree of system load balance and the total time until a balanced state is reached. Existing load-balancing methods usually ignore the VM migration time overhead. In contrast to sequential migration-based load balancing, this paper proposes a network-topology-aware parallel migration to speed up the load-balancing process in a data center. We transform the VM migration-based multi-resource load-balancing problem into a minimum weighted matching problem over a weighted bipartite graph. By obtaining the minimum weighted matching pairs through the Hungarian method, we migrate multiple VMs in parallel from overloaded hosts to underutilized hosts to reduce the time it takes to reach a load-balanced state. The experimental results show that our algorithm not only obtains comparable multi-resource load-balancing performance but also improves the time to balance, which results in up to a 10% throughput gain assuming a large batch application running on all VMs.

I. INTRODUCTION

Cloud data centers employ virtualization-based technology to consolidate hardware resource usage and provide application hosting for multiple service providers. In a cloud data center, thousands of commodity computers work in parallel to host virtual machines (VMs) that support different applications. The CPU speed, memory size, and network bandwidth of different commodity computers are widely heterogeneous. Besides, a physical host may run multiple services with different types of resource demands. Without proper allocation, the loads of different resources may become unbalanced among different physical hosts. Even with careful resource provisioning at the beginning, the dynamic arrival and departure of running workloads can still drive the cloud system into an unbalanced state later. Thus, a cloud system may have quite a few overloaded hosts while many underutilized hosts are still available.

Load-balancing mechanisms improve system performance by reducing resource contention (e.g., CPU, network bandwidth, and memory) on overloaded hosts. More importantly, they can prevent the higher hardware failure rate of overloaded hosts. Load-balancing mechanisms have been studied extensively in many research areas, including distributed systems, web servers, and cloud systems, to maximize resource utilization and minimize the variance of the load across multiple servers [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11]. While most recent research focuses on VM migration as a means to achieve load balancing in cloud systems [4] [5] [6] [7] [8] [9] [10] [11] [20] [21], few of these works consider the total time required to reach system balance. Normally, existing load-balancing algorithms search for a VM to instantiate a VM migration from overloaded hosts to under-loaded hosts. They usually select and start the next VM migration only when the previous migration has completely terminated. These schemes aim to achieve an egalitarian state without considering the waiting time for other hosts to offload their workload. This paper calls such schemes sequential migration-based load-balancing methods. These methods can lead to all overloaded hosts taking a long time to offload their workload.
Consequently, the applications running on those overloaded hosts will contend for resources for a longer time, and users may experience application performance degradation. In this work, we study the multi-resource load-balancing problem for a cloud data center with heterogeneous hardware capacity. Compared with sequential migration-based load-balancing methods, this paper proposes a parallel VM migration approach that focuses on jointly minimizing the multi-resource imbalance and the time it takes to reach a balanced state. In addition, the migration delay implied by the network topology of a data center is considered, especially when the migration pairs are between machines with a large hop distance. In order to improve application performance, this paper minimizes the time until a balanced state is reached by migrating VMs in parallel between independent pairs of hosts while considering the underlying network topology.

To be specific, this paper models the load-balancing problem with a weighted bipartite graph (WBG). The over- and under-utilized hosts in a data center are selected as two disjoint sets of vertices: the Trigger (TR) and Non-Trigger (NTR) node sets. A physical host is said to be a trigger node if its resource utilization exceeds a system threshold on any dimension of resources; otherwise, it is a non-trigger node. Each edge between the two sets is assigned a weight according to the multi-resource requirements of VMs, the various resource capacities of the hosts, and the network cost between two hosts. By solving the minimum weighted matching problem using the Hungarian method [12], this paper can migrate a set of VMs between over- and under-utilized hosts in parallel. The experimental results show that our algorithm achieves comparable multi-resource load balancing while improving the balancing time, which results in up to a 10% throughput gain for the running applications compared to conventional algorithms that perform load balancing in a sequential manner.

The rest of this paper is organized as follows. Section II describes related works. Section III elaborates the motivation behind this work. Section IV describes the network-aware bipartite matching load-balancing algorithm in detail. Experimental results of the simulation are presented in Section V. Finally, this paper concludes with some remarks in Section VI.

[Fig. 1: Migration time vs. VM memory size]
[Fig. 2: Total elapsed migration time vs. number of VMs]
[Fig. 3: Number of completed operations vs. number of VMs]

II. RELATED WORKS

A number of studies have proposed load-balancing methods [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] for cloud data centers. Most of these works focus on balancing the resource requirements of VMs on physical machines. Based on their algorithms, they migrate either a single VM or multiple VMs in each round of the load-balancing procedure. These load-balancing schemes for cloud computing measure the load on physical hosts in different ways. However, none of [1] [2] [3] addresses the multi-resource load-balancing issue. Zhao and Huang [2] use the number of VMs on a host as their load measurement. A recent study [3] mimics the foraging and scouting behavior of honeybees harvesting food to perform load balancing. Though the scheme works for a dynamic workload, it assumes each host is homogeneous and deals with a single resource only. In contrast, [4] considers the multi-resource demands of VMs and the heterogeneous capacity of each host. They propose a load-balancing method called VectorDot (VD), whose goal is to bring overloaded hosts below their threshold by migrating one or more VMs sequentially. They extend the Toyoda heuristic for the single knapsack problem to solve a multidimensional knapsack problem. VD treats a VM on an overloaded host as an item to be placed into an appropriate node, and the VM is then migrated to the selected destination node. This procedure is repeated until there are no more overloaded hosts. From a practical perspective, a VM migration takes a while to finish. Thus, migrating VMs in a one-by-one manner can take a long time until the system is balanced, and thereby degrade the running applications hosted on overloaded machines.

Recent works that consider VM migration overhead for load balancing can be found in [7] [8] [9] [10] [11]. Since a cloud system involves cloud operators and network operators with different goals in mind, [7] studies the problem of meeting both objectives. On one hand, the cloud operators wish to assign a number of VMs to selected hosts so as to meet the VM scheduling deadline; thus, they wish to pair each VM with a destination host that minimizes the migration time. On the other hand, the network operators wish to minimize the network bandwidth consumed. To make both ends meet, [7] models a balanced stable matching problem covering the requirements of both cloud operators and network operators. The VM migration overhead is derived from the network topology and the transferred memory volume. However, their matching does not consider the sharing of NIC bandwidth at the destination host. Thus they may migrate several VMs from multiple distinct hosts to an identical host, which can degrade into sequential migration. In contrast, we consider finding independent pairs of end hosts for load balancing via parallel migration. Recent works [8] [9] conduct a number of experiments on a cloud testbed to study the factors that influence VM migration time and the impact of live migration on application performance. In [8], they show that VM migration can contend for resources with the running workload on the source and destination hosts, especially as the number of concurrently migrating VMs between one source and one destination host increases.
Hence, they study the problem of selecting a set of VMs to migrate that minimizes the resource contention overhead. [8] models a VM migration as processes that run at the source and destination hosts and shows that carefully assigning VMs can reduce the VM migration overhead. However, it is difficult to formulate the impact of migration time given different combinations of jobs and VM migrations running on machines with various resource configurations; as the cloud system gets larger, the resource contention model becomes more difficult to build. [9] shows that the destination host CPU reserved for the migration process has little effect on migration time, while the source host CPU impacts the migration time when the host is overloaded, especially when its memory dirty rate is high. However, they only suggest a direction for balancing multi-resource load without relating it to the performance overhead incurred by concurrent VM migrations. In contrast to [8] [9], this paper performs an experimental study of live migrations and application throughput with multiple, concurrent VM migrations (in the next section). These experimental results motivate a new parallel migration-based load-balancing algorithm. Recent studies [10] [11] formulate the load-balancing problem with the migration overhead being the distance measured by the number of hops on the Internet. Nevertheless, these schemes often do not consider the heterogeneous multi-resource capacity of each host. In addition, the concurrent migration approach they adopt may still migrate multiple VMs to an identical destination host or from a single source to multiple distinct destinations.

III. MOTIVATION

Even though modern cloud administrators migrate multiple VMs concurrently, if the hosts of the migration pairs are not chosen carefully, concurrent migration can still degrade into sequential migration. The worst-case circumstance

in load balancing is to migrate many VMs from different hosts to one under-utilized host (many-to-1) concurrently. In this section, we measure real application performance and compare parallel migration with a range of many-to-1 and 1-to-many concurrent migrations. The experiment is conducted on a real testbed based on Xen [13]. In this system, we reserve 100 Mbps of network bandwidth at each host NIC for VM migration. Each VM runs the crypto.rsa application in SPECjvm [14]. VMs serving the same application type are allocated identical resources. During live migrations, applications running on VMs persist in execution with a small interruption time compared to non-live migration [13]. We first measure the time taken by each individual VM during concurrent migration by observing different combinations of VM pairs. Then we observe the application performance for an increasing number of concurrently migrating VMs. A conclusion that leads to the work in this paper follows thereafter. Each evaluation is the averaged result of 10 runs.

Fig. 1 shows the individual VM migration time when migrating two VMs with increasing total migrated memory volume. Consistent with [8], as each VM memory size increases from 256 to 1280 MB (where the total migrated size increases from 512 MB to 2048 MB, respectively), VM migration time increases linearly in proportion to the volume of transferred memory. When two distinct hosts migrate to a single host (i.e., 2-to-1), both VM migrations suffer a linearly increasing time since the network capacity at the destination host's NIC is shared by the two migrations. When migrating from a single host to two distinct hosts (i.e., 1-to-2), one VM does not start migrating until the other finishes. This is the mechanism implemented by the Xen platform, which starts a VM migration only after the previous one finishes and thus behaves the same as sequential migration when migrating two VMs from one host to another. (Xen provides migration modules for synchronous and asynchronous modes; the only difference is whether the migrating thread returns to the user thread immediately or only after the migration completes. Both implementations migrate VMs in a sequential manner even when they are issued concurrently.) As opposed to the above cases, the parallel VM migration scheme (i.e., 2-to-2) achieves the smallest migration time without suffering additional network delays.

Given that migrating two VMs can degrade into sequential migration in the 2-to-1 and 1-to-2 cases, we would like to know the performance impact of migrating multiple VMs concurrently at a larger scale, especially in an overloaded system environment. We assume that an overloaded environment consists of overloaded hosts and idle hosts. In the following experiments we assume that all the VMs run on the overloaded hosts. In such an environment, the workload on one VM contends for resources with all other VMs resident on the same host. Thus, the time until a VM is migrated to an idle host can affect the performance of the application running on that VM. We study the impact of the total migration time of different numbers of migrating VMs on the performance of the workload on those VMs, where the total migration time denotes the longest completion time for migrating the given number of VMs. In the many-to-1 case, we concurrently migrate 2, 3, and 4 VMs, one each from 2, 3, and 4 overloaded hosts, to one idle host. Conversely, in the 1-to-many case, 2, 3, and 4 VMs on one overloaded host are migrated to 2, 3, and 4 idle hosts, respectively. The 1-to-1 case lets 2, 3, and 4 VMs migrate from one overloaded host to one idle host.
Parallel migration independently migrates 2, 3, and 4 VMs, one each from 2, 3, and 4 overloaded hosts, to 2, 3, and 4 idle hosts. Each VM migration is instantiated by our controller 1 minute after the workload starts running. During VM migration, we measure the application performance obtained on the migrating VMs. In order to measure the impact on the running application, the iteration duration parameter of the crypto.rsa application is set to last until after all VMs finish migrating. This parameter ensures that the VM continues to demand resources during VM migration. The baseline performance is approximately 108 operations on a baseline single-core CPU at 2.67 GHz.

Fig. 2 shows the total migration time under 2, 3, and 4 VM migrations. As the number of VM migrations increases, the migration completion time in the many-to-1, 1-to-many, and 1-to-1 cases increases. However, the total migration time of parallel migration remains almost the same and is the smallest among all cases. Consistent with [8], as the total transferred memory size of concurrent VM migrations increases, the total migration time also increases. Given that the total transfer volume of memory between two hosts is fixed, [8] observes that the total migration time increases as the number of concurrent VM migrations increases. This is due to the resource contention between the migration processes and the running workload on the source and destination hosts. In order to improve application performance, they study the VM assignment problem of selecting different pairs of VM migrations to lower the impact of resource contention.

Fig. 3 shows the total number of completed crypto.rsa operations observed on the migrating VMs. The case of a single host to multiple distinct hosts (1-to-many) completes 4%, 9%, and 10% fewer operations than parallel migration as the number of concurrently migrating VMs increases from 2, to 3, to 4. Although the many-to-1 case experiences a smaller total migration time than the 1-to-many case as shown in Fig. 2, the number of completed operations of many-to-1 decreases more severely than that of 1-to-many, with 15%, 34%, and 61% fewer operations completed than parallel migration. Further, the many-to-1 case is also close to the worst case without migration (i.e., no load balancing). The 1-to-many case completes more operations than the many-to-1 case, which suffers a longer migration delay for each individual VM, as shown for the 2-to-1 case in Fig. 1. Among all cases, parallel migration shows the highest workload completion rate. These observations suggest that orchestrating parallel VM migrations is a feasible way to improve application performance.

IV. NETWORK-AWARE BIPARTITE MATCHING LOAD-BALANCING ALGORITHM

Load balancing for multi-resource requirements while considering heterogeneous resource capacity has been a challenging problem, especially in a large-scale cloud hosting environment. In this paper, we study the load-balancing problem for a cloud data center. In particular, we try to reduce the time it takes for a cloud data center to reach its load-balanced state. Since a live VM migration takes some time to accomplish, sequentially migrating VMs from trigger nodes to non-trigger nodes may take a long time to offload the workload of overloaded hosts. Further, the migration delay between two hosts varies

according to the underlying network architecture and the transfer volume of VM memory. Thus, we propose a Network-Aware Bipartite Matching (NABM) load-balancing algorithm. This paper first transforms the load-balancing problem into a minimum weighted matching problem. According to the minimum weighted matching obtained from the Hungarian method, this paper migrates VMs from overloaded hosts to underutilized hosts in parallel. Furthermore, this paper adds the hop distance between source and destination hosts to reflect the impact of the network on VM migration time. By reducing the VM migration time, it can shorten the time for a cloud system to reach its load-balanced state.

A. Preliminary

The notation used in NABM is defined as follows. Let $H = \{h_1, h_2, \ldots, h_m\}$ and $V = \{vm_1, vm_2, \ldots, vm_v\}$ denote the set of $m$ hosts and $v$ VMs in a cloud system, respectively, with an existing allocation $A: V \rightarrow H$, where $A(vm_\beta)$ denotes the host where $vm_\beta$ resides. The $n$ different dimensions of resources consumed by a VM $vm_\beta$ are represented by $S_\beta = \{s_\beta^i \mid i = 1, 2, \ldots, n\}$. The input to our load-balancing algorithm includes a capacity value, a current utilization value, and a suggested threshold fraction between 0 and 1 for each resource dimension. Let $C_a = \{c_a^i \mid i \in \{1, 2, \ldots, n\}\}$ and $U_a = \{u_a^i \mid u_a^i \in [0, 1],\ i = 1, 2, \ldots, n\}$ denote the vectors of capacity and resource utilization along the $n$ dimensions of resources for host $h_a$, respectively. $T = \{t^i \mid t^i \in [0, 1],\ i \in \{1, 2, \ldots, n\}\}$ denotes the vector of system thresholds along each dimension of resources. Our load balancer maintains the usage of the resources below those thresholds. Moreover, the available capacity of a host $h_a$ in any dimension of resources can be represented as in (1).

$F_a = \{f_a^i \mid f_a^i = (1 - u_a^i)\, c_a^i,\ i = 1, 2, \ldots, n\}$   (1)

B. Bipartite Matching Load-Balancing

Similar to [4], NABM collects the multi-dimensional resource utilization of hosts and the resource requirements of VMs as input. In our real testbed deployed with Xen, we send XML-RPC requests to collect load information from any host in the same server farm through a control domain. Xen implements multiple control domains in charge of sending and receiving control messages to and from end hosts. To construct a weighted bipartite graph (WBG), NABM leverages three decision policies: participation, candidate selection, and location, as defined in [15].

The participation policy decides which hosts are involved in the load-balancing process. In this paper, we specify two node sets for over- and under-loaded hosts, defined as the triggered set $TR = \{h_a \mid h_a \in H,\ \exists i \in \{1, 2, \ldots, n\}: u_a^i > t^i\}$ and the non-triggered set $NTR = \{h_a \mid h_a \in H,\ h_a \notin TR\}$, respectively. The triggered set is the set of hosts whose resource utilization exceeds the system threshold in any dimension of resources. The hosts that are not marked as triggered nodes belong to the non-triggered set. These two sets form the two vertex sets of the weighted bipartite graph used in NABM.

Both the candidate selection policy and the location policy play the intermediate roles of generating the edges and edge weights of the WBG. The candidate selection policy chooses the VMs to transfer in order to alleviate an overloaded hot spot. A VM is a candidate if the removal of this VM turns a triggered host into a non-triggered host. However, if removing no single VM on a trigger node can turn it into a non-trigger host, then all tenant VMs on this triggered host are chosen for the candidate set. For a triggered host $h_a$ and every dimension of resources, we identify the candidate set as in (2).

$CE_a = \{vm_\beta \mid A(vm_\beta) = h_a,\ h_a \in TR,\ \max\{u_a^i - s_\beta^i, 0\} \le t^i,\ \forall i = 1, 2, \ldots, n\}$ if this set is non-empty; otherwise $CE_a = \{vm_\beta \mid A(vm_\beta) = h_a,\ h_a \in TR\}$.   (2)

The location policy selects the destination host to which a previously selected candidate VM is migrated.
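The paper presents these policies only in set notation; purely as an illustration of the participation policy and of (1) and (2), a minimal Python sketch under our own assumptions (the Host and VM classes, the function names, and the convention that VM demands are expressed as utilization fractions are ours, not the authors') might look as follows.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Host:
    # Illustrative sketch, not the authors' implementation.
    name: str
    capacity: Dict[str, float]      # c_a^i per resource dimension (e.g., MIPS, MB, Mbps)
    utilization: Dict[str, float]   # u_a^i as fractions in [0, 1]
    vms: List["VM"] = field(default_factory=list)

@dataclass
class VM:
    name: str
    demand: Dict[str, float]        # s_beta^i, assumed normalized by host capacity (fractions)
    mem_mb: float                   # memory footprint, used later for the migration cost

def is_triggered(host: Host, threshold: Dict[str, float]) -> bool:
    """Participation policy: a host is a trigger (TR) node if any
    resource dimension exceeds its system threshold t^i."""
    return any(host.utilization[r] > threshold[r] for r in threshold)

def select_candidates(host: Host, threshold: Dict[str, float]) -> List[VM]:
    """Candidate selection policy (Eq. 2): prefer VMs whose removal brings
    every dimension of the host back below the threshold; if no single VM
    suffices, all tenant VMs become candidates."""
    fixes = [vm for vm in host.vms
             if all(max(host.utilization[r] - vm.demand[r], 0.0) <= threshold[r]
                    for r in threshold)]
    return fixes if fixes else list(host.vms)

# Example: a host over the 0.75 CPU threshold, with one VM large enough to fix it.
t = {"cpu": 0.75, "mem": 0.75, "net": 0.75}
h = Host("h1", {"cpu": 12000, "mem": 4096, "net": 1024},
         {"cpu": 0.90, "mem": 0.60, "net": 0.50})
h.vms = [VM("vm1", {"cpu": 0.20, "mem": 0.15, "net": 0.10}, mem_mb=1024),
         VM("vm2", {"cpu": 0.10, "mem": 0.10, "net": 0.10}, mem_mb=512)]
assert is_triggered(h, t)
print([vm.name for vm in select_candidates(h, t)])   # ['vm1']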
A host $h_a$ in NTR is said to be feasible if, after migration, its capacity constraint $C_a$ is not violated. For any $vm_\beta \in CE_a$, the location policy determines the set of potential destination hosts from the feasible hosts in the NTR set. Since we attempt not to increase the number of triggered nodes after the migration of VMs, a feasible node can be a potential destination node only if the VM migration would not turn it into a triggered node. The set of potential destination hosts for any candidate VM $vm_\beta \in CE_a$ is shown in (3).

$EP_a = \{(vm_\beta, h_a) \mid \max\{u_a^i + s_\beta^i / c_a^i, 0\} \le t^i,\ i = 1, 2, \ldots, n,\ A(vm_\beta) \ne h_a,\ h_a \in NTR\}$   (3)

The location policy then associates a cost with each pair $(vm_\beta, h_a)$. The cost is the sum, over the $n$ dimensions, of the resource requirements of $vm_\beta$ divided by the residual capacity of the potential destination host $h_a$, so a host with scarce remaining resources exhibits a higher cost. The formal load-balancing cost function for a VM migration over a pair of hosts $(h_{src}, h_{dst}, vm_\beta)$ is defined in (4).

$Cost_{LB}(h_{src}, h_{dst}, vm_\beta) = \sum_{i=1}^{n} \frac{s_\beta^i}{f_{dst}^i}$   (4)

where $vm_\beta$ resides on $h_{src}$, which belongs to TR, $h_{dst}$ belongs to NTR, $s_\beta^i$ denotes the resource demand of $vm_\beta$ on each resource dimension, $f_{dst}^i$ denotes the available capacity of $h_{dst}$, and $n$ denotes the total number of machine resources. For a candidate VM $vm_\beta \in CE_a$, the location policy selects a destination host out of the potential destination host set according to this cost. We can apply one of the strategies among

best-fit, first-fit, worst-fit, and relaxed-best-fit [4] to determine the destination host. With best-fit, we select the destination host with the smallest cost, so a potential destination host with a higher cost is less likely to be selected as the destination for the candidate VM. However, best-fit requires a linear search over the set of potential destination hosts. Relaxed-best-fit therefore exhibits a better search time by taking the smallest cost from a much smaller set of potential destination hosts randomly chosen from the original set. Relaxed-best-fit also reduces the chance of different VMs picking the same best potential host and thereby increases the number of valid edges in the weighted bipartite graph. NABM applies only relaxed-best-fit (RBF), as in [4].

[Fig. 4: An illustration of our WBG]

For a candidate VM in $CE_a$, relaxed-best-fit selects a destination host to form an edge in $EP_a$. Since multiple candidate VMs residing on the same host could select the same destination host, multiple edges may arise between a triggered and a non-triggered node. We discard all such edges except the edge with the smallest weight. Fig. 4 shows an illustration of a WBG with two triggered nodes and three non-triggered nodes. Since candidate VM 1 and VM 2 on Trigger node 1 both select Non-Trigger node 1 as their potential destination host, two potential edges with their associated weights are formed between Trigger node 1 and Non-Trigger node 1. However, since the solid edge has a smaller weight than the dotted edge, the dotted edge is discarded from the final weighted bipartite graph. For any node pair, the edge set is computed as in (5).

$E = \{(h_{src}, h_{dst}) \mid h_{src} \in TR,\ h_{dst} \in NTR,\ \exists vm_\beta \in CE_{src}:\ \forall vm_j \in CE_{src},\ j \ne \beta,\ Cost_{LB}(h_{src}, h_{dst}, vm_\beta) \le Cost_{LB}(h_{src}, h_{dst}, vm_j)\}$   (5)

Finally, we construct a weighted bipartite graph $G = (V, E, W)$, where $V = TR \cup NTR$, $E$ is the edge set obtained from (5), and $W$ is the cost function defined in (4). After constructing the WBG, NABM applies the Hungarian algorithm [12] to solve the minimum weighted matching problem. Since the Hungarian algorithm solves the WBG instance with a perfect matching, NABM must ensure that the WBG created above has an equal number of nodes in TR and NTR. Thus, the NABM algorithm adds pseudo nodes to either TR or NTR, whichever has fewer nodes, until they have the same number of nodes. Pseudo edges associated with a significantly large value are also added from each pseudo node to all nodes in the other set. Based on each match produced by the Hungarian algorithm, except for the pairs connected by pseudo edges, NABM migrates VMs from triggered nodes to their corresponding non-triggered nodes in parallel to balance the load. Compared with the sequential migrations described earlier, NABM not only migrates VMs concurrently but also avoids the migration overhead due to contention at the end hosts' NICs. NABM iterates the steps of constructing a weighted bipartite graph, finding its minimum weighted bipartite match, and migrating the corresponding VMs in parallel until either there is no triggered node left or the match consists of nothing but pseudo edges. We omit the pseudo-code of NABM due to space limits.
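Since the original pseudo-code is omitted, the following is only a rough sketch of what one matching round could look like. It reuses the Host/VM helpers and select_candidates from the sketch in Section IV-A, substitutes SciPy's linear_sum_assignment for a hand-written Hungarian solver, and uses a large constant for pseudo edges; these choices, and the helper names, are our assumptions rather than the authors' implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian-style assignment solver

BIG = 1e9  # weight assumed for pseudo edges and infeasible pairs

def one_nabm_round(tr_hosts, ntr_hosts, threshold):
    """One matching round: build the TR x NTR cost matrix from Cost_LB (Eq. 4),
    pad with pseudo nodes so the matrix is square, solve the assignment, and
    return the resulting (source, destination, vm) migration pairs."""
    best_vm = {}                         # cheapest candidate VM per (TR, NTR) pair, as in Eq. (5)
    n, m = len(tr_hosts), len(ntr_hosts)
    size = max(n, m)                     # padding rows/columns act as pseudo nodes
    cost = np.full((size, size), BIG)
    for i, src in enumerate(tr_hosts):
        for j, dst in enumerate(ntr_hosts):
            for vm in select_candidates(src, threshold):
                # Location policy feasibility (Eq. 3): destination must stay non-triggered.
                if any(dst.utilization[r] + vm.demand[r] > threshold[r] for r in threshold):
                    continue
                # Cost_LB (Eq. 4): demand over residual capacity, summed over dimensions
                # (demands and utilizations are fractions here, an assumption of this sketch).
                residual = {r: (1.0 - dst.utilization[r]) for r in threshold}
                c = sum(vm.demand[r] / residual[r] for r in threshold)
                if c < cost[i, j]:
                    cost[i, j] = c
                    best_vm[(i, j)] = vm
    rows, cols = linear_sum_assignment(cost)   # minimum weighted perfect matching
    return [(tr_hosts[i], ntr_hosts[j], best_vm[(i, j)])
            for i, j in zip(rows, cols)
            if i < n and j < m and cost[i, j] < BIG]   # drop pseudo-edge matches

The returned pairs would then be migrated concurrently, and the round repeated until no triggered node remains or only pseudo edges are matched, mirroring the iteration described above.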
C. Network-aware Extension

In addition to the weighted bipartite matching, which increases the number of VM migration pairs in each round, the duration of each parallel migration round also plays an important role in reducing the total time until the load is balanced. The time between rounds depends on the longest migration time among all migrations in the round. Since the network hop distance between two hosts affects the migration time [7], we further consider network hop distance in the location policy. Like [6], we assume that the traffic patterns rarely change in a production data center. Even though the traffic distribution could be highly uneven, a balanced traffic distribution can still be obtained in the data center network; for example, the cloud operators could reassign traffic flows among communicating VMs via equal-cost multi-path (ECMP) routing [16] or a centralized network controller such as NOX [17]. Under the even traffic distribution assumption, a VM migration between end hosts with a long network path exhibits a higher probability of a long network delay. NABM therefore extends the cost function in (4) to account for a migration cost related to the network hop distance and the size of the transferred VM memory, as defined in (6): the migration time is proportional to the size of the transferred VM memory and the communication distance, and inversely proportional to the bottleneck NIC bandwidth of the end hosts.

$Cost_{mig}(h_{src}, h_{dst}, vm_\beta) = \frac{s_\beta^{mem}}{\min(f_{src}^{bw}, f_{dst}^{bw})} \cdot D(h_{src}, h_{dst})$   (6)

where $vm_\beta$ resides on $h_{src}$, which belongs to TR, $h_{dst}$ belongs to NTR, $s_\beta^{mem}$ is the memory volume of $vm_\beta$, $f_{src}^{bw}$ and $f_{dst}^{bw}$ denote the available bandwidth for VM migration at the source and destination NICs, respectively, and $D(i, j)$ is the network hop distance between hosts $i$ and $j$. One example of $D(i, j)$ is the hop count between end hosts in data center network architectures such as fat-tree [18] and VL2 [16]; this paper implements fat-tree and VL2 as the underlying network architectures. The extension of NABM takes a normalized combination of the costs (4) and (6), as given in (7).
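As an illustration only, the migration cost of (6) and the normalized combination introduced in (7) below could be computed as in the following sketch; the unit conventions (memory in MB, bandwidth in Mbps), the function names, and the γ = 1.8/1500 value borrowed from the settings in Section V are assumptions made for this example.

def cost_mig(mem_mb: float, bw_src_mbps: float, bw_dst_mbps: float, hops: int) -> float:
    """Migration cost of Eq. (6): transfer volume over the bottleneck NIC
    bandwidth reserved for migration, scaled by the hop distance D(src, dst).
    Unit choices (MB, Mbps) are assumptions of this sketch."""
    return (mem_mb * 8.0 / min(bw_src_mbps, bw_dst_mbps)) * hops

def combined_cost(cost_lb: float, cost_mg: float, alpha: float, gamma: float) -> float:
    """Edge weight of Eq. (7): (1 - alpha) * Cost_LB + alpha * (gamma * Cost_mig).
    gamma rescales the migration cost into the range of Cost_LB (Section V uses
    1.8 / 1500); alpha = 0 ignores the network, alpha = 1 ignores load balance."""
    return (1.0 - alpha) * cost_lb + alpha * (gamma * cost_mg)

# Example: a 1024 MB VM, 100 Mbps reserved at both NICs, 4 hops apart.
w = combined_cost(cost_lb=0.9,
                  cost_mg=cost_mig(1024, 100, 100, hops=4),
                  alpha=0.6, gamma=1.8 / 1500)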

[Fig. 5(a)-(c): Standard deviation of CPU, memory, and network utilization]
[Fig. 6(a)-(c): Average CPU, memory, and network utilization]

$Cost(h_{src}, h_{dst}, vm_\beta) = (1 - \alpha)\, Cost_{LB}(h_{src}, h_{dst}, vm_\beta) + \alpha\, (\gamma\, Cost_{mig}(h_{src}, h_{dst}, vm_\beta))$   (7)

where γ is an adjustable parameter for normalizing the migration cost against the load-balancing cost, and α is the parameter for adjusting the relative importance of the two costs; for example, the network cost is ignored when α is zero. During the construction of the WBG, the edge weight in the location policy is substituted with $Cost(h_{src}, h_{dst}, vm_\beta)$ from (7). This paper conducts a number of event-driven simulation analyses to study how the parameter α affects the time until the system is balanced, the mean VM migration time of each round, and the application performance in Section V.

V. EXPERIMENTAL EVALUATION

This section studies the effectiveness and scalability of the NABM algorithm by comparing it with VectorDot (VD) [4] using the NetworkCloudSim [19] simulation platform.

A. Experimental Settings

To vary the system scale, we simulate 50 to 1050 hosts and 157 to 3244 VMs. The baseline host is equipped with a 2.8 GHz (approximately 12,000 MIPS) quad-core CPU, 4 GB of memory, and a 1024 Mbps NIC. The multi-resource, heterogeneous capacity of each host is generated as baseline capacity * (1 ± heterogeneity degree), with a heterogeneity degree of 0.2. Each VM consumes 12% to 25% of the baseline host capacity in each resource dimension. Each VM runs a large-scale Bag-of-Tasks (BoT) application that is independent and does not need to communicate with other VMs. The workload consists of 287,712,000 million instructions, which take approximately 9 hours to finish on a VM with an 8880 MIPS CPU. Typical examples of such workloads are biological computation, data mining, and scientific engineering applications. 37% of the hosts are over-provisioned and are considered overloaded hosts. The average system utilization along each resource dimension is approximately 57%. In order to accommodate the variance of multi-resource load balancing, the triggered-node threshold is set to 75% along each dimension of resources.

For the network parameters, we assume that the workload in a VM consumes on average 57% of the host NIC bandwidth; this bandwidth is reserved and not used by VM migrations. The fat-tree and VL2 network topologies used in our simulation are built according to [6]. The network parameter α is varied from 0 to 1 in increments of 0.2. When α = 0, NABM does not consider any impact of the network topology. We take a modest network weight of α = 0.6 for the simulation results in Figs. 5, 6, and 7. The available bandwidth of the core and aggregation switches reserved for VM migration is set to 1184 Mbps in the following experiments. The parameter γ is determined as follows. We observe that the load-balancing cost Cost_LB is generally in the range (0, 1.8]. In order to combine the costs of load balancing and migration, we normalize the migration cost into the range (0, 1.8]. Thus we take γ as 1.8 divided by the maximum value of the migration cost, which is in the range [1000, 2800] in our experiments. For the rest of the simulations, we fix γ = 1.8/1500. All performance results are averaged over 10 repeated runs and measured at the moment of system convergence. We use the following metrics to evaluate NABM and VD: the system utilization and standard deviation of the hosts' multi-resource usage, the time it takes for the load-balancing process to reach a balanced state, the number of VM migrations performed during the load-balancing process, and the total application completion time.
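For reference, a simulated population matching these settings could be generated along the following lines. This is a hedged Python sketch, not actual NetworkCloudSim code, and the helper names and random-number choices are ours.

import random

BASELINE = {"cpu": 12000.0, "mem": 4096.0, "net": 1024.0}  # MIPS, MB, Mbps
HET_DEGREE = 0.2       # heterogeneity degree from the settings above
THRESHOLD = 0.75       # trigger threshold per resource dimension

def make_hosts(num_hosts, seed=0):
    """Host capacities drawn as baseline * (1 +/- heterogeneity degree)."""
    rng = random.Random(seed)
    return [{r: c * (1.0 + rng.uniform(-HET_DEGREE, HET_DEGREE))
             for r, c in BASELINE.items()}
            for _ in range(num_hosts)]

def make_vm_demand(rng):
    """Each VM demands 12%-25% of the baseline capacity in every dimension,
    expressed here as a fraction of the baseline host (an assumption)."""
    return {r: rng.uniform(0.12, 0.25) for r in BASELINE}

rng = random.Random(1)
hosts = make_hosts(50)                       # smallest scale in the experiments
vms = [make_vm_demand(rng) for _ in range(157)]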

[Fig. 7(a): Total time taken to reach a balanced state]
[Fig. 7(b): Number of VM migrations to reach a balanced state]
[Fig. 7(c): Average number of VM migrations per round]
[Fig. 7(d): Total batch completion time for different numbers of hosts]
[Fig. 8(a): Fat-tree: number of VM migrations with different hop counts]
[Fig. 8(b): VL2: number of VM migrations with different hop counts]
[Fig. 8(c): Total time taken to reach a balanced state in fat-tree and VL2]
[Fig. 8(d): Total batch completion time in the fat-tree topology]
[Fig. 8(e): STD of CPU utilization for different network parameters]

B. Experimental Results

Fig. 5(a), (b), and (c) and Fig. 6(a), (b), and (c) show the average standard deviation of CPU, memory, and network bandwidth usage and the corresponding average resource utilization, respectively, for different numbers of hosts. Both NABM and VD exhibit a lower resource standard deviation than NOLB, which performs no load balancing. The differences in average standard deviation between NABM and VD are less than 5%, which means that NABM still maintains a degree of balance comparable to VD at the balanced state.

Fig. 7(a) shows that the time for NABM to bring the system to a balanced state is much less than that of VD, especially in a data center with a large number of hosts. For VD, the balancing time increases along with the number of VM migrations, because VD migrates one VM at a time sequentially. In contrast, NABM concurrently migrates VMs according to the number of independent matching pairs. While the average number of VM migrations per round for VD is constant (equal to 1), the average number of VM migrations per round for NABM increases as the number of hosts increases, as shown in Fig. 7(c). This is because as the number of hosts increases, the number of matching pairs in the minimum weighted matching also increases. Fig. 7(d) shows that the total application completion time of the NABM algorithm is better than VD by 1%, 3%, 5%, 7%, 9%, and 10%, respectively, under the different numbers of hosts. This is because NABM handles multiple overloaded hosts at a time, decreasing the load on those hosts and thus decreasing the degree of resource contention among the applications running on the VMs. In contrast, VD handles only one trigger node at a time, and the algorithm needs a longer time to decrease the number of trigger nodes. When the number of hosts increases, there are more VMs in the hotspots competing for resources, and VD takes more rounds (i.e., more time) to alleviate the hot spots.

Fig. 8 shows the VM migration time and the time it takes to reach a load-balanced state under different weights of the network parameter. As the weight of the network parameter increases, NABM selects more hosts with shorter paths for migrating VMs, for both fat-tree and VL2, as shown in Figs. 8(a) and 8(b). Fig. 8(c) shows that the time until the system reaches the balanced state is

reduced as the network parameter increases. This is because migrating VMs along a shorter path improves the mean migration time. In particular, compared with the cases of no or very small network cost, considering the network hop distance (α > 0.2) saves more than half of the time it takes to reach a balanced state. With the decline in migration time, network awareness further improves application performance, as shown in Fig. 8(d). However, a very large network parameter value can still hurt the time until the system reaches balance and the application performance. This is why we pick a network weight of 0.6 in our earlier experimental settings. Fig. 8(e) shows the standard deviation of CPU utilization versus the network weight when NABM reaches a balanced state. As the network weight increases, the standard deviation of CPU utilization also increases, which indicates the tradeoff between VM migration time and load-balancing degree.

VI. CONCLUSION & FUTURE WORKS

We propose a network-aware multi-resource load-balancing scheme using parallel VM migration. We transform the parallel VM migration problem into a minimum weighted matching problem over a weighted bipartite graph in a cloud system. Our algorithm migrates VMs in parallel to minimize the time to bring the system to a balanced state and thus increases the throughput of overloaded hosts. Since the network hop distance between two hosts affects the migration time, we further consider network hop distance in our cost function. Simulation results show that our NABM algorithm improves the throughput on overloaded machines by up to 10% compared with VD. In the future, we will take network bandwidth into consideration. In order to avoid multiple migrations sharing the same network link, a greedy algorithm will be applied to select one VM migration pair at a time until no more migration pairs can be selected; the selected VM migrations can still be performed in parallel.

ACKNOWLEDGEMENT

This work has been supported in part by a grant from D-Link Systems, Inc.

REFERENCES

[1] T. Chieu, A. Mohindra, A. Karve and A. Segal, "Dynamic Scaling of Web Applications in a Virtualized Cloud Computing Environment," in ICEBE, 2009.
[2] Y. Zhao and W. Huang, "Adaptive distributed load balancing algorithm based on live migration of virtual machines in cloud," in Fifth International Joint Conference on INC, IMS and IDC, 2009.
[3] M. Randles, et al., "A comparative study into distributed load balancing algorithms for cloud computing," in IEEE 24th International Conference on Advanced Information Networking and Applications Workshops, 2010.
[4] A. Singh, et al., "Server-storage virtualization: integration and load balancing in data centers," in Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, 2008, p. 53.
[5] V. Shrivastava, et al., "Application-aware virtual machine migration in data centers," in Proceedings of IEEE INFOCOM, 2011.
[6] X. Meng, et al., "Improving the scalability of data center networks with traffic-aware virtual machine placement," in Proceedings of IEEE INFOCOM, 2010.
[7] H. Xu and B. Li, "Egalitarian stable matching for VM migration in cloud computing," in IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2011.
[8] S.-H. Lim, et al., "Migration, assignment, and scheduling of jobs in virtualized environment," in ACM/USENIX Workshop HotCloud, 2011, vol. 40, p. 45.
[9] K. Ye, et al., "Live migration of multiple virtual machines with resource reservation in cloud computing environments," in IEEE International Conference on Cloud Computing, 2011.
[10] D. Arora, et al., "On the benefit of virtualization: Strategies for flexible server allocation," in Proc.
USENIX Workshop on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services (Hot-ICE).
[11] M. Bienkowski, et al., "Competitive analysis for service migration in VNets," in Proceedings of the Second ACM SIGCOMM Workshop on Virtualized Infrastructure Systems and Architectures, 2010.
[12] H. W. Kuhn, "The Hungarian method for the assignment problem," Naval Research Logistics Quarterly, vol. 2, pp. 83-97, 1955.
[13] Xen. Available:
[14] SPECjvm. Available:
[15] K. P. Bubendorfer and J. H. Hine, "A compositional classification for load-balancing algorithms," technical report No. CS-TR-99-9.
[16] A. Greenberg, et al., "VL2: a scalable and flexible data center network," in ACM SIGCOMM Computer Communication Review, 2009.
[17] NOX. Available:
[18] M. Al-Fares, et al., "A scalable, commodity data center network architecture," in ACM SIGCOMM Computer Communication Review, 2008.
[19] S. K. Garg and R. Buyya, "NetworkCloudSim: Modelling Parallel Applications in Cloud Simulations," in Fourth IEEE International Conference on Utility and Cloud Computing, 2011.
[20] D. Kliazovich, et al., "DENS: data center energy-efficient network-aware scheduling," Cluster Computing, vol. 16, issue 1, March 2013.
[21] D. Kliazovich, et al., "Accounting for load variation in energy-efficient data centers," in IEEE International Conference on Communications (ICC), 2013.


More information

2. SYSTEM MODEL. the SLA (unlike the only other related mechanism [15] we can compare it is never able to meet the SLA).

2. SYSTEM MODEL. the SLA (unlike the only other related mechanism [15] we can compare it is never able to meet the SLA). Managng Server Energy and Operatonal Costs n Hostng Centers Yyu Chen Dept. of IE Penn State Unversty Unversty Park, PA 16802 yzc107@psu.edu Anand Svasubramanam Dept. of CSE Penn State Unversty Unversty

More information

How To Solve An Onlne Control Polcy On A Vrtualzed Data Center

How To Solve An Onlne Control Polcy On A Vrtualzed Data Center Dynamc Resource Allocaton and Power Management n Vrtualzed Data Centers Rahul Urgaonkar, Ulas C. Kozat, Ken Igarash, Mchael J. Neely urgaonka@usc.edu, {kozat, garash}@docomolabs-usa.com, mjneely@usc.edu

More information

Calculation of Sampling Weights

Calculation of Sampling Weights Perre Foy Statstcs Canada 4 Calculaton of Samplng Weghts 4.1 OVERVIEW The basc sample desgn used n TIMSS Populatons 1 and 2 was a two-stage stratfed cluster desgn. 1 The frst stage conssted of a sample

More information

Hosting Virtual Machines on Distributed Datacenters

Hosting Virtual Machines on Distributed Datacenters Hostng Vrtual Machnes on Dstrbuted Datacenters Chuan Pham Scence and Engneerng, KyungHee Unversty, Korea pchuan@khu.ac.kr Jae Hyeok Son Scence and Engneerng, KyungHee Unversty, Korea sonaehyeok@khu.ac.kr

More information

Module 2 LOSSLESS IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 2 LOSSLESS IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module LOSSLESS IMAGE COMPRESSION SYSTEMS Lesson 3 Lossless Compresson: Huffman Codng Instructonal Objectves At the end of ths lesson, the students should be able to:. Defne and measure source entropy..

More information

M3S MULTIMEDIA MOBILITY MANAGEMENT AND LOAD BALANCING IN WIRELESS BROADCAST NETWORKS

M3S MULTIMEDIA MOBILITY MANAGEMENT AND LOAD BALANCING IN WIRELESS BROADCAST NETWORKS M3S MULTIMEDIA MOBILITY MANAGEMENT AND LOAD BALANCING IN WIRELESS BROADCAST NETWORKS Bogdan Cubotaru, Gabrel-Mro Muntean Performance Engneerng Laboratory, RINCE School of Electronc Engneerng Dubln Cty

More information

A Self-Organized, Fault-Tolerant and Scalable Replication Scheme for Cloud Storage

A Self-Organized, Fault-Tolerant and Scalable Replication Scheme for Cloud Storage A Self-Organzed, Fault-Tolerant and Scalable Replcaton Scheme for Cloud Storage Ncolas Bonvn, Thanass G. Papaoannou and Karl Aberer School of Computer and Communcaton Scences École Polytechnque Fédérale

More information

Modeling and Analysis of 2D Service Differentiation on e-commerce Servers

Modeling and Analysis of 2D Service Differentiation on e-commerce Servers Modelng and Analyss of D Servce Dfferentaton on e-commerce Servers Xaobo Zhou, Unversty of Colorado, Colorado Sprng, CO zbo@cs.uccs.edu Janbn We and Cheng-Zhong Xu Wayne State Unversty, Detrot, Mchgan

More information

Cost-based Scheduling of Scientific Workflow Applications on Utility Grids

Cost-based Scheduling of Scientific Workflow Applications on Utility Grids Cost-based Schedulng of Scentfc Workflow Applcatons on Utlty Grds Ja Yu, Rakumar Buyya and Chen Khong Tham Grd Computng and Dstrbuted Systems Laboratory Dept. of Computer Scence and Software Engneerng

More information

A Parallel Architecture for Stateful Intrusion Detection in High Traffic Networks

A Parallel Architecture for Stateful Intrusion Detection in High Traffic Networks A Parallel Archtecture for Stateful Intruson Detecton n Hgh Traffc Networks Mchele Colajann Mrco Marchett Dpartmento d Ingegnera dell Informazone Unversty of Modena {colajann, marchett.mrco}@unmore.t Abstract

More information

A New Paradigm for Load Balancing in Wireless Mesh Networks

A New Paradigm for Load Balancing in Wireless Mesh Networks A New Paradgm for Load Balancng n Wreless Mesh Networks Abstract: Obtanng maxmum throughput across a network or a mesh through optmal load balancng s known to be an NP-hard problem. Desgnng effcent load

More information

Scalability of a Mobile Cloud Management System

Scalability of a Mobile Cloud Management System Scalablty of a Moble Cloud Management System Roberto Bfulco Unversty of Napol Federco II roberto.bfulco2@unna.t Marcus Brunner NEC Laboratores Europe brunner@neclab.eu Peer Hasselmeyer NEC Laboratores

More information

An Analysis of Central Processor Scheduling in Multiprogrammed Computer Systems

An Analysis of Central Processor Scheduling in Multiprogrammed Computer Systems STAN-CS-73-355 I SU-SE-73-013 An Analyss of Central Processor Schedulng n Multprogrammed Computer Systems (Dgest Edton) by Thomas G. Prce October 1972 Techncal Report No. 57 Reproducton n whole or n part

More information

IMPACT ANALYSIS OF A CELLULAR PHONE

IMPACT ANALYSIS OF A CELLULAR PHONE 4 th ASA & μeta Internatonal Conference IMPACT AALYSIS OF A CELLULAR PHOE We Lu, 2 Hongy L Bejng FEAonlne Engneerng Co.,Ltd. Bejng, Chna ABSTRACT Drop test smulaton plays an mportant role n nvestgatng

More information

CloudMedia: When Cloud on Demand Meets Video on Demand

CloudMedia: When Cloud on Demand Meets Video on Demand CloudMeda: When Cloud on Demand Meets Vdeo on Demand Yu Wu, Chuan Wu, Bo L, Xuanja Qu, Francs C.M. Lau Department of Computer Scence, The Unversty of Hong Kong, Emal: {ywu,cwu,xjqu,fcmlau}@cs.hku.hk Department

More information

An Evaluation of the Extended Logistic, Simple Logistic, and Gompertz Models for Forecasting Short Lifecycle Products and Services

An Evaluation of the Extended Logistic, Simple Logistic, and Gompertz Models for Forecasting Short Lifecycle Products and Services An Evaluaton of the Extended Logstc, Smple Logstc, and Gompertz Models for Forecastng Short Lfecycle Products and Servces Charles V. Trappey a,1, Hsn-yng Wu b a Professor (Management Scence), Natonal Chao

More information

A DYNAMIC CRASHING METHOD FOR PROJECT MANAGEMENT USING SIMULATION-BASED OPTIMIZATION. Michael E. Kuhl Radhamés A. Tolentino-Peña

A DYNAMIC CRASHING METHOD FOR PROJECT MANAGEMENT USING SIMULATION-BASED OPTIMIZATION. Michael E. Kuhl Radhamés A. Tolentino-Peña Proceedngs of the 2008 Wnter Smulaton Conference S. J. Mason, R. R. Hll, L. Mönch, O. Rose, T. Jefferson, J. W. Fowler eds. A DYNAMIC CRASHING METHOD FOR PROJECT MANAGEMENT USING SIMULATION-BASED OPTIMIZATION

More information

Efficient Bandwidth Management in Broadband Wireless Access Systems Using CAC-based Dynamic Pricing

Efficient Bandwidth Management in Broadband Wireless Access Systems Using CAC-based Dynamic Pricing Effcent Bandwdth Management n Broadband Wreless Access Systems Usng CAC-based Dynamc Prcng Bader Al-Manthar, Ndal Nasser 2, Najah Abu Al 3, Hossam Hassanen Telecommuncatons Research Laboratory School of

More information

Course outline. Financial Time Series Analysis. Overview. Data analysis. Predictive signal. Trading strategy

Course outline. Financial Time Series Analysis. Overview. Data analysis. Predictive signal. Trading strategy Fnancal Tme Seres Analyss Patrck McSharry patrck@mcsharry.net www.mcsharry.net Trnty Term 2014 Mathematcal Insttute Unversty of Oxford Course outlne 1. Data analyss, probablty, correlatons, vsualsaton

More information

Study on Model of Risks Assessment of Standard Operation in Rural Power Network

Study on Model of Risks Assessment of Standard Operation in Rural Power Network Study on Model of Rsks Assessment of Standard Operaton n Rural Power Network Qngj L 1, Tao Yang 2 1 Qngj L, College of Informaton and Electrcal Engneerng, Shenyang Agrculture Unversty, Shenyang 110866,

More information

A Novel Auction Mechanism for Selling Time-Sensitive E-Services

A Novel Auction Mechanism for Selling Time-Sensitive E-Services A ovel Aucton Mechansm for Sellng Tme-Senstve E-Servces Juong-Sk Lee and Boleslaw K. Szymansk Optmaret Inc. and Department of Computer Scence Rensselaer Polytechnc Insttute 110 8 th Street, Troy, Y 12180,

More information

Efficient Striping Techniques for Variable Bit Rate Continuous Media File Servers æ

Efficient Striping Techniques for Variable Bit Rate Continuous Media File Servers æ Effcent Strpng Technques for Varable Bt Rate Contnuous Meda Fle Servers æ Prashant J. Shenoy Harrck M. Vn Department of Computer Scence, Department of Computer Scences, Unversty of Massachusetts at Amherst

More information

How To Understand The Results Of The German Meris Cloud And Water Vapour Product

How To Understand The Results Of The German Meris Cloud And Water Vapour Product Ttel: Project: Doc. No.: MERIS level 3 cloud and water vapour products MAPP MAPP-ATBD-ClWVL3 Issue: 1 Revson: 0 Date: 9.12.1998 Functon Name Organsaton Sgnature Date Author: Bennartz FUB Preusker FUB Schüller

More information

A hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm

A hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm Avalable onlne www.ocpr.com Journal of Chemcal and Pharmaceutcal Research, 2014, 6(7):1884-1889 Research Artcle ISSN : 0975-7384 CODEN(USA) : JCPRC5 A hybrd global optmzaton algorthm based on parallel

More information

How To Solve A Problem In A Powerline (Powerline) With A Powerbook (Powerbook)

How To Solve A Problem In A Powerline (Powerline) With A Powerbook (Powerbook) MIT 8.996: Topc n TCS: Internet Research Problems Sprng 2002 Lecture 7 March 20, 2002 Lecturer: Bran Dean Global Load Balancng Scrbe: John Kogel, Ben Leong In today s lecture, we dscuss global load balancng

More information

Adaptive and Dynamic Load Balancing in Grid Using Ant Colony Optimization

Adaptive and Dynamic Load Balancing in Grid Using Ant Colony Optimization Sandp Kumar Goyal et al. / Internatonal Journal of Engneerng and Technology (IJET) Adaptve and Dynamc Load Balancng n Grd Usng Ant Colony Optmzaton Sandp Kumar Goyal 1, Manpreet Sngh 1 Department of Computer

More information

Agile Traffic Merging for Data Center Networks. Qing Yi and Suresh Singh Portland State University, Oregon June 10 th, 2014

Agile Traffic Merging for Data Center Networks. Qing Yi and Suresh Singh Portland State University, Oregon June 10 th, 2014 Agle Traffc Mergng for Data Center Networks Qng Y and Suresh Sngh Portland State Unversty, Oregon June 10 th, 2014 Agenda Background and motvaton Power optmzaton model Smulated greedy algorthm Traffc mergng

More information

Availability-Based Path Selection and Network Vulnerability Assessment

Availability-Based Path Selection and Network Vulnerability Assessment Avalablty-Based Path Selecton and Network Vulnerablty Assessment Song Yang, Stojan Trajanovsk and Fernando A. Kupers Delft Unversty of Technology, The Netherlands {S.Yang, S.Trajanovsk, F.A.Kupers}@tudelft.nl

More information

EVALUATING THE PERCEIVED QUALITY OF INFRASTRUCTURE-LESS VOIP. Kun-chan Lan and Tsung-hsun Wu

EVALUATING THE PERCEIVED QUALITY OF INFRASTRUCTURE-LESS VOIP. Kun-chan Lan and Tsung-hsun Wu EVALUATING THE PERCEIVED QUALITY OF INFRASTRUCTURE-LESS VOIP Kun-chan Lan and Tsung-hsun Wu Natonal Cheng Kung Unversty klan@cse.ncku.edu.tw, ryan@cse.ncku.edu.tw ABSTRACT Voce over IP (VoIP) s one of

More information

Self-Adaptive SLA-Driven Capacity Management for Internet Services

Self-Adaptive SLA-Driven Capacity Management for Internet Services Self-Adaptve SLA-Drven Capacty Management for Internet Servces Bruno Abrahao, Vrglo Almeda and Jussara Almeda Computer Scence Department Federal Unversty of Mnas Geras, Brazl Alex Zhang, Drk Beyer and

More information

Resource Scheduling in Desktop Grid by Grid-JQA

Resource Scheduling in Desktop Grid by Grid-JQA The 3rd Internatonal Conference on Grd and Pervasve Computng - Worshops esource Schedulng n Destop Grd by Grd-JQA L. Mohammad Khanl M. Analou Assstant professor Assstant professor C.S. Dept.Tabrz Unversty

More information

QoS-Aware Active Queue Management for Multimedia Services over the Internet

QoS-Aware Active Queue Management for Multimedia Services over the Internet QoS-Aware Actve Queue Management for Multmeda Servces over the Internet I-Shyan Hwang, *Bor-Junn Hwang, Pen-Mng Chang, Cheng-Yu Wang Abstract Recently, the multmeda servces such as IPTV, vdeo conference

More information

Chapter 4 ECONOMIC DISPATCH AND UNIT COMMITMENT

Chapter 4 ECONOMIC DISPATCH AND UNIT COMMITMENT Chapter 4 ECOOMIC DISATCH AD UIT COMMITMET ITRODUCTIO A power system has several power plants. Each power plant has several generatng unts. At any pont of tme, the total load n the system s met by the

More information

CLoud computing technologies have enabled rapid

CLoud computing technologies have enabled rapid 1 Cost-Mnmzng Dynamc Mgraton of Content Dstrbuton Servces nto Hybrd Clouds Xuana Qu, Hongxng L, Chuan Wu, Zongpeng L and Francs C.M. Lau Department of Computer Scence, The Unversty of Hong Kong, Hong Kong,

More information

P2P/ Grid-based Overlay Architecture to Support VoIP Services in Large Scale IP Networks

P2P/ Grid-based Overlay Architecture to Support VoIP Services in Large Scale IP Networks PP/ Grd-based Overlay Archtecture to Support VoIP Servces n Large Scale IP Networks We Yu *, Srram Chellappan # and Dong Xuan # * Dept. of Computer Scence, Texas A&M Unversty, U.S.A. {weyu}@cs.tamu.edu

More information

VoIP Playout Buffer Adjustment using Adaptive Estimation of Network Delays

VoIP Playout Buffer Adjustment using Adaptive Estimation of Network Delays VoIP Playout Buffer Adjustment usng Adaptve Estmaton of Network Delays Mroslaw Narbutt and Lam Murphy* Department of Computer Scence Unversty College Dubln, Belfeld, Dubln, IRELAND Abstract The poor qualty

More information

An Optimal Model for Priority based Service Scheduling Policy for Cloud Computing Environment

An Optimal Model for Priority based Service Scheduling Policy for Cloud Computing Environment An Optmal Model for Prorty based Servce Schedulng Polcy for Cloud Computng Envronment Dr. M. Dakshayn Dept. of ISE, BMS College of Engneerng, Bangalore, Inda. Dr. H. S. Guruprasad Dept. of ISE, BMS College

More information

Checkng and Testng in Nokia RMS Process

Checkng and Testng in Nokia RMS Process An Integrated Schedulng Mechansm for Fault-Tolerant Modular Avoncs Systems Yann-Hang Lee Mohamed Youns Jeff Zhou CISE Department Unversty of Florda Ganesvlle, FL 326 yhlee@cse.ufl.edu Advanced System Technology

More information

Cooperative Load Balancing in IEEE 802.11 Networks with Cell Breathing

Cooperative Load Balancing in IEEE 802.11 Networks with Cell Breathing Cooperatve Load Balancng n IEEE 82.11 Networks wth Cell Breathng Eduard Garca Rafael Vdal Josep Paradells Wreless Networks Group - Techncal Unversty of Catalona (UPC) {eduardg, rvdal, teljpa}@entel.upc.edu;

More information

Dynamic Resource Allocation for MapReduce with Partitioning Skew

Dynamic Resource Allocation for MapReduce with Partitioning Skew Ths artcle has been accepted for publcaton n a future ssue of ths journal, but has not been fully edted. Content may change pror to fnal publcaton. Ctaton nformaton: DOI 1.119/TC.216.253286, IEEE Transactons

More information

SMART: Scalable, Bandwidth-Aware Monitoring of Continuous Aggregation Queries

SMART: Scalable, Bandwidth-Aware Monitoring of Continuous Aggregation Queries : Scalable, Bandwdth-Aware Montorng of Contnuous Aggregaton Queres Navendu Jan, Praveen Yalagandula, Mke Dahln, and Yn Zhang Unversty of Texas at Austn HP Labs ABSTRACT We present, a scalable, bandwdth-aware

More information