Fair Virtual Bandwidth Allocation Model in Virtual Data Centers

Ying Yuan, Cui-rong Wang, Cong Wang
School of Information Science and Engineering, Northeastern University, Shenyang, China
School of Computer and Communication Engineering, Northeastern University at Qinhuangdao, Qinhuangdao, China
yuanying111@gmail.com

ABSTRACT: Network virtualization opens a promising way to support diverse applications over a shared substrate by running multiple virtual networks. Current virtual data centers in cloud computing have flexible mechanisms to partition compute resources but provide little control over how tenants share the network. This paper proposes a utility-maximization model for bandwidth sharing between virtual networks in the same substrate network. The aim is to improve the fairness of virtual bandwidth allocation for multiple tenants, especially for virtual links which share the same physical link and carry mutually unfriendly traffic. We first give a basic model and then prove that the model can be split into many sub-models which can run on each rack-switch port. In the sub-model, every physical link is associated with a fairness-index constraint in the utility-maximization calculation. The goal is to limit the differences among the bandwidth allocations of the virtual links which share the same physical link. Experimental results show that the virtual bandwidth allocation between tenants is more reasonable.

Categories and Subject Descriptors: C.2.6 [Internetworking]: Standards; C.2 [COMPUTER-COMMUNICATION NETWORKS]: Data Communications

General Terms: Network Protocols, Network Virtualization, Virtual Bandwidth Allocation

Keywords: Network Virtualization, Multi-tenant, Bandwidth Allocation, Virtual Data Centers

Received: 1 February 2013, Revised 16 March 2013, Accepted 25 March 2013

1. Introduction

Cloud computing delivers infrastructure, platform, and software that are made available as subscription-based services in a pay-as-you-go model to consumers [1]. It aims to power the next-generation data centers as the enabling platform for dynamic and flexible application provisioning. This is facilitated by exposing data center capabilities as a network of virtual services, so that users are able to access and deploy applications from anywhere, on demand, with QoS (Quality of Service) requirements [2]. In practical applications, data centers which are built using virtualization technology, with virtual machines as the basic processing elements, are called virtual data centers (VDCs) [3]. Compared with traditional data centers, VDCs provide significant merits such as server consolidation, high availability, and live migration, together with flexible resource management mechanisms. Therefore, VDCs are widely used as the infrastructure of existing cloud computing systems [4, 5].

To achieve cost efficiency and on-demand scaling, VDCs are highly multiplexed shared environments, with VMs and tasks from multiple tenants coexisting in the same cluster. While VDCs provide many mechanisms to schedule local compute, memory, and disk resources [6], existing mechanisms for apportioning network resources in current VDCs fall short. End-host mechanisms such as TCP congestion control are widely deployed to determine network sharing today via a notion of flow-based fairness. However, TCP does little to isolate tenants from one another: poorly designed or malicious applications can consume network capacity, to the detriment of other
applications which generate non-TCP-friendly flows. Thus, while resource allocation using TCP is scalable and achieves high network utilization, it does not provide robust performance isolation.

Network virtualization can mitigate the ossifying forces of the current Ethernet and stimulate innovation by enabling diverse network architectures to cohabit on a shared physical substrate [7]. It can therefore support diverse applications over a shared substrate by running multiple virtual networks, each customized for different performance objectives. Leveraging network virtualization technology, the VDC supervisor can classify the virtual machines of the same application provider (users who rent VDC resources: CPU, RAM, storage, and server bandwidth) into the same virtual network, so as to achieve the agility needed by cloud computing and to support multi-tenant demand. In this setting, in order to save energy and improve hardware resource utilization, virtual machines (VMs) which belong to different users may be placed on a single physical machine. In the cloud computing environment there are competition issues between these users: VMs on the same physical machine compete for network resources, because each wants to occupy enough bandwidth. In some extreme cases there may even be malicious competition, where some VMs try to fill the link to harm an opponent regardless of whether they really need so much network resource. Therefore the supervisor should, first of all, ensure the fairness of bandwidth allocation between those virtual networks.

This paper makes the case for virtual bandwidth allocation between virtual links in multi-tenant VDCs that provides more fairness and proactively prevents network congestion. We first give a basic model and then prove that the model can be split into many sub-models which can run on each rack-switch port. In the sub-model, every physical link is associated with a fairness-index constraint in the utility-maximization calculation. The goal is to limit the differences among the bandwidth allocations of the virtual links which share the same physical link.

The rest of the paper is organized as follows. Section 2 gives a basic utility-maximization model for bandwidth sharing between virtual networks of multiple tenants. Section 3 proves that the basic model can be split into many sub-models which can run on each rack-switch port and introduces the sub-model in detail. Section 4 presents the performance evaluation, and Section 5 concludes the paper.

2. Basic Model

While VDCs provide many mechanisms to schedule local compute, memory, and disk resources [8], existing mechanisms for apportioning network resources fall short. For example, end-host mechanisms such as TCP congestion control are widely deployed to determine network sharing today via a notion of flow-based fairness. However, TCP does little to isolate tenants from one another: poorly designed or malicious applications can consume network capacity, to the detriment of other applications which generate non-TCP-friendly flows [9]. Thus, while resource allocation using TCP is scalable and achieves high network utilization, it does not provide robust performance isolation. There is a trade-off between ensuring isolation and retaining high network utilization. Bandwidth reservations, as realized by a host of mechanisms, are either overly conservative at low load, which yields poor network utilization, or overly lenient at high load, which yields poor isolation. An ideal bandwidth isolation solution for cloud data centers has to scale and retain high network utilization. It has to do so without assuming well-behaved or TCP-conformant tenants.
Since changes to the NICs and switches are expensive, take time to standardize and deploy, and are hard to customize once deployed, edge- or software-based solutions are preferable. Network virtualization is a promising way to support diverse applications over a shared substrate by running multiple virtual networks, each customized for different performance objectives. The VDC supervisor can classify the virtual machines of the same application into the same virtual network so as to achieve the agility needed by cloud computing. In this setting, the fabric owner acts as the substrate provider supplying the physical hardware, while service providers rent virtual machines and virtual networks to run their applications and deliver services; this achieves a flexible lease mechanism [10] which is more suitable for modern commercial operations. Today, network virtualization is moving from fantasy to reality, as many major router vendors start to support both switch virtualization and switch programmability.

In the main line of thought of network virtualization, virtual machines of the same application in a data center are partitioned into the same virtual network by network slicing. In this case, for QoS guarantees, the most important thing in the VDC network is the bandwidth guarantee on each virtual link, i.e., the virtual bandwidth allocation mechanism must be well considered. We will first give a basic model for bandwidth allocation between virtual networks of multiple tenants, and then prove that the global problem can be divided into subproblems on each rack-switch port.

The virtual bandwidth sharing problem in this section is considered from the overall perspective of the system, which contains the virtual networks and the substrate hardware resources of a VDC under the multi-tenant market mechanism of cloud computing. Consider a substrate network with a set $L$ of links, and let $C_l$ be the capacity of substrate link $l \in L$. The network is shared by a set $N$ of virtual networks, indexed by $i$. Define $b_l = (b_l^i,\, i \in N)$ as the vector of bandwidths allocated on link $l$, where $b_l^i$ denotes the bandwidth allocated to virtual network $i$ on link $l$. Let $U_i(b_l^i)$ be the utility of virtual network $i$ as a function of its bandwidth $b_l^i$. Note that the utility $U_i(b_l^i)$ should be an increasing, nonnegative, and twice continuously
differentiable function of $b_l^i$ over the range $b_l^i \geq 0$. The bandwidth control problem can be formulated as the following optimization problem P1:

P1: $\max \sum_{i=1}^{N} \sum_{l=1}^{L} U_i(b_l^i)$

s.t. $b_l^i \geq 0, \quad \forall i \in N,\ \forall l \in L$;
$\sum_{i=1}^{N} b_l^i \leq C_l, \quad \forall l \in L$;
$F(b_l) \leq D, \quad \forall l \in L$.

The meanings of the constraints are: (1) for each virtual network $i$, its bandwidth must be greater than or equal to 0; (2) the total bandwidth of all virtual networks on physical link $l$ must not exceed the maximum bandwidth $C_l$ allowed by the hardware; (3) $F(b_l) \leq D$ is an additional condition introducing the fairness constraint, which we will discuss in the next section, where $D$ is a constant. Since the utility functions are strictly concave, and hence continuous, and the feasible solution set is compact, the above optimization problem has a unique optimal solution.

P1 is an overall problem with multiple constraints. Each virtual network $i$ can calculate its profit for the allocated virtual bandwidth $b_l^i$, and the goal is to maximize the sum utility of all virtual networks. However, problem P1 for virtual bandwidth allocation in a VDC is still a global problem. Obviously, it is impossible to manage the vast number of virtual nodes in a large-scale network such as a VDC centrally. There is no gain in using the local interpretation unless we can devise a local way to solve the problem.

3. Distributed Sub-Model

In practical VDCs, addressing a complex optimization system such as P1 in a distributed manner is the efficient way. In this section, we divide the overall problem into sub-problems which can be solved on the ports of the rack switches in the VDC.

The overall optimization problem P1 can be split into several distributed problems. Recall from the previous section that each $b_l^i$ in the vector $b^i = (b_l^i,\, l \in L)$ denotes the bandwidth allocated to virtual network $i$ on link $l$. For any virtual network's link set $L^i$ we have $L^i \subseteq L$. Taking into account the diversification and variability of virtual network topologies, we need to regularize the dimension of each $L^i$ and make it equal to the dimension of $L$. Let $\bar{L}^i$ be the new link set of virtual network $i$; the two link sets are equivalent:

$L^i \to \bar{L}^i: \quad \forall l \in L,\ \bar{L}_l^i = 1 \text{ if } l \in L^i; \quad \bar{L}_l^i = 0 \text{ if } l \notin L^i. \qquad (1)$

Here $\bar{L}_l^i = 0$ means that the bandwidth allocated to virtual network $i$ on link $l$ is 0. Then the overall optimization problem P1 can be rewritten as:

$\max \sum_{i=1}^{N} \sum_{l=1}^{L} \bar{L}_l^i\, U_i(b_l^i) = \sum_{i=1}^{N} U_i(b_1^i) + \sum_{i=1}^{N} U_i(b_2^i) + \cdots + \sum_{i=1}^{N} U_i(b_L^i) \qquad (2)$

Because the utility function $U_i(x)$ is nonnegative over the range $x \geq 0$, the overall optimization problem P1 can be split into $L$ sub-optimization problems, one on each network port of the rack switches in the data center. When the bandwidth allocation among the virtual links of every virtual network on each physical link of a rack switch is optimal, the overall optimization problem P1 is solved simultaneously. This distributed sub-model is feasible in a large-scale data center network and can be easily realized.

By now we have demonstrated that the overall problem P1 can be split into a number of distributed sub-problems (equation (2)) which can be solved on the rack switches of VDCs through software-based solutions leveraging the network virtualization technique over programmable switches. We can now discuss the details of the sub-model for a real system. First, we obtain the sub-optimization problems:

$\max \sum_{i=1}^{n} \omega_i U_i(b_i)$
s.t. $b_i \geq 0, \quad \sum_{i=1}^{n} b_i \leq C, \quad F(b) \leq D \qquad (3)$

where $\omega_i$ is the weight of virtual network $i$, $n$ is the number of virtual networks, and $b_i$ denotes the bandwidth allocated to virtual network $i$ on the link under consideration. $U_i(b_i)$ is the utility function discussed in the previous section, and the vector $b = (b_1, b_2, \ldots, b_n)$ is a solution of problem (3). Because the feasible region is compact and the objective function is continuous, an optimal solution exists.
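To make the sub-problem concrete, the following minimal sketch solves one instance of (3) numerically. It assumes logarithmic utilities $U_i(b) = \ln(1+b)$ (the paper only requires $U_i$ to be increasing, concave, and nonnegative) and uses the fairness index $F$ in the form given by equation (5) below; the function names and the choice of SciPy's SLSQP solver are illustrative, not part of the paper's system.

import numpy as np
from scipy.optimize import minimize

def fairness_index(b):
    # F(b) = n * sum(b_i^2) / (sum(b_i))^2, as in equation (5); F = 1 is perfectly fair
    b = np.asarray(b, dtype=float)
    return len(b) * np.sum(b**2) / (np.sum(b)**2 + 1e-12)

def solve_subproblem(weights, C, D):
    """Solve sub-problem (3) on one rack-switch port:
    maximize sum_i w_i * U_i(b_i)  s.t.  b_i >= 0, sum_i b_i <= C, F(b) <= D."""
    w = np.asarray(weights, dtype=float)
    n = len(w)
    utility = np.log1p                                # assumed U_i(b) = ln(1 + b)
    objective = lambda b: -np.sum(w * utility(b))     # minimize the negated utility
    constraints = [
        {"type": "ineq", "fun": lambda b: C - np.sum(b)},          # capacity
        {"type": "ineq", "fun": lambda b: D - fairness_index(b)},  # fairness
    ]
    b0 = np.full(n, C / n)                            # equal split: feasible, F(b0) = 1
    res = minimize(objective, b0, bounds=[(0.0, C)] * n,
                   constraints=constraints, method="SLSQP")
    return res.x

# Example: two equally weighted virtual networks on a unit-capacity link
print(solve_subproblem([1.0, 1.0], C=1.0, D=1.1))     # -> roughly [0.5, 0.5]

With equal weights the fairness constraint is inactive at the optimum (the equal split already has $F = 1$), so this example mainly exercises the capacity constraint; with unequal weights the bound $F(b) \leq D$ limits how far the allocation can tilt toward the heavier tenant.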
We form the Lagrangian for problem (3):

$\mathcal{L}(b, \lambda, \mu) = \sum_{i=1}^{n} \omega_i U_i(b_i) + \lambda \left( C - \sum_{i=1}^{n} b_i \right) + \mu \left( D - F(b) \right) \qquad (4)$

Here the second term is a penalty for the physical bandwidth capacity constraint; the third term is an additive penalty used for the fairness constraint on virtual bandwidth allocation in our model. Thus we need a way to quantify the fairness of a bandwidth allocation, which can be defined as follows:

$F(b) = \frac{n \left( b_1^2 + b_2^2 + \cdots + b_n^2 \right)}{\left( \sum_{i=1}^{n} b_i \right)^2} \qquad (5)$

Equation (5) reflects the fairness of resource allocation in the system. The range of $F(\cdot)$ is $[1, +\infty)$, and the closer $F(\cdot)$ is to 1, the fairer the bandwidth allocation. However, we must also consider the weights $\omega_i$ of the virtual networks, according to which bandwidth should be allocated. Obviously, if all virtual networks have the same weight, then $F(\cdot) = 1$ means the allocation is completely fair. But when the weights are unequal, $F(\cdot) = 1$ no longer corresponds to a completely fair allocation; thus we must account for the weight of each virtual network. Substituting $\omega_i$ for $b_i$:

$f = \frac{n \left( \omega_1^2 + \omega_2^2 + \cdots + \omega_n^2 \right)}{\left( \sum_{i=1}^{n} \omega_i \right)^2} \qquad (6)$
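As a quick numerical illustration of (6) (a minimal sketch; the example weights are hypothetical, and the role of $f$ as a lower bound on $D$ is discussed in the next paragraph): $f$ is exactly the value $F(b)$ takes when bandwidth is split in proportion to the weights, so it marks the fair operating point.

import numpy as np

def weighted_fair_point(weights):
    # f = n * sum(w_i^2) / (sum(w_i))^2, equation (6): the value of F(b)
    # when bandwidth is allocated exactly in proportion to the weights
    w = np.asarray(weights, dtype=float)
    return len(w) * np.sum(w**2) / np.sum(w)**2

print(weighted_fair_point([1.0, 1.0]))   # 1.0  -> any D >= 1 admits the fair split
print(weighted_fair_point([1.0, 3.0]))   # 1.25 -> D must be at least 1.25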
In normal conditions $1 \leq f < +\infty$, and $f = 1$ means every virtual network has the same weight. Thus the constant $D$ in the optimization model (3) should be no less than $f$, because when the weights of the virtual networks are unequal, the completely fair allocation solution exists at $D = f$.

At the end of this section, we discuss the existence of an optimal solution of problem (3). Substituting (5) into (4):

$\mathcal{L}(b, \lambda, \mu) = \sum_{i=1}^{n} \omega_i U_i(b_i) + \lambda \left( C - \sum_{i=1}^{n} b_i \right) + \mu \left( D - \frac{n \left( b_1^2 + b_2^2 + \cdots + b_n^2 \right)}{\left( \sum_{i=1}^{n} b_i \right)^2} \right) \qquad (7)$

The Slater constraint qualification holds for problem (3), because at the point $b = 0$ we have $0 = \sum_{i=1}^{n} b_i < C$; this guarantees the existence of the Lagrange multipliers $\lambda$ and $\mu$. In other words, because the objective function is concave and the feasible region is convex, there exists at least one feasible vector $b$ that is optimal.

4. Performance Evaluation

We implemented the distributed virtual bandwidth allocation scheme discussed in Section 3 using OpenFlow VM [11]. We made a simple but sufficiently persuasive experiment, as shown in Figure 1. On virtual link 1, VM1 sends TCP traffic to VM3; on virtual link 2, VM2 sends UDP traffic to VM4. Both traffic flows are generated by Iperf, and the physical link bandwidth is set to 1 Gbps.

[Figure 1. Experimental topology]

As is well known from the network congestion control problem [9], without any virtual bandwidth control mechanism, when the two virtual machines both send traffic as fast as they can so as to occupy the full bandwidth of the physical link, the non-TCP-friendly UDP traffic dominates, and the TCP traffic is only rarely able to deliver a few packets successfully (as shown in Figure 2, the first experiment on the topology of Figure 1). The main cause of this situation is the TCP congestion control mechanism: when congestion occurs, the sliding window limits the sending rate to a small value.

[Figure 2. Without network virtualization: bandwidth (Gb/s) of the TCP and UDP traffic over time (s)]

Then we built virtual networks for the two traffic flows (V1 for the TCP traffic and V2 for the UDP traffic) and verified the fair virtual bandwidth allocation model discussed in this paper on the same topology (Figure 1). We set $n = 2$, $D = 1.1$, $C = 1$, and the weight of each virtual network, V1 and V2, to 1. Figure 3 shows the bandwidth consumption of the two mutually unfriendly traffic flows after network slicing under the optimization model proposed in this paper. From the experimental results we can see that the model presented in this paper achieves fair bandwidth allocation between virtual links hosted on the same physical link.

[Figure 3. With network virtualization and the fair bandwidth allocation model: bandwidth (Gb/s) of V1 and V2 over time (s)]
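A back-of-the-envelope check of what the experiment's fairness constraint permits (a sketch using the paper's parameters $n = 2$, $C = 1$, $D = 1.1$; the share variable $r$ is introduced here only for illustration): for two tenants, $F$ depends only on the first tenant's share $r = b_1 / (b_1 + b_2)$, so $F(r) = 2\,(r^2 + (1-r)^2) \leq 1.1$ reduces to the quadratic inequality $4r^2 - 4r + 0.9 \leq 0$.

import numpy as np

# F(r) = 2 * (r**2 + (1 - r)**2); F(r) <= D = 1.1 becomes 4*r**2 - 4*r + 0.9 <= 0
disc = np.sqrt((-4.0)**2 - 4 * 4.0 * 0.9)           # discriminant = sqrt(1.6)
r_lo, r_hi = (4.0 - disc) / 8.0, (4.0 + disc) / 8.0
print(f"allowed share per tenant: {r_lo:.3f} .. {r_hi:.3f}")   # ~0.342 .. 0.658

So with $D = 1.1$ neither tenant can hold more than about 66% of the 1 Gbps link, which is consistent with the fair split reported in Figure 3.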
5. Conclusion

This paper introduced a utility-maximization model for fair bandwidth sharing between virtual networks in VDCs, providing a multi-tenant mechanism for network resources under cloud computing. In the model, every physical link is associated with a fairness-index constraint in the utility-maximization calculation. The goal is to limit the differences among the bandwidth allocations of the virtual links which share the same physical link. In future work, we will examine the trade-off between fairness and bandwidth utilization under conditions where not all virtual networks need a bandwidth guarantee, i.e., dynamic bandwidth allocation that takes fairness into account.

References

[1] Armbrust, M., Fox, A., Griffith, R., Joseph, A., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., Zaharia, M. (2010). A View of Cloud Computing. Communications of the ACM, 53 (4) 50-58.

[2] Buyya, R., Yeo, C. S., Venugopal, S., Broberg, J., Brandic, I. (2009). Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility. Future Generation Computer Systems, 25 (6) 599-616.

[3] Xu, J., Zhao, M., Fortes, J., Carpenter, R., Yousif, M. (2007). On the Use of Fuzzy Modeling in Virtualized Data Center Management. ICAC '07: Proceedings of the 4th International Conference on Autonomic Computing, p. 395-400, Hong Kong, China. IEEE.

[4] VMware. (2012). VMware vCloud. http://www.vmware.com/technology/cloud-computing.html

[5] Amazon. (2012). Amazon Elastic Compute Cloud (Amazon EC2). http://aws.amazon.com/ec2/

[6] Yan, W., Zhou, L., Lin, C. (2011). VM Co-scheduling: Approximation of Optimal Co-scheduling in Data Center. AINA '11: Proceedings of the 25th International Conference on Advanced Information Networking and Applications, p. 340-347, Biopolis, Singapore. IEEE.

[7] Khan, A., Zugenmaier, A., Jurca, D., Kellerer, W. (2012). Network Virtualization: A Hypervisor for the Internet? IEEE Communications Magazine, 50 (1) 136-143.

[8] Goiri, Í., Berral, J. L., Fitó, J. O., Julià, F., Nou, R., Guitart, J., Gavaldà, R., Torres, J. (2012). Energy-efficient and Multifaceted Resource Management for Profit-driven Virtualized Data Centers. Future Generation Computer Systems, 28 (5) 718-731.

[9] Zhang, P., Wang, H., Cheng, S. (2011). Shrinking MTU to Mitigate TCP Incast Throughput Collapse in Data Center Networks. CMC '11: Proceedings of the 3rd International Conference on Communications and Mobile Computing, p. 16-19.

[10] Shieh, A., Kandula, S., Greenberg, A., Kim, C. (2010). Seawall: Performance Isolation for Cloud Datacenter Networks. HotCloud '10: Proceedings of the 2nd USENIX Workshop on Hot Topics in Cloud Computing, p. 1-1, Boston, MA, USA. USENIX.

[11] Stanford Clean Slate Program. (2012). The OpenFlow Switch Consortium. http://www.openflowswitch.org/