Fair Virtual Bandwidth Allocation Model in Virtual Data Centers




Ying Yuan, Cui-rong Wang, Cong Wang
School of Information Science and Engineering, Northeastern University, Shenyang, China
School of Computer and Communication Engineering, Northeastern University at Qinhuangdao, Qinhuangdao, China
yuanying111@gmail.com

Journal of Digital Information Management

ABSTRACT: Network virtualization opens a promising way to support diverse applications over a shared substrate by running multiple virtual networks. Current virtual data centers in cloud computing have flexible mechanisms to partition compute resources but provide little control over how tenants share the network. This paper proposes a utility-maximization model for bandwidth sharing between virtual networks in the same substrate network. The aim is to improve the fairness of virtual bandwidth allocation for multiple tenants, especially for virtual links which share the same physical link and carry non-friendly traffic. We first give a basic model and then prove that it can be split into sub-models which can run on each rack-switch port. In the sub-model, every physical link is associated with a fairness-index constraint in the utility-maximization calculation. The goal is to limit the differences among the bandwidth allocations of the virtual links which share the same physical link. Experimental results show that the virtual bandwidth allocation between tenants is more reasonable.

Categories and Subject Descriptors: C.2.6 [Internetworking]: Standards; C.2 [COMPUTER-COMMUNICATION NETWORKS]: Data Communications

General Terms: Network Protocols, Network Virtualization, Virtual Bandwidth Allocation

Keywords: Network Virtualization, Multi-tenant, Bandwidth Allocation, Virtual Data Centers

Received: 1 February 2013, Revised 16 March 2013, Accepted 25 March 2013

1. Introduction

Cloud computing delivers infrastructure, platform, and software that are made available as subscription-based services in a pay-as-you-go model to consumers [1]. It aims to power next-generation data centers as the enabling platform for dynamic and flexible application provisioning. This is facilitated by exposing the data center's capabilities as a network of virtual services, so that users can access and deploy applications from anywhere, on demand and according to their QoS (Quality of Service) requirements [2]. In practice, data centers which are built using virtualization technology with virtual machines as the basic processing elements are called virtual data centers (VDCs) [3]. Compared with traditional data centers, VDCs provide significant merits such as server consolidation, high availability, live migration, and flexible resource management mechanisms. Therefore, VDCs are widely used as the infrastructure of existing cloud computing systems [4, 5].

To achieve cost efficiencies and on-demand scaling, VDCs are highly multiplexed shared environments, with VMs and tasks from multiple tenants coexisting in the same cluster. While VDCs provide many mechanisms to schedule local compute, memory, and disk resources [6], existing mechanisms for apportioning network resources in current VDCs fall short. End-host mechanisms such as TCP congestion control are widely deployed to determine network sharing today via a notion of flow-based fairness. However, TCP does little to isolate tenants from one another: poorly designed or malicious applications can consume network capacity to the detriment of other applications which generate non-TCP-friendly flows.

Thus, while resource allocation using TCP is scalable and achieves high network utilization, it does not provide robust performance isolation.

Network virtualization can extenuate the ossifying forces of the current Ethernet and stimulate innovation by enabling diverse network architectures to cohabit on a shared physical substrate [7]. It can therefore support diverse applications over a shared substrate by running multiple virtual networks, each customized for a different performance objective. Leveraging network virtualization technology, the VDC supervisor can classify the virtual machines of the same application provider (the users who rent VDC resources such as CPU, RAM, storage, and server bandwidth) into the same virtual network, so as to achieve the agility needed by cloud computing and support multi-tenant demand. Under these conditions, in order to save energy and improve hardware resource utilization, virtual machines (VMs) belonging to different users may be placed on a single physical machine. In a cloud computing environment this creates competition issues between users: VMs on the same physical machine compete for network resources, because each wants to occupy enough bandwidth. In some extreme cases there may even be malicious competition, where some VMs try to fill up the bandwidth to harm their opponents regardless of whether they really need that much network capacity. Therefore, the supervisor should first of all ensure the fairness of bandwidth allocation between these virtual networks.

This paper makes the case for virtual bandwidth allocation between virtual links in multi-tenant VDCs, in order to provide more fairness and to proactively prevent network congestion. We first give a basic model and then prove that it can be split into sub-models which can run on each rack-switch port. In the sub-model, every physical link is associated with a fairness-index constraint in the utility-maximization calculation. The goal is to limit the differences among the bandwidth allocations of the virtual links which share the same physical link.

The rest of the paper is organized as follows. Section 2 gives a basic utility-maximization model for bandwidth sharing between the virtual networks of multiple tenants. Section 3 proves that the basic model can be split into sub-models which can run on each rack-switch port and introduces the sub-model in detail. Section 4 presents the performance evaluation, and Section 5 concludes the paper.

2. Basic Model

While VDCs provide many mechanisms to schedule local compute, memory, and disk resources [8], existing mechanisms for apportioning network resources fall short. For example, end-host mechanisms such as TCP congestion control are widely deployed to determine network sharing today via a notion of flow-based fairness. However, TCP does little to isolate tenants from one another: poorly designed or malicious applications can consume network capacity to the detriment of other applications which generate non-TCP-friendly flows [9]. Thus, while resource allocation using TCP is scalable and achieves high network utilization, it does not provide robust performance isolation.

There is a trade-off between ensuring isolation and retaining high network utilization. Bandwidth reservations, as realized by a host of mechanisms, are either overly conservative at low load, which yields poor network utilization, or overly lenient at high load, which yields poor isolation. An ideal bandwidth isolation solution for cloud data centers has to scale and retain high network utilization, and it has to do so without assuming well-behaved or TCP-conformant tenants.
Since changes to the NICs and switches are expensive, take time to standardize and deploy, and are hard to customize once deployed, edge- or software-based solutions are preferable. Network virtualization is a promising way to support diverse applications over a shared substrate by running multiple virtual networks, each customized for a different performance objective. The VDC supervisor can classify the virtual machines of the same application into the same virtual network so as to achieve the agility needed by cloud computing. In this setting, the fabric owner acts as the substrate provider supplying the physical hardware, while service providers rent virtual machines and virtual networks to run their applications and offer services; this enables a flexible lease mechanism [10] which is well suited to modern commercial operations. Today, network virtualization is moving from fantasy to reality, as many major router vendors start to support both switch virtualization and switch programmability.

Following the main idea of network virtualization, virtual machines of the same application in a data center are partitioned into the same virtual network by network slicing. In this case, for QoS guarantees, the most important issue in the VDC network is the bandwidth guarantee on each virtual link, i.e. the virtual bandwidth allocation mechanism must be carefully designed. We will first give a basic model for bandwidth allocation between the virtual networks of multiple tenants, and then prove that the global problem can be divided into sub-problems on each rack-switch port.

The virtual bandwidth sharing problem in this section is considered from the overall perspective of the system, which contains the virtual networks and the substrate hardware resources of a VDC under the multi-tenant market mechanism of cloud computing. Consider a substrate network with a set L of links, and let C_l be the capacity of substrate link l ∈ L. The network is shared by a set of N virtual networks indexed by i. Define a vector b_i = (b_il, l ∈ L), where b_il denotes the bandwidth allocated to virtual network i on link l. Let U_i(b_il) be the utility of virtual network i as a function of its bandwidth b_il. Note that the utility U_i(b_il) should be an increasing, nonnegative and twice continuously differentiable function of b_il over the range b_il ≥ 0.
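The model leaves the tenant utility U_i abstract. As a minimal sketch, assuming a weighted logarithmic utility (a common choice, but not one prescribed by the paper; the function names are illustrative), a utility meeting the stated requirements could look as follows:

```python
import numpy as np

def utility(b, w=1.0):
    """One concrete choice of tenant utility: U(b) = w * log(1 + b).
    It is nonnegative, increasing, strictly concave and twice
    continuously differentiable for b >= 0, as the model requires."""
    return w * np.log1p(b)

def utility_prime(b, w=1.0):
    """First derivative U'(b) = w / (1 + b); useful for
    gradient-based solvers of the allocation problems below."""
    return w / (1.0 + b)
```

Any other utility with these properties (for example an alpha-fair utility) would fit the model equally well.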

The bandwidth control problem can then be formulated as the following optimization problem P1:

P1:  \max \sum_{i=1}^{N} \sum_{l=1}^{L} U_i(b_{il})
     \text{s.t.} \quad b_{il} \ge 0, \ \forall i, \forall l; \qquad \sum_{i=1}^{N} b_{il} \le C_l, \ \forall l \in L; \qquad F(b) \le D

The meaning of the constraints is as follows: (1) for each virtual network i, its bandwidth on every link must be greater than or equal to 0; (2) the total bandwidth of all virtual networks on physical link l must not exceed the maximum bandwidth C_l allowed by the hardware; (3) F(b) ≤ D is an additional condition that introduces the fairness constraint, which we discuss in the next section, and D is a constant. Since the utility functions are strictly concave, and hence continuous, and the feasible solution set is compact, the above optimization problem has a unique optimal solution.

P1 is an overall problem with multiple constraints. Each virtual network i can calculate its profit for the allocated virtual bandwidth b_il, and the goal is to maximize the sum utility of all virtual networks. However, problem P1 for virtual bandwidth allocation in a VDC is still a global issue. Obviously, it is impossible to manage the vast number of virtual nodes of a large-scale network such as a VDC centrally; there is no gain in this formulation unless we can devise a local way to solve the problem.

3. Distributed Sub-Model

In practical VDCs, a distributed approach is an efficient way to handle a complex optimization system such as P1. In this section, we divide the overall problem into sub-problems which can be solved on the ports of the rack switches in the VDC.

The overall optimization problem P1 can be split into several distributed problems. Recall from the previous section that each b_il in the vector b_i = (b_il, l ∈ L) denotes the bandwidth allocated to virtual network i on link l. For the set L_i of links actually used by virtual network i we have L_i ⊆ L. Taking into account the diversity and variability of virtual network topologies, we pad every L_i so that its dimension equals the dimension of L. Let L'_i be the new link set of virtual network i; the two link sets are equivalent in the following sense:

L_i \subseteq L'_i \subseteq L, \quad \forall l \in L: \quad \delta_{il} = 1 \ \text{if} \ l \in L_i, \qquad \delta_{il} = 0 \ \text{if} \ l \notin L_i   (1)

where δ_il = 0 means that the bandwidth allocated to virtual network i on link l is 0. Then the objective of the overall problem P1 can be rewritten as:

\max \sum_{i=1}^{N} \sum_{l=1}^{L} U_i(b_{il}) = \sum_{i=1}^{N} U_i(b_{i1}) + \sum_{i=1}^{N} U_i(b_{i2}) + \cdots + \sum_{i=1}^{N} U_i(b_{iL})   (2)

Because the utility function U_i(x) is nonnegative over the range x ≥ 0, the overall optimization problem P1 can be split into L sub-optimization problems, one on each network port of the rack switches in the data center. When the bandwidth allocation among the virtual links of every virtual network is optimal on each physical link of the rack switches, the overall optimization problem P1 is solved simultaneously. This distributed sub-model is feasible in a large-scale data center network and can be realized easily.

By now we have demonstrated that the overall problem P1 can be split into a number of distributed sub-problems (equation (2)) which can be solved on the rack switches of VDCs through software-based solutions leveraging network virtualization over programmable switches. We can now discuss the details of the sub-model for a real system. First, we obtain the sub-optimization problem:

\max \sum_{i=1}^{n} \omega_i U_i(b_i) \qquad \text{s.t.} \quad b_i \ge 0, \quad \sum_{i=1}^{n} b_i \le C, \quad F(b) \le D   (3)

where ω_i is the weight of virtual network i, n is the number of virtual networks sharing the link, C is the capacity of the physical link under consideration, and b_i denotes the bandwidth allocated to virtual network i on link l. U_i(b_i) is the utility function discussed in the previous section. The vector b = (b_1, b_2, ..., b_n) is a solution of problem (3); because the feasible region is compact and the objective function is continuous, an optimal solution exists.
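To make the per-port decomposition concrete, the following sketch groups virtual links by the physical rack-switch port they traverse; each resulting group becomes one independent instance of sub-problem (3). The input format is hypothetical, not something the paper defines.

```python
from collections import defaultdict

def split_into_port_subproblems(virtual_links):
    """Group virtual links by the physical link (rack-switch port) they
    traverse. Because the objective in equation (2) is a sum over
    physical links and all constraints are per-link, each group can be
    optimized independently as one instance of sub-problem (3).

    `virtual_links` is a list of (vn_id, port_id, weight) tuples -- an
    illustrative layout only."""
    per_port = defaultdict(list)
    for vn_id, port_id, weight in virtual_links:
        per_port[port_id].append((vn_id, weight))
    return dict(per_port)

# Example: two virtual networks sharing rack-switch port "p1".
print(split_into_port_subproblems([("VN1", "p1", 1.0), ("VN2", "p1", 1.0)]))
```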
We form the Lagrangian of problem (3):

\mathcal{L}(b, \lambda, \mu) = \sum_{i=1}^{n} \omega_i U_i(b_i) + \lambda \Big( C - \sum_{i=1}^{n} b_i \Big) + \mu \big( D - F(b) \big)   (4)

Here the second term is a penalty for the physical bandwidth capacity constraint, and the third term is an additional penalty used for the fairness constraint on virtual bandwidth allocation in our model. We therefore need a quantitative description of the fairness of a bandwidth allocation, which can be defined as follows:

F(b) = \frac{n \, (b_1^2 + b_2^2 + \cdots + b_n^2)}{\big( \sum_{i=1}^{n} b_i \big)^2}   (5)

Equation (5) reflects the fairness of resource allocation in the system: F(·) is always at least 1, and the closer F(·) is to 1, the fairer the bandwidth allocation. However, the allocation should also follow the weights ω_i of the virtual networks. Obviously, if all virtual networks have the same weight, then F(·) = 1 corresponds to a completely fair allocation; but when the weights are unequal, F(·) = 1 no longer represents complete fairness, so we must take the weight of each virtual network into account. Substituting ω_i for b_i gives:

f = \frac{n \, (\omega_1^2 + \omega_2^2 + \cdots + \omega_n^2)}{\big( \sum_{i=1}^{n} \omega_i \big)^2}   (6)

Under normal conditions 1 ≤ f < +∞, and f = 1 means that every virtual network has the same weight. Thus the constant D in the optimization model (3) should be no less than f, because when the weights of the virtual networks are unequal, the completely fair allocation is obtained at D = f.
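As a minimal numerical sketch of one way to solve the per-port sub-problem (3) with the fairness index (5), assuming the logarithmic utility U_i(b) = log(1 + b) mentioned earlier and an off-the-shelf SLSQP solver (the paper does not prescribe a solution method, and the function names here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def fairness_index(b):
    """F(b) = n * sum(b_i^2) / (sum(b_i))^2 from equation (5):
    1 for a perfectly even allocation, larger as it becomes uneven."""
    b = np.asarray(b, dtype=float)
    s = b.sum()
    return len(b) * np.dot(b, b) / (s * s)

def weighted_reference(w):
    """f from equation (6): the fairness index of the weight vector, i.e.
    the value F takes at the weighted-fair allocation b_i proportional to w_i."""
    return fairness_index(w)

def solve_port_subproblem(w, C, D, eps=1e-6):
    """Solve one instance of sub-problem (3) with U_i(b) = log(1 + b):
    maximize sum_i w_i * log(1 + b_i) subject to b_i >= 0,
    sum_i b_i <= C and F(b) <= D."""
    w = np.asarray(w, dtype=float)
    n = len(w)
    objective = lambda b: -np.sum(w * np.log1p(b))            # negate to minimize
    constraints = [
        {"type": "ineq", "fun": lambda b: C - b.sum()},        # port capacity
        {"type": "ineq", "fun": lambda b: D - fairness_index(b + eps)},  # fairness
    ]
    b0 = np.full(n, C / n)                                     # start from an even split
    res = minimize(objective, b0, bounds=[(0.0, C)] * n,
                   constraints=constraints, method="SLSQP")
    return res.x

# Two equally weighted virtual networks on a 1 Gb/s port with D = 1.1,
# as in the evaluation below: the solution is close to (0.5, 0.5).
print(solve_port_subproblem(w=[1.0, 1.0], C=1.0, D=1.1))
```

Note that D must be chosen no smaller than weighted_reference(w), in line with the discussion of f above, or the fairness constraint would exclude even the weighted-fair allocation.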

Finally, we discuss the existence of an optimal solution of problem (3). Substituting (5) into (4):

\mathcal{L}(b, \lambda, \mu) = \sum_{i=1}^{n} \omega_i U_i(b_i) + \lambda \Big( C - \sum_{i=1}^{n} b_i \Big) + \mu \Big( D - \frac{n \, (b_1^2 + b_2^2 + \cdots + b_n^2)}{\big( \sum_{i=1}^{n} b_i \big)^2} \Big)   (7)

The Slater constraint qualification holds for problem (3) at the point b = 0, because then Σ_{i=1}^{n} b_i = 0 < C; this guarantees the existence of the Lagrange multipliers λ and µ. In other words, because the objective function is concave and the feasible region is convex, there exists at least one feasible vector b that is optimal.

4. Performance Evaluation

We implemented the distributed virtual bandwidth allocation scheme discussed in Section 3 using OpenFlow VM [11]. We carried out a simple but sufficiently persuasive experiment on the topology shown in Figure 1. In virtual link 1, VM1 sends TCP traffic to VM3; in virtual link 2, VM2 sends UDP traffic to VM4. Both traffic flows are generated by Iperf, and the physical link bandwidth is set to 1 Gbps.

Figure 1. Experimental topology

As is well known from the network congestion control problem [9], without any virtual bandwidth control mechanism, when the two virtual machines both do their best to send traffic so as to occupy the full bandwidth of the physical link, the non-TCP-friendly UDP traffic dominates and the TCP traffic is only rarely able to deliver a few packets successfully, as shown in Figure 2 (the first experiment on the topology of Figure 1). The main cause of this situation is TCP's congestion control mechanism: when congestion occurs, the sliding window limits the sending rate to a small value.

Figure 2. Without network virtualization (bandwidth of the TCP and UDP traffic over time on the shared 1 Gb/s link)

Then we built two virtual networks, one for each kind of traffic (VN1 for the TCP traffic and VN2 for the UDP traffic), and verified the fair virtual bandwidth allocation model discussed in this paper on the same topology (Figure 1). We set n = 2, D = 1.1, C = 1, and the weight of each virtual link in VN1 and VN2 to 1. Figure 3 shows the bandwidth consumption of the two non-friendly traffic flows after network slicing with the optimization model proposed in this paper. From the experimental results we can see that the presented model achieves fair bandwidth allocation between virtual links hosted on the same physical link.

Figure 3. With network virtualization and the fair bandwidth allocation model (bandwidth of VN1 and VN2 over time)

5. Conclusion

This paper introduced a utility-maximization model for fair bandwidth sharing between virtual networks in VDCs, providing a multi-tenant mechanism for network resources in cloud computing. In the model, every physical link is associated with a fairness-index constraint in the utility-maximization calculation. The goal is to limit the differences among the bandwidth allocations of the virtual links which share the same physical link. In future work we will examine the trade-off between fairness and bandwidth utilization under conditions where not all virtual networks need a bandwidth guarantee, i.e. dynamic bandwidth allocation that takes fairness into account.

References

[1] Armbrust, M., Fox, A., Griffith, R., Joseph, A., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., Zaharia, M. (2010). A View of Cloud Computing. Communications of the ACM, 53 (4) 50-58.

[2] Buyya, R., Yeo, C. S., Venugopal, S., Broberg, J., Brandic, I. (2009). Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility. Future Generation Computer Systems, 25 (6) 599-616.

[3] Xu, J., Zhao, M., Fortes, J., Carpenter, R., Yousif, M. (2007). On the Use of Fuzzy Modeling in Virtualized Data Center Management. ICAC'07: Proceedings of the 4th International Conference on Autonomic Computing, p. 395-400, Hong Kong, China. IEEE.

[4] VMware. (2012). VMware vCloud. http://www.vmware.com/technology/cloud-computing.html.

[5] Amazon. (2012). Amazon Elastic Compute Cloud (Amazon EC2). http://aws.amazon.com/ec2/.

[6] Yan, W., Zhou, L., Lin, C. (2011). VM Co-scheduling: Approximation of Optimal Co-Scheduling in Data Center. AINA'11: Proceedings of the 25th International Conference on Advanced Information Networking and Applications, p. 340-347, Biopolis, Singapore. IEEE.

[7] Khan, A., Zugenmaier, A., Jurca, D., Kellerer, W. (2012). Network Virtualization: A Hypervisor for the Internet? IEEE Communications Magazine, 50 (1) 136-143.

[8] Goiri, Í., Berral, J. L., Fitó, J. O., Julià, F., Nou, R., Guitart, J., Gavaldà, R., Torres, J. (2012). Energy-efficient and Multifaceted Resource Management for Profit-driven Virtualized Data Centers. Future Generation Computer Systems, 28 (5) 718-731.

[9] Zhang, P., Wang, H., Cheng, S. (2011). Shrinking MTU to Mitigate TCP Incast Throughput Collapse in Data Center Networks. CMC'11: Proceedings of the 3rd International Conference on Communications and Mobile Computing, p. 16-19.

[10] Shieh, A., Kandula, S., Greenberg, A., Kim, C. (2010). Seawall: Performance Isolation for Cloud Datacenter Networks. HotCloud'10: Proceedings of the 2nd USENIX Workshop on Hot Topics in Cloud Computing, p. 1-1, Boston, MA, USA. USENIX.

[11] Stanford Clean Slate Program. (2012). The OpenFlow Switch Consortium. http://www.openflowswitch.org/.