Journal of Communications Vol. 10, No. 12, December 2015

SELDAC: Software-Defined Storage based Efficient Load Distribution and Auto Scaling in Cloud Data Centers

Renuga Kanagavelu and Khin Mi Mi Aung
Data Center Technology Division, A*STAR Data Storage Institute, Singapore.
Email: {Renuga_K, Mi_Mi_AUNG}@dsi.a-star.edu.sg

Abstract: Cloud computing allows users to access a shared pool of configurable resources such as compute, network and storage on an on-demand basis from cloud Data Centers. Cloud storage provides storage resources to cloud users over the network. The massive growth of cloud data due to applications such as video streaming, big data processing, social networking, banking and scientific applications has fueled the storage demand. The increase in storage demand has resulted in the expansion of cloud storage with additional storage nodes, which in turn necessitates the migration of data across the storage nodes to keep them balanced. Efficient management and migration of terabytes or petabytes of such cloud data depends not only on storage availability but also on the networking cost, which is proportional to the distance and the bandwidth requirement. In this scenario, the software-defined environment opens up new opportunities to orchestrate network and storage resource control strategies to provide an effective solution. In this paper, we present a software-defined orchestration framework for efficient load distribution and auto scaling, and show that it improves the overall efficiency in terms of latency and retrieval cost. We demonstrate the effectiveness and the efficiency of the proposed mechanism using simulations.

Index Terms: Software-defined storage, software-defined networking, load distribution, orchestration, cloud data center

I. INTRODUCTION

Cloud Data Centers consisting of tens of thousands of servers with huge computing, storage, and network bandwidth resources are becoming common.
Virtualization technology enables cloud users to hire Virtual Machines (VMs) with the required computing, storage and bandwidth resources on an on-demand basis from cloud data centers instead of owning them. Cloud storage is the model of data storage in which data is stored on remote servers, maintained and managed by cloud storage service providers, and made available to cloud users over the network. Cloud storage is growing at a phenomenal rate due to big data, social networks and mobile devices. According to Gartner's report [1], consumer digital storage will grow to 4.1 zettabytes in 2016, with 36 percent of this storage in the cloud.

Manuscript received June 25, 2015; revised December 8, 2015. This work was supported by the Singapore A*STAR TSRP project 22804004. Corresponding author email: Renuga_K@dsi.a-star.edu.sg. doi:10.12720/jcm.10.12.1020-1026

The rising rate of cloud adoption has resulted in the expansion of cloud storage with additional nodes. As cloud services continue to grow, diverse I/O workloads can cause significant imbalance across the servers, causing higher delays to end users. As a result, terabytes or petabytes of cloud data are shuffled per day across the server nodes in cloud storage to balance the load. Similarly, during the expansion of cloud storage with additional storage nodes, data must be migrated across the nodes to keep them balanced. This results in a high reconfiguration cost of data migration due to bandwidth oversubscription in the Data Center network. In cloud computing, load balancing is required to distribute the dynamic workload evenly across all the nodes to achieve higher user satisfaction, a high resource utilization ratio, short response time and high throughput. Proper load balancing helps to achieve minimal resource consumption, scalability, fail-over and the avoidance of oversubscription. In cloud Data Centers, efficient migration of data depends not only on storage availability but also on network bandwidth.
Traditional storage systems use dedicated storage and network for specific applications to guarantee latency, throughput and Input/Output Operations Per Second (IOPS). But these resources are not evenly utilized. There is a need for an efficient load distribution mechanism that considers network knowledge along with storage capacity. The orchestration of both storage and network devices in cloud computing, with the overall resources controlled by a common cloud controller, is a promising research direction.

Cloud computing started with the virtualization of computing resources, followed by recent advances and rapid innovations in Software-Defined Networking (SDN) [2], [3], which aim to virtualize networking resources and separate the control plane from the data plane. Apart from that, the current trend towards Software-Defined Storage (SDS) aims to virtualize storage resources and enforce policies by dynamically adjusting the storage resources across multiple cloud users.

Our design, called Software-defined Storage based Efficient Load Distribution and Auto Scaling (SELDAC), is an instance of software-defined storage. It includes a load balancing module that acts as the software-defining part of the storage system and provides the latency/bandwidth and IOPS requirements of the system. It also contains a traffic
engineering module, which chooses an efficient path for migrating data to the destination server nodes.

The main contribution of this paper is the design of an orchestrated resource control framework with several attractive features: (1) balanced and efficient load distribution in a cloud storage environment; (2) redistribution of as few objects as possible in the event of storage node addition/failure (auto scaling) under varying traffic conditions; (3) no need to maintain a central mapping table, which makes search more efficient and saves space. The framework improves the overall efficiency of the system in terms of bandwidth and throughput.

The rest of this paper is organized as follows: Section II gives a review of the related work. Section III introduces the proposed orchestrated framework for load balancing in the cloud. Section IV describes the performance analysis of the proposed framework. Finally, concluding remarks are made in Section V.

II. BACKGROUND AND RELATED WORK

In this section, we first briefly describe the software-defined networking concept and the OpenFlow protocol, and then review previous load balancing algorithms related to this work and discuss their merits and limitations.

A. Software-Defined Networking

Recently, Software-Defined Networking (SDN) architectures, in which the control plane is decoupled from the data plane, have become popular as users can intelligently control routing and resource usage mechanisms. The SDN concept was introduced by academia [2], [3] and has now become an industry standard [4]. By employing a central server, which uses the knowledge of switch routing information and network status, SDN manages routing algorithms separately while keeping data flows running on original network paths. Hence, SDN allows much easier modification of switch algorithms, as well as faster control over network state changes. Thus, it provides agile traffic control and management. OpenFlow, proposed by McKeown et al.
[3], is a communication interface between SDN controllers and network devices, and is considered the most popular enabler for SDN, as shown in Fig. 1.

Fig. 1. SDN architecture.

OpenFlow is an open standard which enables users to implement experimental routing protocols via software. Every OpenFlow switch maintains a flow table with entries for flows. An entry in the table specifies a few packet fields and an associated action (e.g. forward, drop, modify header) to be carried out in the event of a match with an incoming packet. When a switch is unable to find a match in the flow table, the packet is forwarded to the controller to make the routing decision. After deciding how to route the new flow, the controller installs a new flow entry at the required switches, so that the desired actions can be performed on the new flow.

B. Load Balancing Algorithms

Load balancing is required in the cloud computing environment to achieve short response time and high throughput. Various load balancing approaches have been proposed. Commonly used static algorithms are Round Robin (RR) and Weighted Round Robin (WRR); static methods do not consider run-time conditions. Several algorithms have been proposed with different goals such as energy saving, time saving, network bandwidth utilization and CPU load balancing. In [5], a virtual cluster management scheme has been proposed to guarantee the bandwidth requirement. In [6], a network I/O load balancing strategy has been proposed to efficiently utilize the network resources. In [7], a data placement algorithm for a cloud storage system has been proposed. It selects the storage server based on the storage capacity. However, it does not consider the network utilization when selecting the storage server.
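The flow-table lookup behaviour described in Section II-A can be modelled minimally. The sketch below is a toy match function in Python; the field names and actions are illustrative stand-ins, not the OpenFlow wire format:

```python
def match_flow(flow_table, packet):
    """Return the action of the first entry whose match fields all equal the
    corresponding packet fields, or None on a table miss (in OpenFlow the
    packet would then be sent to the controller for a routing decision)."""
    for entry in flow_table:
        if all(packet.get(field) == value for field, value in entry["match"].items()):
            return entry["action"]
    return None  # table miss: punt to the controller

# Hypothetical two-entry flow table.
flow_table = [
    {"match": {"dst_ip": "10.0.0.2"}, "action": ("forward", 2)},  # out port 2
    {"match": {"dst_ip": "10.0.0.3"}, "action": ("drop",)},
]
```

On a miss, the controller would compute a route and install a new entry, so subsequent packets of the flow match in the switch without controller involvement.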
Our work differs from the above work by considering network knowledge along with storage capacity for efficient load distribution in the cloud Data Center.

III. SYSTEM ARCHITECTURE

A. SELDAC Functional Components

Fig. 2 shows the functional components of our proposed SELDAC framework. The SELDAC framework consists of three main components: the access tier, the cloud orchestrator, and storage servers connected to storage pools.

Access tier: The access tier consists of several clients or applications that are connected to a server which acts as an entry point for the framework. The host system is serviced by the cloud orchestrator, which is connected to one or more storage servers.

Cloud orchestrator: This is the centralised controller (control plane), which plays a vital role in orchestrating network and storage control strategies. The function of this orchestrator is to distribute the
requests from the cloud users to the appropriate storage servers in a balanced manner. We briefly describe below the functions of each module in the cloud orchestrator, as shown in Fig. 3.

Fig. 2. SELDAC framework.

Fig. 3. Cloud orchestrator functional modules.

The Resource Configuration module is responsible for creating volumes based on the replication policies on the storage systems and allocating these volumes to the virtual machines (VMs). The Storage Monitor component monitors the number of Read and Write requests, the latencies involved and the usage of the storage volumes. Based on the Storage Monitor values, the Load Distribution module selects the storage servers for request distribution. The Storage Information module keeps track of information about the different types of storage devices, volume information and the available storage capacity. The Network Monitor component, which leverages Software-Defined Networking (SDN), monitors the bandwidth availability across the network links. The responsibility of this module is to collect, store, and query statistics from all OpenFlow switches; these statistics are used by the Path Computation module to compute and compare the load on various links. The statistics are polled at fixed intervals and stored in the cloud orchestrator as snapshot objects. Every snapshot is assigned a sequence number, which increments by one after each interval. The two most recent snapshots are maintained in memory at any instant of time. The Network Information module keeps track of all the servers detected on the network. It stores the MAC address and IP address of the interface of each server connected to the network, and the OpenFlow switch id and port number each server is connected to. The Path Computation module is also responsible for finding an optimal path with enough available bandwidth for routing the migrated data when a new storage node is added to the cloud storage.
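A minimal sketch of how the Network Monitor's two-snapshot statistics and the Path Computation module's least-loaded path choice could fit together is given below. The class and function names are our own illustrations; the paper does not publish code:

```python
from collections import deque

class LinkStatsStore:
    """Keeps the two most recent per-link byte-counter snapshots, tagged with
    an incrementing sequence number, as described for the Network Monitor."""
    def __init__(self, poll_interval_s):
        self.poll_interval_s = poll_interval_s
        self.snapshots = deque(maxlen=2)  # only the two latest are retained
        self.seq = 0

    def record(self, byte_counters):
        self.snapshots.append((self.seq, dict(byte_counters)))
        self.seq += 1

    def link_load_bps(self, link):
        """Link load = byte-counter delta over the polling interval, in bps."""
        if len(self.snapshots) < 2:
            return 0.0
        (_, old), (_, new) = self.snapshots
        return 8.0 * (new[link] - old[link]) / self.poll_interval_s

def select_path(candidate_paths, load_bps, capacity_bps):
    """Least-loaded path: cost = sum of link utilizations L_k/C_k along the
    path; ties are broken by preferring the larger residual bandwidth."""
    def key(path):
        cost = sum(load_bps[k] / capacity_bps[k] for k in path)
        residual = min(capacity_bps[k] - load_bps[k] for k in path)
        return (cost, -residual)
    return min(candidate_paths, key=key)
```

In practice the candidate set would be the shortest-hop paths between the two end hosts, with the per-link loads refreshed from the latest snapshot pair each polling interval.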
As current cloud Data Centers adopt multi-rooted tree architectures [8], [9] with multiple paths between the servers, it is necessary to find the optimal routing path with enough available bandwidth among the multiple paths available between two end hosts. The Path Computation module computes the least-loaded shortest candidate paths between any pair of end hosts based on the statistics collected from the switches by the Network Monitor module.

The Load Distribution module makes the decision on the selection of the storage servers and the optimized network paths between the storage servers based on the monitored values. Our algorithm, named weighted bit-window based load distribution, is used in this module to distribute the load across the storage servers in a balanced manner; it is described in the next section.

Storage server: Each storage server (data plane) can be connected to one or more heterogeneous storage devices. Each storage server has the necessary storage driver to communicate with its storage devices. The storage servers send keep-alive messages to the orchestrator at frequent intervals so that the orchestrator knows how many servers are available for client requests. They also report changes in the storage capacity of the storage pools they are attached to. The storage servers are grouped according to their storage capacity; the purpose of this is to assign large requests to servers with large capacity, and so on.

B. The Proposed Load Distribution Algorithm

As stated earlier, load distribution is an important problem that needs to be addressed in the growing cloud storage environment. We developed a new algorithm, named weighted bit-window based load distribution with routing, which works in two phases. For example, a storage server with more capacity tends to hold a larger number of objects than a storage server with low capacity. This helps to reduce access time and congestion.
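The capacity-proportional placement just described can be realized by weighting servers with logical ids, so that a server appears in the mapping once per unit of capacity. This is a minimal sketch under our own naming (the `unit` parameter is an assumed granularity, not from the paper):

```python
def logical_ids(capacities, unit):
    """Expand physical servers into logical server ids in proportion to
    capacity: a server with capacity 2*unit gets two ids, and so on.
    Returns a list mapping logical id -> physical server index."""
    mapping = []
    for server, cap in enumerate(capacities):
        mapping.extend([server] * max(1, round(cap / unit)))
    return mapping
```

For example, `logical_ids([100, 200, 100], unit=100)` yields `[0, 1, 1, 2]`: server 1, with twice the capacity, owns two logical ids and therefore attracts roughly twice the objects.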
In Phase I, when the orchestrator receives a request from the access tier, it first attempts to find a suitable group of storage servers with varying storage capacity to host the request, based on the information obtained from the Storage Monitor. We consider the number of active storage servers and the storage resource usage within the servers as the main factors in this phase. Our weighted bit-window algorithm then distributes the load across the servers in a balanced manner.

In Phase II, the impact of the load distribution on network traffic is considered, under the assumption of single shortest-path routing. We note that network congestion is largely influenced by the data migration triggered when a new storage server is added during cloud storage expansion. Based on the path load statistics obtained from the Network Monitor component, the least-loaded path (the path with the minimum load among all the candidate paths) is selected for data migration in this phase. The cost of a path is calculated as the sum of the utilizations of the links along the path. If there is more than one path with the same minimum cost, the path with the maximum bandwidth is preferred. We reduce network congestion by taking the load into consideration.

1) Storage server grouping and load distribution

In Phase I, the incoming requests are mapped to the appropriate server group using our weighted bit-window load distribution method. Once a storage server is chosen, the request is placed on that server. In this section, we briefly explain the working of the weighted bit-window load distribution algorithm and explain how it supports operations such as server node addition during cloud storage expansion and storage node deletion during node failure.

Let n be the number of existing storage servers, with indices from 0 to n-1. Let X be the total number of requests. We group the servers according to the available resource capacity c. These servers are referred to as active servers.
The server indices and the corresponding server IP addresses are maintained in a table at the orchestrator. Cloud client requests are translated into an h-bit identifier using a base hash function, SHA-1. We call this identifier the request_id. The hashed h-bit request_id is a fixed-size string of h bits. The h-bit identifier is divided into a number of bit windows of size k = ⌈log2(n)⌉. While server_ids 0 through n-1 are said to be active servers, the remaining server_ids n through 2^k - 1 are called inactive server_ids and correspond to inactive storage servers. The size of the bit window depends on the number of storage servers and varies according to the addition or deletion of storage servers. The bit windows are labeled starting from 0 from the right end, denoted as BW0, BW1, ..., BWv-1, where v is the maximum number of windows used. The bit windows of a request_id are examined one by one starting from BW0. If the value of BW0 is valid, the request_id is placed on the storage server whose index is equal to that value. If the value of BW0 is not valid, we continue to examine BW1, BW2, ..., BWv-1 until an active storage server is found. If no valid storage server index is found among the v windows, the request_id is placed at the storage server whose index is obtained by inverting the most significant bit (MSB) of BW0.

To account for varying storage capacity, we use logical server ids. A server with capacity c will have one id, whereas a server with capacity 2c will have two ids, and so on. This method does not require any table to maintain the relation between the request_id and the server index, and thus saves the memory that would be needed to store such a table. As demand grows, the system is scaled with a new server automatically. When a new server node is added, our algorithm migrates only a small number of objects across the other server nodes to keep them balanced.

Fig. 4. Illustration of the Bit Window algorithm (N=6). (a) BW0 is searched: invalid. (b) BW1 is searched: valid. (c) The request_id is assigned to server node 3.

Fig.
4 illustrates the weighted bit-window load balancing algorithm with the number of available servers N = 6. Server_ids 0 through 5 are active servers. The request_id generated by SHA-1 ends with the bits ...011 110, as shown in Fig. 4(a). The size of the bit window is k = 3. The value of bit window BW0 is 6 (110), which is not a valid node index, so the search continues with BW1, as shown in Fig. 4(b). The value of BW1 is 3 (011), which is a valid index, and hence the request_id is assigned to node 3, as shown in Fig. 4(c).

Let us now analyze how the objects remain evenly distributed in the case of server node addition/deletion.

Server node addition: As demand grows, cloud storage is expanded by adding more server nodes. When a new node joins (assigned index n) and this addition does not change the bit-window size, approximately c*X/n requests will be migrated to the new node n. If the new server node addition results in an expansion of the bit window, approximately c*X/2n requests will be migrated.
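The bit-window lookup and the node-addition behaviour just described can be sketched end to end. This is our own illustrative Python, not released SELDAC code: SHA-1 as the base hash and the MSB-inversion fallback follow the text, while capacity weighting via logical server ids is omitted for brevity:

```python
import hashlib
import math

H_BITS = 160  # SHA-1 yields a 160-bit request_id

def assign(key: bytes, n: int) -> int:
    """Scan k-bit windows of the SHA-1 digest from the right (BW0 first) and
    return the first value that is a valid (active) server index < n.
    If every window is invalid, invert the MSB of BW0; any invalid value has
    its MSB set (since n > 2^(k-1)), so the result is always a valid index."""
    k = max(1, math.ceil(math.log2(n)))
    digest = int.from_bytes(hashlib.sha1(key).digest(), "big")
    mask = (1 << k) - 1
    for w in range(H_BITS // k):
        val = (digest >> (w * k)) & mask
        if val < n:
            return val
    return (digest & mask) ^ (1 << (k - 1))  # MSB-inversion fallback

# Node addition without a bit-window change: going from 31 to 32 servers
# keeps k = 5, so only keys that now stop at the new index 31 move,
# i.e. roughly 1/32 of all objects.
keys = [str(i).encode() for i in range(5000)]
moved = sum(assign(key, 31) != assign(key, 32) for key in keys)
```

For N = 6 as in Fig. 4, k = 3 and any window value of 6 or 7 is skipped, matching the worked example above.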
Server node deletion: When a node with index n is deleted due to node failure, and this deletion does not change the size of the bit window, then approximately c*X/n requests will be migrated to the other server nodes.

Fig. 5. Cloud data center network topology.

2) Path selection and routing

We consider a multi-rooted Data Center network as shown in Fig. 5. The network is comprised of switches interconnected in three layers: Top of Rack (ToR), aggregate, and core. When a new storage server is added to the Cloud Data Center due to an increase in demand, a migration operation is initiated to migrate requests from the existing storage servers to the newly added storage server, as shown in Fig. 5. We develop an adaptive routing method which selects appropriate routes for the incoming requests based on the current network state. Based on the load statistics at the switches and the loads on the paths, the minimum-cost path is selected. The Path Computation module, as described earlier, chooses the least-loaded path, i.e. the path with the minimum load among all candidate paths. The shortest-hop paths are considered as candidate paths; we note that there exist many shortest-hop paths in a Data Center network. The cost of a path is calculated as the sum of the utilizations of the links along the path. If there is more than one path with the same minimum cost, the path with the maximum bandwidth is preferred. Each link k has a link load of Lk bps, computed from the statistics, and a link capacity of Ck bps. Mathematically, the cost of a route is given as

route_cost = Σ (Lk / Ck), summed over all links k along the path.

After selecting the minimal-cost path, the Load Distribution module migrates the requests to the newly joined server node.

IV. PERFORMANCE STUDY

In this section, we present the simulation results of the weighted bit-window based load distribution algorithm. The ns-3 simulator [10], [11] is used for simulation.

A. Simulation Setup

We considered storage servers of varying capacity and objects of varying sizes. To show that the algorithm can meet its design goals, we considered two scenarios. In the first scenario, we want to show that the object distribution and average retrieval latency are fairly even although servers may be of varying capacity. In this scenario, we create 24 server nodes, a number somewhere in between two bit-window sizes, and 1,000 objects. The server nodes have capacities varying from 50 to 470 GB, while the objects have sizes between 4 KB and 10 GB.

In the second scenario, we first created 31 server nodes and 50,000 objects. The capacity of the servers ranged between 6,000 and 14,000 GB, while the object sizes remained between 1 and 10 GB. We then added another node to determine the migration of objects when there is no change to the bit-window size.

B. Load Distribution

We use load distribution as the performance metric to evaluate the effectiveness of our algorithm. Fig. 6 shows the load distribution in the first scenario with 24 server nodes of varying capacity. It can be observed that there is an even distribution of objects according to the varying server capacity. The servers with lower capacity tend to hold fewer objects and the servers with higher capacity tend to hold more objects.

Fig. 6. Load distribution across servers of varying capacity.

Fig. 7. Average retrieval cost.

C. Average Retrieval Cost

We evaluate the performance of our algorithm in terms of average retrieval cost, which we define as the sum of the storage access time and latency. In Fig. 7, we plot the average retrieval cost for the objects across the server nodes. We can see that the average
retrieval cost curve remains flat across the server nodes despite the variation in the number of objects on each node. This shows that the weighted data placement algorithm is capable of distributing objects across servers of varying capacity while maintaining a similar retrieval cost across all servers.

D. Load Migration

Fig. 8 and Fig. 9 show the distribution and migration of objects when server nodes are added to the environment. In Fig. 8, we plot the number of objects that are distributed across 31 server nodes to show the initial distribution. The object distribution across the nodes varies with the server node capacity. As described previously, the servers with lower capacity tend to hold fewer objects and the servers with higher capacity tend to hold more objects. We then consider the scenario of adding a new server to the network. When a new server node is added and the number of server nodes increases to 32, some of the objects are migrated from the existing nodes to server node 32. Fig. 9 shows the number of objects that are migrated out of each of the previous 31 nodes and moved to the newly added node. We can see that the number of objects that need to be migrated from the existing nodes is quite small. When there is no change in the bit-window size, the number of objects that need to be migrated whenever a new node is added is fairly insignificant.

Fig. 8. Load distribution across 31 servers.

Fig. 9. Distribution and migration of objects when a server node is added without a bit-window size change.

E. Impact of Window Size

We evaluate the impact of the bit-window size on load distribution. From Fig. 9, we understand that when there is no change in the bit-window size during new server addition, the number of objects that need to be migrated is small. To study the impact of a new server node addition that causes a change in the bit-window size, another node is added to the system, subsequently bringing the total number of server nodes to 33. Fig. 10 shows the distribution and migration of objects when the 33rd server node is added, which causes a change in the bit-window size. The change in the bit-window size means that objects that were previously hashed into a node may now be hashed into a different node. To reduce the number of migrations, if an object is currently on a node that is adjacent to the new node that it is supposed to be hashed into, the object is not migrated and can remain on the current node.

Fig. 10. Distribution and migration of objects when a server node is added causing a bit-window size change.

V. CONCLUSIONS

In this paper, we presented the design, implementation and evaluation of SELDAC, a software-defined orchestration framework that can adapt to changes during cloud storage expansion. We developed an efficient weighted bit-window based load distribution algorithm for the cloud Data Center environment that ensures an even distribution of objects in proportion to storage server resource capacity and network bandwidth availability. It supports storage server node additions and deletions with a low number of object migrations. We studied the performance of the proposed mechanism through simulation results.

ACKNOWLEDGMENT

This work was supported mainly by the Singapore A*STAR TSRP project 22804004.

REFERENCES

[1] Gartner Says That Consumers Will Store More Than a Third of Their Digital Content in the Cloud by 2016. (2012). [Online]. Available: http://www.gartner.com/newsroom/id/206025
[2] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, et al., "OpenFlow: Enabling innovation in campus networks," in Proc. SIGCOMM, 2008.
[3] A. Tavakoli, M. Casado, and T. Koponen, "Applying NOX to the datacenter," in Proc. 8th ACM Workshop on Hot Topics in Networks, 2009.
[4] Open Networking Foundation. (2013). [Online]. Available: https://www.opennetworking.org/
[5] C. Guo, G. Lu, H. Wang, S. Yang, C. Kong, et al., "SecondNet: A data center network virtualization architecture with bandwidth guarantees," in Proc. 6th International Conference on Emerging Networking Experiments and Technologies, USA, 2010.
[6] W. Song, F. Wang, X. H. Sri, H. Lin, and Z. W. Wang, "Network I/O load based virtual machine placement algorithm in HPC Cloud," China Science: Information Science, vol. 42, pp. 290-302, 2012.
[7] J. Myint and T. Niang, "A data placement algorithm with binary weighted tree on PC cluster-based cloud storage system," in Proc. IEEE International Conference on Cloud and Service Computing, 2011.
[8] R. N. Mysore, A. Pamboris, N. Farrington, N. Huang, et al., "PortLand: A scalable fault-tolerant layer 2 data center network fabric," in Proc. ACM Conference on Data Communication, New York, NY, USA, 2009, pp. 39-50.
[9] A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, et al., "VL2: A scalable and flexible datacenter network," in Proc. ACM SIGCOMM 2009 Conference on Data Communication, New York, NY, USA, 2009, pp. 51-62.
[10] T. R. Henderson, M. Lacage, and G. F. Riley, "Network simulations with the ns-3 simulator," in Proc. ACM SIGCOMM, 2008.
[11] The ns-3 Network Simulator. [Online]. Available: http://www.nsnam.org/

Renuga Kanagavelu received a PhD degree in Computer Engineering from Nanyang Technological University in 2015. She is currently a Senior Research Engineer with A*STAR, Data Storage Institute, Singapore. Her research interests include software-defined networks, Data Center and Network Storage Technologies.

Khin Mi Mi Aung received a PhD degree in Computer Engineering from Korea Aerospace University in 2006. She is currently a Scientist with A*STAR, Data Storage Institute, Singapore.
Her research interests include Data Center and Network Storage Technologies.