Software Defined Networking-based Traffic Engineering for Data Center Networks

Yoonseon Han, Sin-seok Seo, Jian Li, Jonghwan Hyun, Jae-Hyoung Yoo, James Won-Ki Hong
Division of IT Convergence Engineering, POSTECH, Korea
Department of Computer Science and Engineering, POSTECH, Korea

Abstract: Today's Data Center Networks (DCNs) contain tens of thousands of hosts with significant bandwidth requirements as the needs for cloud computing, multimedia content, and big data analysis increase. However, existing DCN technologies suffer from the following two problems. First, the power consumption of a DCN is constant regardless of the utilization of network resources. Second, due to a static routing scheme, a few links in a DCN experience congestion while the majority of links remain underutilized. To overcome these limitations of current DCNs, we propose Software Defined Networking (SDN)-based Traffic Engineering (TE), which consists of optimal topology composition and traffic load balancing. We can reduce the power consumption of the DCN by turning off links and switches that are not included in the optimal subset topology. To diminish network congestion, the traffic load balancing distributes ever-changing traffic demands over the found optimal subset topology. Simulation results revealed that the proposed SDN-based TE approach can reduce the power consumption of a DCN by about 41% and the Maximum Link Utilization (MLU) by about 6% on average in comparison with a static routing scheme.

Index Terms: Data Center Network, Traffic Engineering, Software Defined Networking

I. INTRODUCTION

A Data Center (DC) is a facility used to house computer servers or hosts, and a Data Center Network (DCN) interconnects these hosts using dedicated links and switches. Today's DCs may contain tens of thousands of hosts with significant bandwidth requirements as the needs for cloud computing, multimedia content, and big data analysis increase [1]. Current DCNs suffer from high operational expenditures and frequent link congestion. First, the amount of power consumed by a current DCN is constant regardless of the usage ratio of network resources, while the network utilization fluctuates depending on the time of day. The varying traffic demands in the DCN can be satisfied by a subset of the network resources most of the time [2]. As a result, the operating costs of the DCN are higher than what is actually required. Second, due to a static flow path selection scheme, a few specific links frequently experience congestion while the majority of links are underutilized [3]. Static ECMP [4], most widely used for core routers, does not consider the dynamic nature of DCN traffic characteristics, and this leads to congestion of a specific link even in a situation where almost all other links are underutilized. Considering these problems of current DCN technologies, this paper proposes a dynamic Traffic Engineering (TE) system for a DCN using Software Defined Networking (SDN) technologies. With SDN, a network administrator can control the entire network in a vendor-independent manner from a single logical point (i.e., the SDN controller), which simplifies network design and operation. TE is defined as network engineering dealing with the issue of performance evaluation and performance optimization of operational IP networks [5]. Typical objectives of TE include balancing network load and minimizing maximum link utilization.
Many studies have been proposed in the area of TE, and they can be classified into three categories according to their routing enforcement mechanisms: MultiProtocol Label Switching (MPLS)-based [6], [7], Internet Protocol (IP)-based [8], [9], or SDN-based [2], [10]-[15]. Our dynamic TE proposal for a DCN takes the SDN-based approach. The benefits of SDN-based TE approaches for a DCN are summarized as follows.
- It is relatively easy to obtain traffic and failure information via a (logically) centralized SDN controller.
- Any flow format with arbitrary granularity can be exploited for TE.
- It is easy to apply TE results to switches in a DCN by modifying flow tables within the switches.
The proposed TE system consists of two procedures: optimal topology composition and traffic load balancing. To reduce the power consumption of a DCN, the optimal topology composition finds a subset DCN topology that can accommodate the traffic demands expected at the moment. We can reduce the power consumption of the DCN by turning off links and switches that are not included in the optimal subset topology. To diminish network congestion, the traffic load balancing distributes ever-changing traffic demands over the found optimal subset topology. This traffic distribution makes it possible to accommodate more traffic without causing network congestion. To validate the proposed dynamic TE system, we have implemented a prototype of the proposed TE system for a DCN. The APIs provided by the SDN controller were used for applying the outputs of the proposed TE system to the DCN. We also evaluated the performance of the proposed TE system through simulations in terms of the power saving ratio and the Maximum Link Utilization (MLU).
II. RELATED WORK

Many TE techniques for the traditional Internet have been proposed, including [4], [6]-[9], but TE for DCNs is still at a preliminary stage [16]. In the meantime, the need for large-scale DCNs is increasing sharply, as is the need for efficient utilization of DCN resources. Recently, for that reason, several studies on TE for DCNs have been published, including [2], [10]-[12], [15], [16]. We briefly introduce the major TE studies that are most closely related to our proposed approach.

Hedera [11], a dynamic flow routing system for a multi-stage switching fabric, utilizes a centralized approach to route elephant flows exceeding 10 percent of the host Network Interface Card (NIC) bandwidth, while it utilizes static ECMP for the remaining short-lived mice flows. The purpose of Hedera is to maximize the bisection bandwidth of a DCN by appropriately placing the elephant flows among multiple alternative paths; estimated flow demands of the elephant flows are used for the placement. The main limitation of Hedera is that it lets static ECMP route the mice flows, which comprise more than 80% of actual DCN traffic [3], [17].

Benson et al. proposed MicroTE [10], a centralized system that adapts to traffic variations by leveraging the short-term predictability of DCN traffic to achieve fine-grained TE. It constantly monitors traffic variations, determines which Top-of-Rack (ToR) pairs have predictable traffic, and assigns the predicted traffic to the optimal path. Similar to Hedera, the remaining unpredictable traffic is then routed using weighted ECMP, where the weights reflect the available capacity after the predictable traffic has been assigned. The major shortcomings of MicroTE are twofold: a) it requires host modifications to collect fine-grained traffic data, and b) it lacks scalability due to its extremely short execution cycle.

Penalizing Exponential Flow-spliTting (PEFT) was proposed by Xu et al. [18] to achieve optimal TE for wide-area ISP networks. Switches running PEFT make forwarding and traffic splitting decisions locally and independently of each other. In addition, packets, even within the same flow, can be forwarded through a set of unequal-cost paths by exponentially penalizing higher-cost paths. Later, Tso et al. [16] modified PEFT for DCN TE into a reactive online version to cope with the dynamic and unpredictable nature of DCN traffic. However, PEFT imposes a heavy load on switches because each switch in PEFT has to measure the traffic volume incoming to and outgoing from its ports, and it has to calculate optimal routing paths using the measured traffic data. Another limitation of PEFT is that the routing decisions of switches are not globally optimal solutions. Finally, PEFT delivers packets of the same flow through a set of unequal-cost paths by splitting them, and this causes a packet reordering problem.

ElasticTree [2] is a centralized system for dynamically adapting the energy consumption of a DCN by turning off the links and switches that are not necessary to meet the varying traffic demands at the time.
The fact that traffic can be satisfied by a subset of the entire network links and switches most of the time makes ElasticTree feasible. To find the minimum subset topology, i.e., the essential links and switches, ElasticTree proposes three algorithms with different optimality and scalability: a Linear Programming (LP)-based formal model, a greedy bin-packing heuristic, and a topology-aware heuristic. The main problem of ElasticTree is that its flow allocation algorithms are likely to cause severe link congestion because they allocate flows to links while maximizing the utilization of link capacity.

III. TRAFFIC ENGINEERING FOR DCNS

In this section, we present the proposed dynamic TE system for DCNs in detail. First, we explain the overall system architecture of the proposed TE system. Thereafter, detailed algorithms for optimal topology composition and traffic load balancing are described.

A. Traffic Engineering System Architecture

Fig. 1 shows the overall architecture of the proposed dynamic TE system, which has three components: a Data Center Network (DCN), an SDN Controller, and a TE Manager. The DCN, which contains many servers and SDN switches, is the target network of our TE system. The SDN switches in the DCN report their traffic and failure status to the SDN controller through the control-data-plane interface (e.g., OpenFlow). The SDN controller aggregates and summarizes the collected information. The TE manager takes the summarized traffic and failure information to dynamically make an appropriate TE decision at the time. The SDN controller also changes the switching behavior of the SDN switches by updating their flow tables, and turns switches and links in the DCN on or off to apply the TE decision that minimizes power consumption and link congestion.

Fig. 1. Traffic engineering system architecture.

The TE manager, the core component of our dynamic TE system, periodically takes traffic and failure information from the SDN controller, makes a TE decision using the information, and notifies the decision results to the SDN controller. The TE manager consists of two major procedures: optimal topology composition and traffic load balancing (see Fig. 2). Optimal topology composition periodically finds a minimum subset of links and switches, within the whole DCN topology, that can accommodate the hourly estimated traffic demands at the moment. Traffic load balancing periodically distributes ever-changing traffic demands over the found optimal topology to minimize the Maximum Link Utilization (MLU). Note that traffic load balancing is repeatedly executed on a time scale of minutes, whereas optimal topology composition is executed on a time scale of hours.

Fig. 2. Traffic engineering manager.
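To make this control loop concrete, the following is a minimal Python sketch of how the TE manager's long-term and short-term cycles could be driven. The ControllerClient class, its method names, the compose_topology and balance_load callables, and the default cycle lengths are illustrative assumptions, not the prototype's actual API.

```python
import time

# Hypothetical northbound client for the SDN controller; the method names
# below are assumptions for illustration only.
class ControllerClient:
    def get_traffic_matrix(self):
        """Return the latest per-flow demand estimates collected from the switches."""
        raise NotImplementedError

    def set_port_state(self, switch, port, enabled):
        """Power a switch port (link) on or off."""
        raise NotImplementedError

    def push_flow_entries(self, flow_entries):
        """Install flow-table entries that realize the computed paths."""
        raise NotImplementedError

def te_manager_loop(controller, compose_topology, balance_load,
                    long_cycle=3600, short_cycle=60):
    """Drive the long-term (topology composition) and short-term (load
    balancing) cycles of the TE manager."""
    last_compose = 0.0
    subset_topology = None
    while True:
        tm = controller.get_traffic_matrix()
        now = time.time()
        if subset_topology is None or now - last_compose >= long_cycle:
            # Long-term cycle: pick the minimum subset topology and
            # switch unused links and switches off.
            subset_topology, port_states = compose_topology(tm)
            for switch, port, enabled in port_states:
                controller.set_port_state(switch, port, enabled)
            last_compose = now
        # Short-term cycle: spread the current demands over the subset topology.
        flow_entries = balance_load(tm, subset_topology)
        controller.push_flow_entries(flow_entries)
        time.sleep(short_cycle)
```

In this sketch, compose_topology and balance_load stand in for the two procedures described below; separating them keeps the expensive optimization on the slow path while flow-table updates stay on the fast path.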
B. Optimal Topology Composition

The traffic demands in the DCN vary over time. However, the power consumption of the DCN stays constant at a higher level than what is actually required. To reduce the power consumption of a DCN, Heller et al. [2] proposed ElasticTree. Their proposal contains both an MCF-based formal model using linear programming and a greedy bin-packing heuristic that quickly produces a near-optimal solution. The Multi-Commodity Flow (MCF) problem is a network flow problem with multiple flow demands between different source and target nodes. In this paper, we propose a quicker algorithm to find the minimum subset topology using path-based MCF, which improves on the link-based MCF of ElasticTree. We also propose a refined greedy bin-packing heuristic for minimum subset topology composition corresponding to the path-based MCF.

1) Subset Topology Composition using Path-based MCF: To reduce the computation time, we propose a minimum subset topology composition model based on path-based MCF, which requires significantly fewer decision variables and constraints than the link-based MCF model [2]. The minimum subset topology composition model using path-based MCF is defined as follows.

Input
- Network topology: $G(V, E)$
- Traffic matrix: $T$
- Link capacity: $\forall (u,v) \in E,\ c(u,v)$
- Set of considered paths of flow $i$: $\forall i,\ P_{T_i} = \{p_{i,1}, \ldots, p_{i,j}, \ldots, p_{i,l}\}$
- Subset of considered paths that contain a link $(u,v)$: $\forall i,\ P_i^{(u,v)} \subseteq P_{T_i}$
- Set of all switches: $S \subseteq V$
- Set of nodes connected to a switch: $\forall u \in S,\ V_u$
- Power cost of links: $\forall (u,v) \in E,\ a(u,v)$
- Power cost of switches: $\forall u \in S,\ b(u)$

Decision Variables
- Flow along each path: $\forall i,\ \forall p \in P_{T_i},\ f_i(p)$
- Binary variable indicating whether a link $(u,v)$ is powered on: $\forall (u,v) \in E,\ X_{u,v}$
- Binary variable indicating whether a switch $u$ is powered on: $\forall u \in S,\ Y_u$

Objective
Minimize $\sum_{(u,v) \in E} a(u,v)\, X_{u,v} + \sum_{u \in S} b(u)\, Y_u$

Constraints
- Capacity limitation: $\forall (u,v) \in E:\ \sum_{i=1}^{k} \sum_{p \in P_i^{(u,v)}} f_i(p) \le X_{u,v}\, c(u,v)$
- Demand satisfaction: $\forall i:\ \sum_{p \in P_{T_i}} f_i(p) = d_i$
- Bidirectional link power: $\forall (u,v) \in E:\ X_{u,v} = X_{v,u}$
- Switch-to-link correlation: $\forall u \in S,\ \forall w \in V_u:\ X_{u,w} = X_{w,u} \le Y_u$
- Link-to-switch correlation: $\forall u \in S:\ Y_u \le \sum_{w \in V_u} X_{w,u} = \sum_{w \in V_u} X_{u,w}$

The decision variables $f_i(p)$ represent the allocation of flow to each path $p$. In order to allocate the flows in $T$ to the network topology $G(V, E)$, the path-based MCF model finds values of $f_i(p)$ that satisfy both the capacity limitation and the demand satisfaction constraints. The binary decision variables $X_{u,v}$ and $Y_u$ indicate whether a link $(u,v)$ or a switch $u$ is powered on, respectively. With these binary decision variables, we can define an objective function that represents the minimization of the power cost of a DCN. The capacity limitation constraints force the sum of flows along each link $(u,v)$ not to exceed the link capacity $c(u,v)$ and ensure that flows are allocated only to links that are powered on. The demand satisfaction constraints ensure that each source $s_i$ or target $t_i$ of a flow sends or receives an amount of traffic equal to its demand $d_i$. The bidirectional link power constraints keep the power statuses of the links $(u,v)$ and $(v,u)$ consistent. The switch-to-link correlation constraints ensure that when a switch $u$ is powered off, all links connected to this switch are also powered off. Similarly, the link-to-switch correlation constraints ensure that when all links connected to a switch $u$ are powered off, the switch is also powered off.
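To make the formulation concrete, here is a minimal sketch of the model in Python using the PuLP library on a tiny assumed instance (one flow with a single candidate path across two switches). PuLP, the toy topology, and the variable names are assumptions for illustration; they are not the prototype's actual implementation.

```python
import pulp

# Assumed toy instance: directed links with capacity c(u, v), unit link power
# cost a(u, v), switch power cost b(u), and one flow with its candidate path.
links = {("h1", "s1"): 10, ("s1", "h1"): 10, ("s1", "s2"): 10, ("s2", "s1"): 10,
         ("s2", "h2"): 10, ("h2", "s2"): 10}
link_cost = {e: 1 for e in links}
switches = ["s1", "s2"]
switch_cost = {s: 15 for s in switches}
neighbors = {"s1": ["h1", "s2"], "s2": ["s1", "h2"]}    # V_u
flows = {0: {"demand": 4,
             "paths": [[("h1", "s1"), ("s1", "s2"), ("s2", "h2")]]}}

prob = pulp.LpProblem("min_subset_topology", pulp.LpMinimize)
f = {(i, j): pulp.LpVariable(f"f_{i}_{j}", lowBound=0)
     for i, fl in flows.items() for j in range(len(fl["paths"]))}
X = {e: pulp.LpVariable(f"X_{e[0]}_{e[1]}", cat="Binary") for e in links}
Y = {s: pulp.LpVariable(f"Y_{s}", cat="Binary") for s in switches}

# Objective: total power cost of powered-on links and switches.
prob += (pulp.lpSum(X[e] * link_cost[e] for e in links)
         + pulp.lpSum(Y[s] * switch_cost[s] for s in switches))

# Capacity limitation: traffic on a link fits its capacity and the link is on.
for e, cap in links.items():
    prob += pulp.lpSum(f[i, j] for i, fl in flows.items()
                       for j, path in enumerate(fl["paths"]) if e in path) <= X[e] * cap

# Demand satisfaction: each flow's paths carry exactly its demand.
for i, fl in flows.items():
    prob += pulp.lpSum(f[i, j] for j in range(len(fl["paths"]))) == fl["demand"]

# Bidirectional link power and switch/link correlation constraints.
for (u, v) in links:
    prob += X[u, v] == X[v, u]
for s in switches:
    for w in neighbors[s]:
        prob += X[s, w] <= Y[s]
    prob += Y[s] <= pulp.lpSum(X[s, w] for w in neighbors[s])

prob.solve()
print({e: X[e].value() for e in links}, {s: Y[s].value() for s in switches})
```

Because $X_{u,v}$ and $Y_u$ are binary, the model is a mixed-integer program; the heuristic described next trades this optimality guarantee for much lower computation time.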
2) Subset Topology Composition using Heuristic: We can find a subset topology using a heuristic within a short period of time, even for a large-scale DCN. While this heuristic does not guarantee a solution within a bound of the optimum, it produces a high-quality subset topology in practice. In brief, this heuristic, which is called greedy bin packing [2], evaluates the possible paths between the source and the target of each flow, and allocates the flow to the leftmost or the rightmost path with sufficient capacity. Similar to the linear programming-based formal model, this approach requires knowledge of the traffic matrix in advance, but it can compute the solution incrementally; we can process the traffic matrix online with this heuristic.
Algorithm 1: Heuristic for Subset Topology Composition
Input: T (Traffic Matrix), DCN Topology, Link Capacity
Output: On/off status of switches and links
1: for all flow f in T do
2:   list_p <- possible paths from f_src to f_dst
3:   sort list_p by position from left to right
4:   for all path p in list_p do
5:     if all links l in p satisfy cap_avail[l] > f_dmd then
6:       Assign f to p
7:       for all link l in p do
8:         cap_avail[l] = cap_avail[l] - f_dmd
9:       end for
10:      break
11:    end if
12:  end for
13: end for
14: for all link l in DCN do
15:   if cap_avail[l] < cap_max[l] then
16:     Set l and connected switches to turn on
17:   end if
18: end for

Algorithm 1 describes the heuristic for subset topology composition in detail. It takes a traffic matrix T, a DCN topology, and the capacity of each link as input. The traffic matrix is a set of flows specified by (source, target, demand) tuples. The output of this algorithm is the on/off status of the switches and links that constitute the entire physical DCN topology. By using only the turned-on switches and links specified by this algorithm, we can construct a near-minimum subset topology that satisfies all the traffic demands in T. This subset topology composition algorithm consists of two parts: 1) allocation of each flow in T to the leftmost path with sufficient capacity (lines 1-13) and 2) determination of the switches and links to be turned on (lines 14-18).
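The listing below is a minimal Python sketch of Algorithm 1 under simplifying assumptions: candidate paths are precomputed per flow and already ordered from leftmost to rightmost, links are plain tuples, and flows that fit on no path are simply left unassigned. It is illustrative rather than the paper's prototype code.

```python
def compose_subset_topology(traffic_matrix, paths, link_capacity):
    """Greedy bin packing in the spirit of Algorithm 1.

    traffic_matrix: list of (src, dst, demand) tuples.
    paths: dict mapping (src, dst) to candidate paths, each a list of links,
           ordered from the leftmost to the rightmost path.
    link_capacity: dict mapping link -> maximum capacity.
    Returns the sets of links and nodes to keep powered on.
    """
    cap_avail = dict(link_capacity)            # remaining capacity per link
    # Lines 1-13: place every flow on the leftmost path that still fits.
    for src, dst, demand in traffic_matrix:
        for path in paths[(src, dst)]:
            if all(cap_avail[link] > demand for link in path):
                for link in path:
                    cap_avail[link] -= demand
                break                          # flow assigned, go to next flow
    # Lines 14-18: any link that carries traffic stays on, together with the
    # nodes it connects.
    links_on = {link for link in link_capacity
                if cap_avail[link] < link_capacity[link]}
    nodes_on = {node for link in links_on for node in link}
    return links_on, nodes_on

# Assumed toy input: one flow over a single two-hop path.
tm = [("h1", "h2", 4)]
candidate_paths = {("h1", "h2"): [[("h1", "s1"), ("s1", "h2")]]}
capacity = {("h1", "s1"): 10, ("s1", "h2"): 10}
print(compose_subset_topology(tm, candidate_paths, capacity))
```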
3) Extra Switch and Link Addition: The subset topology composition algorithms try to find the subset topology with the minimum number of links and switches. Thus, the probability that these subset links will experience traffic congestion is very high. By adding extra switches and links to the found minimum topology, the link congestion probability can be reduced. A procedure for re-distributing the ever-changing traffic demands over the augmented optimal topology is necessary to fully utilize the added extra links and to reduce possible link congestion.

C. Traffic Load Balancing

A few specific links in a DCN experience congestion, especially the links connected to core switches, while the majority of links are underutilized due to a static routing path selection scheme [3]. We propose dynamic traffic load balancing algorithms that minimize possible link congestion by distributing traffic demands over the entire DCN topology or over the subset topology found by the optimal topology composition algorithms. We propose two different traffic load balancing algorithms for predicted traffic demands, using path-based MCF and a heuristic. These algorithms periodically distribute the ever-changing estimated traffic demands, which are predicted in the form of a traffic matrix on a time scale of a few minutes or seconds, over the found optimal topology. This makes it possible to accommodate more traffic demands without installing extra network resources.

1) Traffic Load Balancing using Path-based MCF: We can allocate the flows in a traffic matrix T to network resources using path-based MCF while distributing the traffic load to minimize the MLU. It is required to define an additional decision variable $m$, which represents the MLU, and to modify the capacity limitation constraints with this variable $m$. The predicted traffic load balancing model using path-based MCF is defined as follows.

Input
- Network topology: $G(V, E)$
- Traffic matrix: $T$
- Link capacity: $\forall (u,v) \in E,\ c(u,v)$
- Set of considered paths of flow $i$: $\forall i,\ P_{T_i} = \{p_{i,1}, \ldots, p_{i,j}, \ldots, p_{i,l}\}$
- Subset of considered paths that contain a link $(u,v)$: $\forall i,\ P_i^{(u,v)} \subseteq P_{T_i}$

Decision Variables
- Maximum Link Utilization (MLU): $m$
- Flow along each path: $\forall i,\ \forall p \in P_{T_i},\ f_i(p)$

Objective
Minimize $m$

Constraints
- Capacity limitation: $\forall (u,v) \in E:\ \sum_{i=1}^{k} \sum_{p \in P_i^{(u,v)}} f_i(p) \le m\, c(u,v)$
- Demand satisfaction: $\forall i:\ \sum_{p \in P_{T_i}} f_i(p) = d_i$

The modified capacity limitation constraints ensure that the sum of flows along each link $(u,v)$ does not exceed the adjusted link capacity $m\, c(u,v)$. This predicted traffic load balancing model exploits the modified capacity limitation constraints to minimize the MLU over all links in the network by setting the objective function to minimize $m$.
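For comparison with the topology composition model, the following is a minimal PuLP sketch of the MLU-minimizing variant on an assumed toy instance with two parallel paths; the library choice, the instance, and the names are illustrative assumptions rather than the prototype's code.

```python
import pulp

# Assumed toy instance: two parallel two-hop paths between h1 and h2.
links = {("h1", "s1"): 10, ("s1", "h2"): 10, ("h1", "s2"): 10, ("s2", "h2"): 10}
flows = {0: {"demand": 12,
             "paths": [[("h1", "s1"), ("s1", "h2")],
                       [("h1", "s2"), ("s2", "h2")]]}}

prob = pulp.LpProblem("min_mlu_load_balancing", pulp.LpMinimize)
m = pulp.LpVariable("m", lowBound=0)                       # MLU
f = {(i, j): pulp.LpVariable(f"f_{i}_{j}", lowBound=0)
     for i, fl in flows.items() for j in range(len(fl["paths"]))}

prob += m                                                  # objective: minimize MLU

# Modified capacity limitation: traffic on each link <= m * c(u, v).
for e, cap in links.items():
    prob += pulp.lpSum(f[i, j] for i, fl in flows.items()
                       for j, path in enumerate(fl["paths"]) if e in path) <= m * cap

# Demand satisfaction: each flow's paths carry exactly its demand.
for i, fl in flows.items():
    prob += pulp.lpSum(f[i, j] for j in range(len(fl["paths"]))) == fl["demand"]

prob.solve()
print("MLU:", m.value(), {k: v.value() for k, v in f.items()})
```

Unlike the topology composition model, there are no binary variables here, so this variant is a pure linear program and solves quickly even for larger instances.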
2) Traffic Load Balancing using Heuristic: We can allocate flows to minimize the MLU using a heuristic within a short period of time, even for a large-scale DCN. While this heuristic does not guarantee a solution within a bound of the optimum, it produces a high-quality traffic allocation that minimizes the MLU in practice.

Algorithm 2: Heuristic for Traffic Load Balancing
Input: T (Traffic Matrix), DCN Topology, Link Capacity
Output: MLU-minimizing path allocation of the flows in T
1: sort T in descending order of demands
2: for all flow f in T do
3:   list_p <- possible paths from f_src to f_dst
4:   list_MLU <- null
5:   for all path p in list_p do
6:     list_MLU[p] = MLU of p
7:   end for
8:   p_sel = list_p[index of minimum list_MLU]
9:   Assign f to p_sel
10:  for all link l in p_sel do
11:    cap_avail[l] = cap_avail[l] - f_dmd
12:  end for
13: end for

Algorithm 2 describes the heuristic for predicted traffic load balancing in detail. It takes a traffic matrix T, a DCN topology, and the capacity of each link as input. The traffic matrix is a set of flows specified by (source, target, demand) tuples. The output of this algorithm is an allocation of the flows in T to paths that minimizes the MLU. This heuristic sorts T in descending order of traffic demands to allocate large flows first (line 1). Then, for each flow in T, it evaluates the possible paths from the source f_src to the target f_dst (line 3). Among these considered paths, it allocates the flow f to the path p_sel with the minimum MLU (lines 4-9). Finally, it decreases the available capacity cap_avail of the links composing the selected path p_sel by the traffic demand f_dmd of the flow f (lines 10-12).

IV. IMPLEMENTATION AND EVALUATION

We employed the Mininet [19] network emulation tool to construct a virtual DCN topology. Each switch in the virtual topology emulated by Mininet is a virtual instance of Open vSwitch [20]. To generate traffic among hosts, we used iperf [21], a network testing tool that can create TCP or UDP data streams and provides measurement logs of those streams. We chose Floodlight [22], a Java-based OpenFlow controller, as the centralized manager of the DCN; it updates the flow tables of the switches using the OpenFlow protocol to apply the TE results. The traffic engineering manager, the core component of our TE system, executes the proposed TE mechanism and notifies the results to the SDN controller using the Floodlight APIs. We implemented a prototype of this TE manager in the Python programming language.
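As an illustration of this kind of setup, the sketch below builds a small two-pod, Fat-Tree-like Mininet topology attached to a remote OpenFlow controller and drives an iperf stream between two hosts. The topology size, the controller IP address and port, and the iperf parameters are assumptions for illustration, not the exact experimental configuration.

```python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topo import Topo

class SmallFatTree(Topo):
    """Tiny Fat-Tree-like topology: one core switch, two edge switches, four hosts."""
    def build(self):
        core = self.addSwitch("s1")
        for pod in range(2):
            edge = self.addSwitch("s%d" % (pod + 2))
            self.addLink(core, edge)
            for h in range(2):
                host = self.addHost("h%d" % (pod * 2 + h + 1))
                self.addLink(edge, host)

if __name__ == "__main__":
    # Attach the emulated switches to an external controller (e.g., Floodlight).
    net = Mininet(topo=SmallFatTree(), switch=OVSSwitch, controller=None)
    net.addController("c0", controller=RemoteController,
                      ip="127.0.0.1", port=6653)
    net.start()
    h1, h4 = net.get("h1", "h4")
    # Generate a UDP stream between two hosts with iperf for measurement.
    h4.cmd("iperf -s -u &")
    print(h1.cmd("iperf -c %s -u -b 10M -t 10" % h4.IP()))
    net.stop()
```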
We used three different traffic matrix data sets, summarized in Table I, for the experiments. In Table I, k means that the Fat-Tree topology is organized using only k-port commodity switches. Each flow in these three traffic matrices has a demand that was randomly selected between 1% and 2% of the maximum link capacity. Data set 1 was generated by increasing the number of hosts to examine the influence of the size of a DCN. Data set 2 was generated by increasing the number of flows per host to identify the influence of the traffic load in a DCN. Lastly, data set 3 was generated by changing the intra-rack traffic ratio. Moreover, all data sets contain unpredicted traffic; the unpredicted traffic was assumed to contain 5 flows per host with traffic demands that were randomly selected between 1% and 5% of the maximum link capacity.

TABLE I. Traffic matrix data sets: for each of Sets 1-3, the k of the Fat-Tree, the number of hosts, the number of flows per host, the intra-rack traffic ratio, and the traffic demands (1-2% of the maximum link capacity).

Fig. 3 shows the power saving ratio when the proposed optimal topology composition approach is applied. We assumed that the power costs of each switch and link were 15 and 1, respectively. We calculated power saving ratios after adding i extra aggregate switches per pod and j extra core switches to the found optimal subset topology (represented as A(i, j)). The results show that our proposed mechanism reduces energy consumption by around 41% with the minimum topology composition (35% for A(1, k/8), 31% for A(2, 2k/8), and 27% for A(3, 3k/8)). We can identify four trends in the power saving ratio: 1) the power saving ratio increased as the size of the DCN grew; 2) the saving ratio decreased as the number of flows per host increased; 3) the ratio increased as the intra-rack traffic ratio increased; and 4) the gap in the power saving ratio between the optimal and an augmented topology decreased as the size of the DCN grew.

Fig. 3. Power saving ratios (k is the parameter used to construct the k-ary Fat-Tree topology): (a) TM Set 1, varying the number of hosts; (b) TM Set 2, varying the number of flows per host; (c) TM Set 3, varying the intra-rack traffic ratio. Series: Minimum, A(1, k/8), A(2, 2k/8), A(3, 3k/8).

Fig. 4 shows the Maximum Link Utilization (MLU). Note that the static routing scheme of Fat-Tree cannot be applied to an optimal subset topology because it was designed to make use of the entire Fat-Tree topology to distribute traffic load. We also plotted the average Link Utilization (LU) of the aggregate and core links in the figure as a baseline. We can identify that the MLU of the static routing scheme increased 1) as the size of the DCN grew, 2) as the number of flows per host increased, and 3) as the intra-rack traffic ratio decreased. In other words, our traffic load balancing schemes significantly reduced the MLU in comparison with the static scheme (a maximum of 6%). Moreover, our traffic load balancing schemes stably kept the MLU close to the average LU, whereas the static scheme failed to do so.

Fig. 4. Maximum link utilization comparisons of Static, Heuristic, and LP schemes: (a) TM Set 1; (b) TM Set 2; (c) TM Set 3.

V. CONCLUSION AND FUTURE WORK

In this paper, we have presented a dynamic TE system for a DCN in detail. We explained the overall system architecture of the TE system, which has three components: a DCN, an SDN controller, and a TE manager. Thereafter, optimal topology composition and traffic load balancing for the TE manager were described using both linear programming and heuristic approaches. We have implemented a prototype of the proposed TE system for a DCN using various tools. The simulation results show that the proposed method reduces power consumption by 41% with the minimum topology composition and achieves 6% lower Maximum Link Utilization (MLU) compared to a static routing scheme. As future work, we will further improve the TM estimation techniques considering the characteristics of a DCN. Possible research directions include a) exploiting data obtained from end hosts, as well as data from switches, for estimating the TM, and b) extending the TE algorithm by utilizing flow-level characteristics of a DCN.
REFERENCES

[1] M. Al-Fares, A. Loukissas, and A. Vahdat, "A scalable, commodity data center network architecture," in Proc. ACM SIGCOMM '08, Seattle, USA, Aug. 2008.
[2] B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, and N. McKeown, "ElasticTree: Saving energy in data center networks," in Proc. 7th USENIX Symposium on Networked Systems Design and Implementation (NSDI '10), San Jose, USA, Apr. 2010.
[3] T. Benson, A. Akella, and D. A. Maltz, "Network traffic characteristics of data centers in the wild," in Proc. ACM Internet Measurement Conference (IMC '10), Melbourne, Australia, Nov. 2010.
[4] C. Hopps, "Analysis of an Equal-Cost Multi-Path algorithm," RFC 2992, Nov. 2000.
[5] D. Awduche, A. Chiu, A. Elwalid, I. Widjaja, and X. Xiao, "Overview and principles of Internet traffic engineering," RFC 3272, May 2002.
[6] D. Awduche, J. Malcolm, J. Agogbua, M. O'Dell, and J. McManus, "Requirements for traffic engineering over MPLS," RFC 2702, Sep. 1999.
[7] D. Awduche, "MPLS and traffic engineering in IP networks," IEEE Communications Magazine, vol. 37, no. 12, 1999.
[8] B. Fortz and M. Thorup, "Internet traffic engineering by optimizing OSPF weights," in Proc. 19th IEEE International Conference on Computer Communications (INFOCOM 2000), Tel Aviv, Israel, Mar. 2000.
[9] B. Fortz, J. Rexford, and M. Thorup, "Traffic engineering with traditional IP routing protocols," IEEE Communications Magazine, vol. 40, no. 10, 2002.
[10] T. Benson, A. Anand, A. Akella, and M. Zhang, "MicroTE: Fine grained traffic engineering for data centers," in Proc. 7th International Conference on emerging Networking EXperiments and Technologies (CoNEXT '11), Tokyo, Japan, Dec. 2011.
[11] M. Al-Fares, S. Radhakrishnan, B. Raghavan, N. Huang, and A. Vahdat, "Hedera: Dynamic flow scheduling for data center networks," in Proc. 7th USENIX Symposium on Networked Systems Design and Implementation (NSDI '10), San Jose, USA, Apr. 2010.
[12] Y. Li and D. Pan, "OpenFlow based load balancing for Fat-Tree networks with multipath support," in Proc. IEEE International Conference on Communications (ICC '13), Budapest, Hungary, Jun. 2013.
[13] S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh, S. Venkata, J. Wanderer, J. Zhou, M. Zhu, and J. Zolla, "B4: Experience with a globally-deployed software defined WAN," in Proc. ACM SIGCOMM '13, Hong Kong, China, Aug. 2013.
[14] C.-Y. Hong, S. Kandula, R. Mahajan, M. Zhang, V. Gill, M. Nanduri, and R. Wattenhofer, "Achieving high utilization with software-driven WAN," in Proc. ACM SIGCOMM '13, Hong Kong, China, Aug. 2013.
[15] H. Long, Y. Shen, M. Guo, and F. Tang, "LABERIO: Dynamic load-balanced routing in OpenFlow-enabled networks," in Proc. 27th IEEE International Conference on Advanced Information Networking and Applications (AINA '13), Barcelona, Spain, Mar. 2013.
[16] F. P. Tso and D. P. Pezaros, "Improving data center network utilization using near-optimal traffic engineering," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, Jun. 2013.
[17] S. Kandula, S. Sengupta, A. Greenberg, P. Patel, and R. Chaiken, "The nature of datacenter traffic: Measurements & analysis," in Proc. ACM Internet Measurement Conference (IMC '09), Chicago, USA, Nov. 2009.
[18] D. Xu, M. Chiang, and J. Rexford, "Link-state routing with hop-by-hop forwarding can achieve optimal traffic engineering," IEEE/ACM Transactions on Networking, vol. 19, no. 6, Dec. 2011.
[19] Mininet: An instant virtual network on your laptop (or other PC). [Online; accessed Nov. 27, 2013].
[20] Open vSwitch: An open virtual switch. [Online; accessed Dec. 2, 2013].
[21] iperf. [Online; accessed Dec. 2, 2013].
[22] Floodlight OpenFlow controller. [Online; accessed Oct. 23, 2013].

Copyright IEICE - Asia-Pacific Network Operations and Management Symposium (APNOMS) 2014.