Enabling Flow-based Routing Control in Data Center Networks using Probe and ECMP
IEEE INFOCOM 2011 Workshop on Cloud Computing

Enabling Flow-based Routing Control in Data Center Networks using Probe and ECMP

Kang Xi, Yulei Liu and H. Jonathan Chao
Polytechnic Institute of New York University, Brooklyn, New York

Abstract: Data center networks often use densely interconnected topologies to provide high bandwidth for internal data exchange. In such networks, it is critical to employ effective load balancing schemes so that the bandwidth resources can be fully utilized. A simple and widely adopted scheme is equal-cost multi-path (ECMP) routing, which is generally supported by commodity switches and routers. However, research shows that ECMP cannot always ensure even traffic distribution among multiple paths. Consequently, ECMP cannot guarantee optimal resource utilization. We propose a scheme to complement ECMP with per-flow reroute. The basic idea is to perform ECMP-based load balancing by default. When congestion occurs on a certain link, we dynamically reroute one or a few big flows to alternative paths to alleviate the congestion. The main contribution of our research is a scheme that enables per-flow reroute without introducing any modifications to IP switches and routers. All the flow-based operations and reroute functionalities are implemented in software installed on end hosts and centralized controllers. We call this scheme PROBE (Probe and RerOute Based on ECMP). PROBE uses a traceroute-like approach to discover alternative paths and modifies packet headers to enable flow-based reroute. We show that PROBE is a low-cost, low-complexity, and feasible scheme that can be easily implemented in existing data center networks consisting of commodity switches and routers.

Index Terms: reroute; ECMP; data center network; load balancing; flow-based routing

I. INTRODUCTION

The purpose of load balancing in communication networks is to route traffic across multiple paths so that the load on the links and/or nodes is evenly distributed. In practice, people usually focus on links when designing and evaluating load balancing. Typically, routing in an autonomous system (AS) is based on shortest path algorithms, e.g., open shortest path first (OSPF) [14]. Without load balancing over multiple paths, the shortest path from a source to a destination is calculated in advance, and all the corresponding traffic is directed through this shortest path.

Data center networks often use densely interconnected topologies to provide large bandwidth for internal data exchange. For example, fat-tree and Clos networks, where a large number of paths exist between each pair of nodes, are widely adopted [6], [15]. Newly proposed data center network topologies, including DCell [8], BCube [7], and DPillar [11], also share the feature of dense interconnection. In this type of network, using single-path routing without load balancing cannot fully utilize the network capacity. As a result, congestion may occur even if the network still has abundant unused bandwidth.

Fig. 1. Traffic load balancing.

In Fig. 1, if two shortest paths A-E-F-D and G-E-F-J are selected for single-path routing, link E-F may be overloaded even if paths A-B-C-D and G-H-I-J have unused bandwidth. This problem can be alleviated using equal-cost multi-path (ECMP) routing [9]. With ECMP, multiple shortest paths are calculated from a source to a destination, and traffic is distributed across these equal-cost paths to achieve load balancing. In Fig. 1,
if A-E-F-D and A-B-C-D are used to carry traffic from A to D, while G-E-F-J and G-H-I-J are used to carry traffic from G to J, the network utilization will be greatly improved.

With ECMP, each router may have multiple output ports for the same destination prefix, which leads to multiple paths. When a packet arrives, the router calculates a hash value based on the packet header and selects one of the feasible output ports based on the hash value. It is common practice to use the 5-tuple header fields [source IP address, destination IP address, protocol type, source port, destination port] to calculate the hash value. With this approach, packets belonging to the same flow follow the same path, thus avoiding out-of-order delivery.
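For illustration, here is a minimal sketch of how a router's hash-based output port selection could look. The MD5 hash and the string encoding of the 5-tuple are illustrative stand-ins for vendor-specific hardware hashes:

```python
# Illustrative sketch: map a flow's 5-tuple to one of several equal-cost
# output ports so that all packets of the same flow take the same path.
import hashlib

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, out_ports, salt=b""):
    """Pick an output port for a flow by hashing its 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.md5(salt + key).digest()
    index = int.from_bytes(digest[:4], "big") % len(out_ports)
    return out_ports[index]

# Packets of the same flow always hash to the same port:
ports = ["eth1", "eth2"]
assert (ecmp_next_hop("10.0.0.1", "10.0.1.2", 6, 10000, 5000, ports)
        == ecmp_next_hop("10.0.0.1", "10.0.1.2", 6, 10000, 5000, ports))
```

Because only the 5-tuple feeds the hash, changing a port number is enough to steer a flow onto a different equal-cost path, which is exactly the property PROBE exploits.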
However, ECMP cannot guarantee good load balancing for two major reasons. First, hash-based traffic distribution is per-flow, not per-packet. Thus the result is to balance the number of flows on different paths, not the bit rate. Even if two paths carry the same number of flows, the traffic loads may not be equal, since the flows have different bit rates [1]. Second, from the network-wide viewpoint, using ECMP may still lead to overload on certain links. In Fig. 1, if A-D and G-J evenly distribute their traffic between their two paths, the load on link E-F would be twice the load on any other link. One may think about adjusting the hash function in a sophisticated way to achieve network-wide load balancing. Unfortunately, this is infeasible because the traffic fluctuates all the time and route recalculation occurs each time there is a topology change. Tuning hash functions can barely follow such dynamic changes, let alone the considerable complexity involved.

A common approach to solve the problems of ECMP is flow-based routing. OpenFlow [13] defines a framework where switches and routers maintain flow tables and perform per-flow routing. Such flow tables can be dynamically modified from a remote station. Hedera [1] shows how to use OpenFlow in data center networks to achieve load balancing. There is no doubt that OpenFlow is a powerful scheme with great potential. However, it is not supported by existing commodity switches and routers, and flow table configuration and maintenance are non-trivial.

In this paper we present a scheme called PROBE (Probe and RerOute Based on ECMP). PROBE exploits the multipath routing capability provided by ECMP to realize flow-based rerouting, thus enabling fine-grained traffic control. The key advantage of PROBE is that it achieves flow-based rerouting without requiring flow tables in the routers. In fact, it does not require any modifications to the existing routers; all the operations are performed at the end hosts. With this property, PROBE can be easily deployed in existing data center networks to improve performance.

II. ARCHITECTURE OF PROBE

A. Problem Description

We consider a network that employs ECMP for load balancing but does not introduce any restrictions on the hash function used by each router to determine traffic distribution. Consider the situation in Fig. 2, where a big flow from host s to host d experiences congestion on link B-C. Our goal is to design a scheme that can discover an alternative path (i.e., either A-B-F-D or A-E-F-D in this example) and reroute the flow through this path to bypass the congested link.

Fig. 2. Example of PROBE. By translating ports [10000, 5000] to [15008, 9000] in packet headers, the source agent reroutes the flow from the original path to alternative path 2. The destination agent translates [15008, 9000] back to [10000, 5000].

B. Overview of PROBE

The essence of PROBE is to let the source host probe the network to discover alternative paths using a traceroute-like probe [12]. Since routers perform multi-path routing based on the 5-tuple header fields, our scheme conducts multiple probes with various 5-tuple field values to obtain alternative paths. Among the 5-tuple fields, we only change the protocol port numbers and keep the other fields unchanged.
From the probe results, the host is able to create a table indicating the 5-tuple values for each alternative path. When the host needs to send packets along a specific alternative path for rerouting, it only needs to rewrite the 5-tuple header fields with the corresponding values from the probe results. Fig. 2 shows an example.

1) Setup: A TCP flow from s to d is currently taking A-B-C-D with source/destination ports [10000, 5000]. Hosts s and d run agent software that implements the PROBE scheme.

2) Probe: The agent software in host s starts multiple traceroute-like probes to discover alternative paths by setting the port numbers to various values. The IP addresses and protocol type are set to the same values as the original flow. From the probe response packets, the source agent software finds that ports [15000, 9000] correspond to path A-B-F-D, while [15008, 9000] correspond to path A-E-F-D.

3) Reroute: When the original flow needs to be rerouted to avoid the congestion on link B-C, the agent software in host s creates an entry in its flow table to translate ports [10000, 5000] to [15008, 9000]. This entry is sent to the agent software at host d before any packet is rerouted. After that, the source agent translates each packet with ports [10000, 5000] to [15008, 9000] and sends it to the network. The routers perform ECMP-based multipath routing and deliver these packets along A-E-F-D to host d, unaware of the port translation. The destination agent at host d performs a simple table lookup and translates the ports back to the original values. This is similar to network address translation (NAT) [5]. Finally, the packets are delivered to the applications as usual.
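The reroute step thus amounts to a pair of consistent lookup tables at the two agents. A minimal sketch of this translation logic reproduces the walk-through above; the class and field names are illustrative, not the actual agent code:

```python
# Illustrative NAT-like port translation at the source and destination agents.
class Agent:
    def __init__(self):
        self.tx = {}  # (src_port, dst_port) -> rewritten ports (source side)
        self.rx = {}  # reverse mapping (destination side)

    def install(self, old_ports, new_ports):
        self.tx[old_ports] = new_ports
        self.rx[new_ports] = old_ports

    def translate_out(self, pkt):   # source agent, before sending
        pkt["ports"] = self.tx.get(pkt["ports"], pkt["ports"])
        return pkt

    def translate_in(self, pkt):    # destination agent, before delivery
        pkt["ports"] = self.rx.get(pkt["ports"], pkt["ports"])
        return pkt

src, dst = Agent(), Agent()
for agent in (src, dst):
    agent.install((10000, 5000), (15008, 9000))   # reroute onto A-E-F-D

pkt = {"ports": (10000, 5000)}
assert src.translate_out(pkt)["ports"] == (15008, 9000)  # routers see new ports
assert dst.translate_in(pkt)["ports"] == (10000, 5000)   # application sees original
```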
Fig. 3. Architecture of PROBE: the source host detects a big flow, probes to find alternative paths, and reports the flow and paths to a controller; the controller monitors links and, upon congestion, determines which flow to reroute and which path to use; at the controller's notification, the host reroutes the big flow to the selected alternative path by TCP/UDP port translation.

C. Technical Details

The architecture of PROBE is illustrated in Fig. 3. Flow detection, path probe, and flow reroute are function modules in the end hosts and are implemented as part of the agent software. Path monitoring and reroute decision-making can be implemented either in a centralized controller or distributed in the agent software. For efficiency and simplicity, centralized controllers have been widely adopted in enterprise networks and data centers [1], [3], [6], [15]. In this paper, we focus on the implementation of PROBE with a centralized controller. The details of the function modules are explained below.

Flow detection: Each end host monitors its outgoing traffic to identify big flows. A flow is defined using the 5-tuple header fields. A big flow is defined as a packet stream that lasts longer than a predetermined duration threshold and has a bit rate higher than a predetermined rate threshold. Reroute is performed only for big flows because moving such flows is effective in alleviating congestion. In contrast, small flows either last a short period or have a low bit rate; rerouting them is less effective at resolving congestion and improving quality of service. Flow detection can be performed using various methods. One typical approach is based on flow sampling, where packets are periodically sampled to measure or estimate the rate of flows, as in sFlow [16] and Cisco NetFlow [4]. While flow sampling may generate considerable error for small flows, it is generally good enough to detect big flows, since such flows have more packets and thus a higher chance of being sampled.

Path probe: Since end hosts do not participate in routing, they do not have the topology and routing information necessary for flow rerouting. To resolve this problem, we let end hosts obtain path information by probing the network, using an approach similar to traceroute. The software agent in the source node initiates multiple probes, each carrying different 5-tuple header field values. Since the routers hash the 5-tuple header fields to perform ECMP load balancing, packets belonging to different probe sessions are likely to take different paths. As in traceroute, the agent software sends multiple packets with the TTL fields set to 1, 2, 3, etc., getting feedback packets from the routers, with which the agent is able to reconstruct the multiple paths maintained in the network. Unlike conventional traceroute, the agent software generates probe packets with TCP/UDP headers, just like tcptraceroute [17] and Paris traceroute [2]; this is necessary since we rely on port numbers to achieve path control. We would like to clarify that the path probe is performed only after a big flow is detected; for small flows, conventional routing is applied. Since big flows are only a small fraction of all flows [10], probing does not generate much overhead traffic. We discuss this and several other critical issues in the next section.
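For illustration, the probe step could be sketched as follows using Scapy; the tooling choice and function names are illustrative, not the actual agent implementation. Each session fixes a port pair and probes all TTLs in parallel; raw-socket privileges are required, and the ICMP time-exceeded replies reveal the routers on the path ECMP selects for that 5-tuple:

```python
# Illustrative traceroute-like probe with TCP headers and varying source ports.
from scapy.all import IP, TCP, sr   # pip install scapy

def probe_path(dst_ip, sport, dport, max_hops=8):
    """Return the router IPs on the ECMP path selected for this port pair."""
    pkts = [IP(dst=dst_ip, ttl=t) / TCP(sport=sport, dport=dport, flags="S")
            for t in range(1, max_hops + 1)]
    answered, _ = sr(pkts, timeout=1, verbose=0)   # all TTLs probed in parallel
    hops = sorted((snd.ttl, rcv.src) for snd, rcv in answered)
    return tuple(ip for _, ip in hops)

# Ten probe sessions for one flow, each with a different source port;
# distinct tuples correspond to distinct alternative paths.
paths = {(sport, 9000): probe_path("10.0.1.2", sport, 9000)
         for sport in range(15000, 15010)}
```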
Path monitoring: In a data center network (or enterprise network) that employs centralized controllers, the traffic load of each link is constantly measured and reported to a controller. Therefore, it is straightforward to exploit this functionality to let the controller make rerouting decisions. After a host detects a big flow and completes the path probe, it reports the big flow and the alternative paths to the controller, as illustrated in Fig. 4. The routers perform periodic measurements and report the load on each link to the controller. When the load on a link exceeds a threshold, the controller picks a big flow that traverses this link and ensures that it has an alternative path with available bandwidth for rerouting. After that, the controller notifies the source host to start the rerouting. The notification includes the flow ID and the path ID, with which the host agent software knows what values to use for port translation.
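A sketch of this decision logic follows; the data structures, the 0.8 utilization threshold, and the headroom test are illustrative assumptions:

```python
# Illustrative controller logic: on congestion, pick a registered big flow
# crossing the hot link whose alternative path still has headroom.
THRESHOLD = 0.8  # utilization fraction treated as congestion (assumed)

def pick_reroute(link_loads, big_flows):
    """link_loads: {link: utilization}; big_flows: [(flow_id, host, path, alt_paths)],
    where each path is a tuple of links reported by the host after its probe."""
    for link, load in link_loads.items():
        if load < THRESHOLD:
            continue                        # link not congested
        for flow_id, host, path, alts in big_flows:
            if link not in path:
                continue                    # flow does not cross the hot link
            for alt in alts:
                # require spare capacity on every link of the alternative path
                if all(link_loads.get(lk, 0.0) < THRESHOLD for lk in alt):
                    return host, flow_id, alt   # notify this host to reroute
    return None                             # nothing to do
```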
Fig. 4. Reroute decision-making using centralized control: hosts report flow and path information, routers report link loads, and the controller sends reroute notifications to the hosts.

Flow reroute: After a host receives a reroute notification from the controller, it first creates a translation binding between the original ports and the new ports and sends the information to the destination agent. The source and destination agents maintain a TCP connection to ensure reliable message exchange. After that, the big flow is rerouted simply by performing port translation at the source and destination hosts. The IP address and protocol values are not changed, to ensure the correctness of packet delivery.

D. Advantages of PROBE

The most prominent advantage of PROBE is that it enables explicit path control to avoid congested links without introducing any modifications to the routers. All the function modules are implemented in software running on end hosts and centralized controllers. PROBE can be easily deployed in an existing network using commodity IP routers and switches. It is a low-cost, low-complexity, and practical solution.

PROBE performs flow-based traffic control. Unlike OpenFlow [13] and Ethane [3], it does not require flow tables in switches and routers. While the software agents need to maintain flow-based lookup tables, the complexity is substantially lower because the table size is small, the updates are simple, and the maintenance is fully distributed.

PROBE exploits the features of ECMP and complements it to achieve better load balancing. While conventional ECMP is simple, it does not achieve even traffic distribution and does not offer per-flow routing controllability. PROBE runs on top of ECMP and has the ability to fine-tune the routing of big flows to fully utilize the network resources. PROBE thus helps to achieve high performance without introducing high complexity.

III. DESIGN CONSIDERATIONS

A. Path Probing

One may wonder if the path probe would take too much time to meet the timing requirement of flow reroute. Conventional traceroute probes a path hop by hop: it starts by setting TTL=1 to discover the first router and then increments the TTL value by one each time, so the delay increases with the hop count. PROBE uses several methods to reduce this delay. The first method is to probe early: probes are triggered immediately after a big flow is detected, most likely before congestion occurs. The second method is to probe multiple hops in parallel; that is, the source agent software sends out multiple packets with the TTL fields set to 1, 2, 3, and so on, so the probe delay does not increase with the hop count. The third method is to probe multiple paths in parallel; e.g., we can start 10 probes (with 10 different port pair values) all at once. The last method is to configure the routers to respond to such probe packets quickly. Our measurement shows that even low-end commodity routers can easily reach sub-millisecond response delay. While certain ISPs restrict the response rate for traceroute due to security considerations, this should not be a problem in data center networks, where the hosts are trusted parties.
Also, the propagation delay is trivial in data center networks, where the physical distance is short. We believe it is feasible to achieve sub-100 ms probe delay in data center networks. Noting that big flows often last for seconds or even more than 10 seconds [10], 100 ms is a fairly reasonable number.

B. Agent Communication and Caching

The source and destination hosts maintain a permanent TCP connection to exchange control messages. Once a rerouting decision is made, the source host creates a port translation rule and sends it to the destination agent for reverse port translation. Therefore, the source and destination agents have consistent translation lookup tables. The source agent can cache the results of a path probe operation for later use, so that it does not need to probe the same destination host frequently. To keep the table up to date, we allow the entries to age out.

C. Traffic Overhead

While each probe generates extra overhead traffic, its impact on data center networks is negligible for several reasons. First, the probe is triggered only by big flows. According to traffic measurements [10], only a small fraction of flows (about 20%) are big flows, while most flows are small. Second, the PROBE agents cache path information for later use, thus they do not probe the same destination frequently.
Third, the probe packets are small, containing a 20-byte IP header and a 20-byte TCP header (or an 8-byte UDP header). The total short-term peak rate of a probe is bounded by

$r = \frac{N H L}{\Delta}, \quad (1)$

where $N$ is the number of parallel probe sessions, $H$ is the maximum hop count, $L$ is the length of a probe packet, and $\Delta$ is the probe duration.

When a physical computer is virtualized into multiple virtual machines, the overhead can be reduced by consolidating the probe operation. For example, we can integrate the PROBE agent with the hypervisor so that all the virtual machines share the same probe results.

IV. PERFORMANCE EVALUATION

In this paper we focus on the design of PROBE and its feasibility. While the evaluation of the load balancing performance improvement using PROBE is an important issue, we leave it to our future research. In this section we mainly study the effectiveness of the path probe. Since ECMP distributes packets pseudo-randomly (via a hash function), it is possible that two or more probe operations happen to yield the same path. Intuitively, in a data center with rich connectivity (e.g., a fat-tree or Clos network topology), it is fairly easy to discover a number of different paths (not necessarily disjoint, though) using multiple probes. We study this issue in depth in what follows.

For simplicity, we conduct the performance evaluation in the regular topology shown in Fig. 5, where the source node s connects to the destination node d through an R-row, T-stage Clos network.

Fig. 5. A multi-stage topology (R = 3, T = 3).

A. Analysis

Assuming all links have equal cost, it is easy to see that the total number of equal-cost shortest paths from s to d is $R^{T+1}$. We assume all the routers perform load balancing using independent, uniformly distributed hash functions. Given an existing path, the probability that a random probe finds a non-overlapping path is

$p_1 = 1 - \frac{1}{R^{T+1}}. \quad (2)$

Therefore, the average number of probes needed to find a different path is

$M_1 = \sum_{m=1}^{\infty} m p_1 (1-p_1)^{m-1} = \frac{1}{p_1} = \frac{R^{T+1}}{R^{T+1}-1}. \quad (3)$

Since data center networks are densely interconnected, the total number of paths $R^{T+1}$ is large. Therefore, $M_1$ is close to 1, which means it is very easy to find a different path in just a few probes.

The above analysis does not require the new path to be link-disjoint from the existing one. Although PROBE does not require the reroute path to be link-disjoint, it is interesting to study the probability. Based on a simple analysis, we find the probability that a random probe gives a link-disjoint path is

$p_2 = \frac{R-1}{R} \left( \frac{R^2-1}{R^2} \right)^{T}. \quad (4)$

Similarly, the average number of probes needed to find a link-disjoint path is

$M_2 = \sum_{m=1}^{\infty} m p_2 (1-p_2)^{m-1} = \frac{1}{p_2} = \frac{R}{R-1} \left( \frac{R^2}{R^2-1} \right)^{T}. \quad (5)$

For a small network with R = 10 and T = 5, the average number of probes is M_2 = 1.16.
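As a quick numeric reading of Eqs. (2)-(5), a sketch under the closed forms above:

```python
# Evaluate Eqs. (2)-(5) for R = 10, T = 5, as an illustrative check.
R, T = 10, 5

p1 = 1 - 1 / R**(T + 1)                      # Eq. (2)
M1 = 1 / p1                                  # Eq. (3): ~1.000001 probes
p2 = (R - 1) / R * ((R**2 - 1) / R**2)**T    # Eq. (4)
M2 = 1 / p2                                  # Eq. (5): ~1.17, matching the
                                             # 1.16 figure above up to rounding
print(f"M1 = {M1:.6f}, M2 = {M2:.2f}")
```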
B. Simulation

The above analysis gives the intuition that, with a few probes, we should be able to discover enough alternative paths to prepare for a reroute. We conduct computer simulations to obtain the number of different paths discovered versus the number of probes in topologies of various sizes. The hash function we use in the routers is salted CRC16, where each router uses a random salt to avoid correlation between different routers in traffic distribution. Note that such salting is required by native ECMP itself to achieve load balancing; it is not a requirement of PROBE. In the ideal case, k probes should discover k different paths; in practice, the number of paths discovered is most likely less than k. In our simulation, we perform a sequence of probes and observe the number of distinct paths discovered. We repeat the experiment multiple times to obtain the average and worst-case numbers. Fig. 6 shows the results for a small network with 5 rows and 4 stages: with 20 probes, we find more than 19 alternative paths on average, and 17 alternative paths in the worst case. Fig. 7 shows a large network with 20 rows and 6 stages: with 20 probes, we always find 20 different alternative paths.
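A compact sketch of this simulation follows; it is an illustrative reconstruction, with zlib.crc32 plus per-router random salts standing in for salted CRC16, and the Clos network modeled as one row choice per stage:

```python
# Illustrative simulation: count distinct paths found by k probe sessions
# that vary only the source port, with each router hashing the 5-tuple
# under its own random salt.
import random, zlib

def discovered_paths(R, T, k, trials=200):
    """Average and worst-case number of distinct paths found by k probes."""
    counts = []
    for _ in range(trials):
        edge_salt = random.randrange(2**32)                # source edge router
        salts = [[random.randrange(2**32) for _ in range(R)]
                 for _ in range(T - 1)]                    # one salt per router
        paths = set()
        for sport in range(15000, 15000 + k):              # k probe sessions
            tup = f"10.0.0.1|10.0.1.2|6|{sport}|9000".encode()
            row = zlib.crc32(tup, edge_salt) % R           # pick stage-1 row
            path = [row]
            for stage_salts in salts:                      # hop across stages
                row = zlib.crc32(tup, stage_salts[row]) % R
                path.append(row)
            paths.add(tuple(path))
        counts.append(len(paths))
    return sum(counts) / trials, min(counts)

print(discovered_paths(R=5, T=4, k=20))   # cf. Fig. 6: average > 19, worst 17
```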
Fig. 6. Number of alternative paths discovered (R = 5, T = 4).

Fig. 7. Number of alternative paths discovered (R = 20, T = 6).

V. CONCLUSIONS AND FUTURE WORK

We present a scheme called PROBE to enable flow-based reroute in IP networks that use ECMP. ECMP performs coarse and static load balancing and thus cannot achieve even traffic distribution. The proposed scheme complements ECMP with dynamic flow-based reroute, thus achieving good load balancing and improving resource utilization. A prominent advantage of PROBE is that although it performs per-flow reroute, it does not require flow tables in IP routers. Everything is implemented in software that can be easily installed on host servers and controllers. Therefore, PROBE is a low-complexity, low-cost, and low-maintenance scheme, especially for data center applications. Since PROBE does not introduce any modifications to the routers, it can be used to upgrade existing data center networks for performance enhancement. This paper focuses on the design and feasibility of PROBE; in our future research, we will investigate the application of PROBE for network performance improvement and will build a testbed to demonstrate the design.

REFERENCES

[1] M. Al-Fares, S. Radhakrishnan, B. Raghavan, N. Huang, and A. Vahdat. Hedera: Dynamic flow scheduling for data center networks. In Proc. of the Networked Systems Design and Implementation (NSDI) Symposium, 2010.
[2] B. Augustin, X. Cuvellier, B. Orgogozo, F. Viger, T. Friedman, M. Latapy, C. Magnien, and R. Teixeira. Avoiding traceroute anomalies with Paris traceroute. In IMC, 2006.
[3] M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker. Ethane: taking control of the enterprise. SIGCOMM Comput. Commun. Rev., 37(4):1-12, 2007.
[4] B. Claise. Cisco Systems NetFlow Services Export Version 9. RFC 3954 (Informational), Oct. 2004.
[5] K. Egevang and P. Francis. The IP Network Address Translator (NAT). RFC 1631 (Informational), May 1994. Obsoleted by RFC 3022.
[6] A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A. Maltz, P. Patel, and S. Sengupta. VL2: a scalable and flexible data center network. In SIGCOMM '09: Proceedings of the ACM SIGCOMM 2009 conference on Data communication, pages 51-62, New York, NY, USA, 2009. ACM.
[7] C. Guo, G. Lu, D. Li, H. Wu, X. Zhang, Y. Shi, C. Tian, Y. Zhang, and S. Lu. BCube: a high performance, server-centric network architecture for modular data centers. In SIGCOMM '09: Proceedings of the ACM SIGCOMM 2009 conference on Data communication, pages 63-74, New York, NY, USA, 2009. ACM.
[8] C. Guo, H. Wu, K. Tan, L. Shi, Y. Zhang, and S. Lu. DCell: A scalable and fault-tolerant network structure for data centers. In SIGCOMM '08: Proceedings of the ACM SIGCOMM 2008 conference on Data communication, pages 75-86, New York, NY, USA, 2008. ACM.
[9] C. Hopps. Analysis of an Equal-Cost Multi-Path Algorithm. RFC 2992 (Informational), Nov. 2000.
[10] S. Kandula, S. Sengupta, A. Greenberg, P. Patel, and R. Chaiken. The nature of data center traffic: measurements & analysis. In Proceedings of the 9th ACM SIGCOMM conference on Internet measurement (IMC '09), New York, NY, USA, 2009. ACM.
[11] Y. Liao, D. Yin, and L. Gao. DPillar: Scalable Dual-Port Server Interconnection for Data Center Networks. In IEEE ICCCN, 2010.
[12] G. Malkin. Traceroute Using an IP Option. RFC 1393 (Experimental), Jan. 1993.
[13] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner.
OpenFlow: enabling innovation in campus networks. SIGCOMM Comput. Commun. Rev., 38(2):69-74, 2008.
[14] J. Moy. OSPF Version 2. RFC 2328 (Standard), Apr. 1998.
[15] R. Niranjan Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat. PortLand: a scalable fault-tolerant layer 2 data center network fabric. In SIGCOMM '09: Proceedings of the ACM SIGCOMM 2009 conference on Data communication, pages 39-50, New York, NY, USA, 2009. ACM.
[16] P. Phaal, S. Panchen, and N. McKee. InMon Corporation's sFlow: A Method for Monitoring Traffic in Switched and Routed Networks. RFC 3176 (Informational), Sept. 2001.
[17] M. C. Toren. tcptraceroute.