WHITE PAPER. Engineered Elephant Flows for Boosting Application Performance in Large-Scale CLOS Networks. March 2014
Abstract. Data center networks built using a Layer 3 ECMP-based CLOS topology deliver excellent bandwidth and latency characteristics, but they can also allow short-lived, latency-sensitive mice flows to be choked by long-lived, bandwidth-hungry elephant flows, degrading application performance. A representative traffic engineering application is implemented on available hardware switches using Broadcom's OpenFlow-compliant OpenFlow Data Plane Abstraction (OF-DPA) specification and software, together with an open source OpenFlow agent (Indigo 2.0) and controller (Ryu). This application demonstrates the ability to implement scalable and workload-sensitive Layer 3 ECMP-based CLOS networks using OpenFlow.

March 2014. Broadcom Corporation, 5300 California Avenue, Irvine, CA. All rights reserved.
Introduction

OpenFlow, as defined by the Open Networking Foundation (ONF), promises to accelerate innovation in networking. Since the introduction of OpenFlow, the quality and quantity of networking innovation and research have accelerated dramatically. Much of this work has focused on data center traffic engineering, with emerging use cases evolving for carrier and service provider networks.

OpenFlow v1.0 has been used in both academic research and commercial networks. It relies on a single-table model based on Ternary Content Addressable Memory (TCAM). This has limited scale and deployments, because TCAM is an expensive resource in switch ASICs when measured by the size of network topologies and the number of flow entries that can be supported. OpenFlow v1.3.1 enables use of the multiple-table packet processing pipelines available in existing Broadcom StrataXGS switch ASICs, alleviating the cost-effective scaling challenges of OpenFlow v1.0-based switches.

In the recent past, OpenFlow research efforts have concentrated on algorithm development and the use of software-based switches. While it is desirable to prove benefits using real networks built with physical switches, the lack of representative hardware switches has limited data gathering and research, and most efforts have therefore used simulated network environments. Many significant issues that arise with scale and performance measurement in real networks carrying real traffic remain unexplored.

Broadcom's OpenFlow Data Plane Abstraction (OF-DPA) is based on OpenFlow v1.3.1 and addresses the challenges of cost-effective scalability and support for commercial and research use cases with available switch hardware. It does so by enabling OpenFlow controllers to program the switch hardware that is deployed in the majority of data centers today. OF-DPA provides a hardware abstraction layer (HAL) for Broadcom StrataXGS switches, along with an API library for OpenFlow-compliant agents. It is intended both for real network deployments and for enabling research networks. OF-DPA is available as an easily deployable development package as well as a turnkey reference package that can be installed on widely available white box switches to enable network innovation.

This white paper demonstrates the advantages of using an OF-DPA-based OpenFlow switch in a representative traffic engineering application. The application is based on a real implementation on available hardware switches using OF-DPA software, an open source OpenFlow agent and controller, and the referenced research papers.

This paper is organized as follows. Data Centers and ECMP-Based CLOS Networks provides background on the form of data center network infrastructure used in this paper. Congestion Scenario with Mice and Elephant Flows describes the problem. Using OpenFlow to Build ECMP Networks and Using OpenFlow to Traffic Engineer Elephant Flows discuss the solution approaches. OpenFlow Hardware and Software Implementation describes the hardware and software used to implement them.

This document assumes familiarity with OpenFlow and related Software Defined Networking (SDN) technologies and concepts (centralization, open APIs, separation of control and data planes, switch, agent and controller, etc.). The OF-DPA v1.0 specification and the turnkey reference package are openly available on GitHub.
Data Centers and ECMP-Based CLOS Networks

Data centers and evolving carrier networks require a scalable infrastructure capable of supporting multiple application workloads. To achieve this goal, a number of large-scale data center operators have chosen to deploy folded CLOS network topologies because of their non-blocking multipath and resiliency properties. Such topologies are desirable for applications that generate east-west (server-to-server or server-to-storage) traffic. A representative two-level folded CLOS topology is shown in Figure 1. In this simple leaf-and-spine design, servers are attached to edge (top-of-rack) or leaf switches, which are in turn interconnected by aggregation or spine switches.

FIGURE 1: Two-Level Folded CLOS Network Topology Example

Servers in racks connect to the network through the leaf switches. Connectivity to the WAN and to other data centers is provided by core switches that connect the CLOS network to WAN services. Firewall, load balancing, and other specialized network services are provided by appliances connected to the CLOS network.

A common method for utilizing the inherent multipath in a large CLOS network is to use Layer 3 routing everywhere, including at the leaf switches. With each point-to-point link between switches in a separate subnet, standard equal cost multipath (ECMP) routing can be used to distribute traffic across links. This is attractive since cost-effective Layer 3 leaf and spine switches provide hardware-based hashing for assigning packets to links and support standard distributed routing protocols such as OSPF or BGP. Layer 3 routing in a large CLOS network requires switches to maintain large numbers of subnet routes, but relatively few Layer 2 addresses, for directly attached switches and servers. Many cloud data centers use Layer 2 overlay tunnels to create virtual tenant or application networks that allow tenant or application servers to be deployed anywhere, independent of the underlay topology.
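To make the hash-based link assignment concrete, the small sketch below maps a flow's 5-tuple to one of several equal-cost uplinks. The hash function and field selection are illustrative assumptions, not the algorithm used by any particular switch ASIC.

```python
import zlib

def ecmp_member(src_ip, dst_ip, proto, src_port, dst_port, num_uplinks):
    """Pick an equal-cost uplink for a flow by hashing its 5-tuple.

    Real switch ASICs use their own hash functions and field sets; CRC32 over
    the concatenated fields is used here only to illustrate that every packet
    of a given flow maps deterministically to the same member link.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_uplinks

# Different flows between the same pair of hosts may land on different uplinks,
# but all packets of one flow always take the same uplink.
print(ecmp_member("10.0.1.11", "10.0.2.22", 6, 33333, 80, 6))
print(ecmp_member("10.0.1.11", "10.0.2.22", 6, 44444, 80, 6))
```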
Congestion Scenario with Mice and Elephant Flows

While CLOS networks built using Layer 3 routing and ECMP deliver excellent bandwidth and latency characteristics for east-west traffic in the data center, the main problem with this approach is that such networks are insensitive to workload. Depending on the application, data center traffic tends to exhibit a mix of latency-sensitive, short-lived flows (referred to colloquially as mice) and bandwidth-intensive, longer-lived flows (elephants) for which throughput is more important than latency. Applications such as Hadoop (Reference [10]) that use MapReduce techniques produce traffic patterns that include both elephants and mice. Elephant flows may also consist of large data transfer transactions and peer-to-peer file sharing, while mice flows arise from online gaming, small web requests, multimedia broadcasting, and voice over IP. Typically, fewer than 10% of flows are elephants, but they account for over 80% of traffic (Reference [1]).

ECMP is designed to optimize for latency, and hash-based multipath selection assumes traffic is distributed more or less uniformly (Reference [2] and Reference [3]). Mice flows tend to be very bursty and short-lived, and they perform well with stateless, hash-based multipathing over equal cost paths. However, stateless, hash-based multipathing can cause long-lived TCP flows to pile up on a particular path even though other paths might be free, filling network buffers along that path and causing congestion events.

FIGURE 2: ECMP CLOS Network Topology with Elephant Flows

Figure 2 shows a scenario where an ECMP-based network is congested because of elephant flows. S1 through S4 are sources for the flows in the network, while D1 through D4 are the destinations. As can be seen in Figure 2, the red and green flows share links, as do the yellow and purple flows. At the same time, many links are not utilized at all. As a result, long-lived flows can constrain the short-lived flows and significantly degrade application performance, while a large percentage of the bisection bandwidth is wasted.

The answer is to augment ECMP with some form of traffic engineering. Ideally, link assignments would take into account link loading and internal switch buffer state to avoid congestion. But even with hardware support for load balancing, the control plane functionality needed is not well understood. In general, it has proven difficult to design algorithms that adapt quickly to changing traffic patterns based solely on local knowledge. As a result, traffic engineering has until recently remained an interesting research area.
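The likelihood of such collisions is easy to see with a small simulation. The sketch below is illustrative only and assumes purely random, independent hash placement of a handful of elephant flows across equal-cost uplinks.

```python
import random

def collision_probability(num_elephants=4, num_uplinks=6, trials=100_000):
    """Estimate how often at least two elephants hash onto the same uplink."""
    collisions = 0
    for _ in range(trials):
        picks = [random.randrange(num_uplinks) for _ in range(num_elephants)]
        if len(set(picks)) < num_elephants:
            collisions += 1
    return collisions / trials

# With 4 elephants and 6 uplinks, random placement collides roughly 70% of the
# time, even though enough uplinks exist to keep every elephant separate.
print(f"collision probability: {collision_probability():.2f}")
```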
Elephant Flows and Traffic Engineering

Traffic engineering for elephant flows using OpenFlow has been extensively reported in the research literature (References [4] through [8]). These approaches generally use two steps: identifying the problem flows, and modifying the forwarding to treat them differently.

While identification of elephant flows is not the focus of this paper, the detection techniques are worth summarizing. The most reliable way is simply to have the applications declare them a priori, as in Reference [9] or Reference [11]. For example, MapReduce phases are orchestrated by control nodes capable of estimating traffic demands in advance, including addresses and other identifying information. Another way is to dynamically detect long-lived flows by maintaining per-flow traffic statistics in the network and declaring a flow to be an elephant when its byte count exceeds some threshold. Alternatively, traffic sampling can be used, for example with sFlow (Reference [12]). Passive traffic snooping approaches tend to be less reliable in that the detection time has been reported to vary from the order of 100 ms to many seconds due to polling delays; by the time the flows are identified, congestion events may already have taken place. A variant of this is to collect byte counts at the end hosts by monitoring socket buffers (Reference [5]), declaring a flow to be an elephant if it transfers more than 100 MB, and then marking the packets that belong to elephant flows (e.g., using the DSCP field in the IP header).

Once identified, OpenFlow can be used to give elephant flows differential forwarding treatment. Hedera (Reference [4]) load-balanced the elephant flows as a separate aggregate from the mice flows, and found it acceptable to load-balance the mice randomly using hashing. Since Hedera used OpenFlow 1.0, this was applied at the controller for every new flow. Hedera used OpenFlow both for identifying and placing elephant flows, while Mahout (Reference [5]) used end-host detection with in-band signaling of elephant flows to the OpenFlow controller at the edge OpenFlow switch.
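As an illustration of the end-host, byte-count-based detection idea summarized above (in the spirit of Reference [5]), the following sketch flags a flow as an elephant once it has sent more than 100 MB. The data structures and the marking hook are assumptions made for the example; a real implementation would monitor socket buffers in the kernel and mark packets via the DSCP field.

```python
from collections import defaultdict

ELEPHANT_THRESHOLD = 100 * 1024 * 1024  # 100 MB, per the end-host scheme above

class ElephantDetector:
    """Per-flow byte counting with a simple threshold."""

    def __init__(self, threshold=ELEPHANT_THRESHOLD):
        self.threshold = threshold
        self.bytes_sent = defaultdict(int)
        self.elephants = set()

    def account(self, flow, nbytes):
        """flow is a 5-tuple; returns True the first time it becomes an elephant."""
        self.bytes_sent[flow] += nbytes
        if flow not in self.elephants and self.bytes_sent[flow] > self.threshold:
            self.elephants.add(flow)
            return True  # caller would now mark/signal this flow to the controller
        return False

detector = ElephantDetector()
flow = ("10.0.1.11", "10.0.2.22", 6, 33333, 5001)
for _ in range(120):                      # 120 x 1 MB chunks crosses the threshold
    if detector.account(flow, 1_000_000):
        print("flow promoted to elephant:", flow)
```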
Using OpenFlow to Build ECMP Networks

OpenFlow v1.0 uses only a single flow table with packet editing and forwarding actions in an action list. With OpenFlow v1.0, wildcard flow entries can be used to forward packets as macro flows with little or no controller intervention. But for OpenFlow v1.0 to support multipath load balancing, the controller must reactively place each flow independently as a separate micro flow. As a result, large numbers of flow entries are required, as well as significantly more transactions with the controller. Since OpenFlow v1.0 implementations on commodity switches can only use a TCAM for load balancing, the total number of flows required can easily exceed what relatively low-cost switches provide.

OpenFlow v1.3.1, by contrast, uses multiple flow tables and, more significantly, provides the group table, which can be used to leverage the multipath hashing capabilities in switch hardware. The OpenFlow v1.3.1 select group type is designed to map directly to hardware ECMP group tables. As a result, most forwarding can be handled using macro flows with significantly fewer entries in the flow table. In addition, multiple tables can be used to minimize the cross-product explosion effect of a single table.

With OpenFlow Data Plane Abstraction (OF-DPA), an OpenFlow v1.3.1 controller can set up the routes and ECMP group forwarding for all of the infrastructure switches. In particular, it can populate the routing tables and the select groups for multipath. In this scenario, the controller uses LLDP to map the topology and detect link failures. The controller also polls switches for port statistics and keeps track of the overall health of the network.

The OF-DPA flow tables and group tables used for this application are shown in Figure 3. They are programmed from an OpenFlow controller using standard OpenFlow protocol messages. The VLAN table is used to allow particular VLANs on a port or to assign a VLAN to untagged packets. The Termination MAC table is used to recognize the router MAC. Forwarding rules are installed in the Routing table after the output groups are defined. The L2 Interface group entries define the tagging policy for egress member ports, the L3 Unicast group entries provide the packet editing actions for the next-hop header, and the L3 ECMP group entry selects an output member based on hardware hashing.

FIGURE 3: OF-DPA Programming Pipeline for ECMP (flow tables: Ingress Port, VLAN, Termination MAC, Routing, ACL Policy; group chain: an L3 ECMP select group whose buckets reference L3 Unicast indirect groups, each of which references an L2 Interface indirect group and a physical port)

One of the advantages of using OF-DPA is that it can install customized ECMP select groups. For example, rather than being limited to the fixed N-way ECMP groupings that would be installed by a distributed routing protocol, the OpenFlow controller can use OF-DPA to install finer-grained groups with buckets for different sets of ECMP members, e.g., it can assign groups to flows. It can also customize the select groups to realize different weights for different members.
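To make the programming model concrete, the following sketch shows how a Ryu (OpenFlow 1.3) controller might install the group chain and Routing table entry described above for one leaf switch. The group ID encodings, table IDs, and field choices are illustrative assumptions made for this sketch; the normative values come from the OF-DPA specification.

```python
def install_ecmp_forwarding(datapath, uplinks, next_hops, route, ecmp_group_id):
    """uplinks: list of port numbers; next_hops: list of (router_mac, nh_mac, vlan);
    route: (ipv4_dst, mask) tuple reached via the ECMP select group."""
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser

    unicast_ids = []
    for port, (router_mac, nh_mac, vlan) in zip(uplinks, next_hops):
        # L2 Interface group: egress tagging policy and output port
        # (assumed group ID encoding: VLAN in the upper bits, port in the lower).
        l2_id = (vlan << 16) | port
        l2_bucket = parser.OFPBucket(actions=[parser.OFPActionOutput(port)])
        datapath.send_msg(parser.OFPGroupMod(
            datapath, ofp.OFPGC_ADD, ofp.OFPGT_INDIRECT, l2_id, [l2_bucket]))

        # L3 Unicast group: rewrite next-hop header, decrement TTL, chain to L2 group.
        l3_id = 0x20000000 | port          # assumed "L3 Unicast" type prefix
        l3_bucket = parser.OFPBucket(actions=[
            parser.OFPActionSetField(eth_src=router_mac),
            parser.OFPActionSetField(eth_dst=nh_mac),
            parser.OFPActionDecNwTtl(),
            parser.OFPActionGroup(l2_id)])
        datapath.send_msg(parser.OFPGroupMod(
            datapath, ofp.OFPGC_ADD, ofp.OFPGT_INDIRECT, l3_id, [l3_bucket]))
        unicast_ids.append(l3_id)

    # L3 ECMP select group: one bucket per uplink, member chosen by hardware hashing.
    ecmp_buckets = [parser.OFPBucket(weight=1,
                                     actions=[parser.OFPActionGroup(gid)])
                    for gid in unicast_ids]
    datapath.send_msg(parser.OFPGroupMod(
        datapath, ofp.OFPGC_ADD, ofp.OFPGT_SELECT, ecmp_group_id, ecmp_buckets))

    # Routing table entry (macro flow) pointing the subnet at the ECMP group.
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=route)
    inst = [parser.OFPInstructionActions(
        ofp.OFPIT_WRITE_ACTIONS, [parser.OFPActionGroup(ecmp_group_id)])]
    datapath.send_msg(parser.OFPFlowMod(
        datapath=datapath, table_id=30, priority=100,   # assumed Routing table ID
        match=match, instructions=inst))
```

The key point of the sketch is the chaining: a single Routing table entry per destination subnet references one select group, and the hardware, not the controller, picks the member for each packet.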
To illustrate the scalability of an OpenFlow v1.3.1 switch in terms of the table entries required, consider the following example: a leaf (top-of-rack) switch with 48 downlink ports (to servers or hosts) and six uplink ports (to spine switches) in an ECMP-based CLOS network. The leaf switch processes anywhere from 1K to 192K flows (all flows here are unidirectional). Table 1 and Figure 4 show the number of switch table entries required when using a single-table (TCAM) OpenFlow v1.0 implementation versus a multi-table OpenFlow v1.3.1 implementation using OF-DPA.

TABLE 1: Leaf Switch for an ECMP-based CLOS Network (columns: Hosts; Ingress Ports; IP Destinations; L4 Source Ports; Connections (flows); Egress (Uplink) Ports; VLAN; Term MAC; ECMP Group; L3 Unicast Group; L2 Interface Group; L3 Routing; OpenFlow 1.0 (TCAM) entries; OpenFlow 1.3.1 with OF-DPA entries)

FIGURE 4: Leaf Switch for Building an ECMP-based CLOS Network (number of table entries, log scale, versus number of flows: OpenFlow 1.0 versus OpenFlow 1.3.1 with OF-DPA)
As is evident from Table 1 and Figure 4, when using the OpenFlow v1.0 single-table implementation, the switch table size requirement increases significantly as the number of flows increases, whereas with the OpenFlow v1.3.1 implementation using OF-DPA, the switch's table resources are utilized with much higher efficiency. For example, in a scenario with 48 hosts (servers), 16K Layer 4 source ports, and 12 IP destinations (resulting in about 196K flows), the number of table entries required with OpenFlow v1.3.1 and OF-DPA is only 74, using economical SRAM-based tables, compared to 196K entries required with OpenFlow v1.0 using expensive TCAM-based tables. This is a remarkable improvement in the utilization of switch hardware resources, enabling much higher scale and cost-effectiveness.
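The arithmetic behind this example can be checked directly from the numbers quoted above. The short calculation below assumes, as stated, that each micro flow needs its own entry in an OpenFlow v1.0 single-table design, while the OF-DPA entry count (74, per the text) scales with ports and next hops rather than with flows.

```python
# Worked arithmetic for the example above.
l4_src_ports = 16 * 1024
ip_destinations = 12
micro_flows = l4_src_ports * ip_destinations      # one TCAM entry per flow in OF 1.0
ofdpa_entries = 74                                 # figure stated in the text: VLAN,
                                                   # Termination MAC, Routing and group
                                                   # entries that scale with port count
print(f"OpenFlow 1.0 TCAM entries : {micro_flows}")            # 196,608 ~= 196K
print(f"OF-DPA table entries      : {ofdpa_entries}")
print(f"reduction factor          : {micro_flows / ofdpa_entries:,.0f}x")
```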
Using OpenFlow to Traffic Engineer Elephant Flows

The example in the previous section showed how a large ECMP-based CLOS network can be built to process a large number of flows while conserving switch table entry resources. However, as described earlier, applying ECMP-based load balancing to both elephant and mice flows is not effective. OpenFlow v1.3.1 with OF-DPA also makes it straightforward to traffic engineer the elephant flows. As a first-order solution, select group multipath forwarding can be used for the mice flows. Only the elephant flows, which typically comprise fewer than 10% of all flows, need to be individually placed by the controller. Furthermore, with application orchestrators identifying elephant flows a priori to the controller, elephant flow placement can be done before traffic appears on the network.

As shown in Figure 5, these redirection flows would be placed as entries in the Routing table if the criterion is the destination host, or in the ACL Policy table for application- or source-based selection. They can forward directly to an L3 Unicast group entry or to an engineered L3 ECMP group entry. The example below is one where the elephant flows are redirected using application- or source-based selection criteria, and therefore requires entries in the ACL Policy table.

TABLE 2: Leaf Switch for Engineered Elephant Flows in an ECMP-based CLOS Network (columns: Hosts; Ingress Ports; IP Destinations; L4 Source Ports; Connections (flows); Elephant Flows (10%); Egress (Uplink) Ports; VLAN; Term MAC; L3 ECMP Group; L3 Unicast Group; L2 Interface Group; Routing; ACL (TCAM); OpenFlow 1.0 (TCAM) entries; OpenFlow 1.3.1 with OF-DPA entries)
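As a sketch of what the controller side of this redirection might look like, the snippet below installs a high-priority ACL Policy entry for one identified elephant flow using the Ryu OpenFlow 1.3 API. The ACL Policy table ID, priority, and target group ID are illustrative assumptions rather than values taken from the OF-DPA specification.

```python
ACL_POLICY_TABLE = 60          # assumed OF-DPA ACL Policy table ID
ELEPHANT_PRIORITY = 1000       # must beat the Routing-table macro flow

def pin_elephant_flow(datapath, src_ip, dst_ip, l4_src, l4_dst, target_group_id):
    """Redirect a single TCP elephant flow to a chosen L3 Unicast (or engineered
    L3 ECMP) group, overriding the default hashed placement for this flow only."""
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                            ipv4_src=src_ip, ipv4_dst=dst_ip,
                            tcp_src=l4_src, tcp_dst=l4_dst)
    inst = [parser.OFPInstructionActions(
        ofp.OFPIT_WRITE_ACTIONS, [parser.OFPActionGroup(target_group_id)])]
    datapath.send_msg(parser.OFPFlowMod(
        datapath=datapath, table_id=ACL_POLICY_TABLE,
        priority=ELEPHANT_PRIORITY, match=match, instructions=inst))
```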
An OpenFlow v1.3.1 switch implemented with OF-DPA scales well for this traffic-engineered scenario. All the mice flows can use the inexpensive SRAM-based hash tables, while only the elephant flows need entries in the TCAM-based ACL Policy table. Table 2 and the graph in Figure 5 show the switch table entry requirements for OpenFlow v1.0 and for OpenFlow v1.3.1 with OF-DPA when implementing traffic engineering of elephant flows in an ECMP-based CLOS network.

FIGURE 5: Leaf Switch for Engineered Elephant Flows in an ECMP-based CLOS Network (number of table entries versus number of flows: OpenFlow 1.0 (TCAM) versus OpenFlow 1.3.1 with OF-DPA)

As is evident from Table 2 and Figure 5, when using the OpenFlow v1.0 single-table implementation, the switch table size requirement grows even further when the TCAM entries needed to engineer the elephant flows are added. With the OpenFlow v1.3.1-based implementation using OF-DPA, on the other hand, the switch's table resources continue to be utilized with much higher efficiency. For example, in a scenario with 48 hosts (servers), 6K Layer 4 source ports, and six IP destinations (resulting in about 37K flows, of which about 3.7K are elephant flows), the number of table entries required with OpenFlow v1.3.1 and OF-DPA is about 3.7K, using both TCAM- and SRAM-based tables, compared to 41K entries required with OpenFlow v1.0 using only TCAM-based tables. In the case where the elephant flows are redirected using the destination-host-based selection criterion, the extra, costly ACL Policy table entries are not required; the inexpensive Routing table can be used instead. In both cases, not only is application performance improved, but there is also an order-of-magnitude improvement in the utilization of switch hardware resources, enabling much higher scale and cost-effectiveness.
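For completeness, a corresponding sketch of the destination-host-based variant follows. As before, the Routing table ID and group ID are illustrative assumptions; the point is that a host (/32) route is more specific than the subnet macro flow, so only traffic to the elephant destination leaves the default hashed ECMP placement.

```python
ROUTING_TABLE = 30             # same assumed OF-DPA Routing table ID as earlier

def pin_elephant_destination(datapath, dst_ip, engineered_group_id):
    """Send all traffic for one elephant destination host to an engineered group."""
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800,
                            ipv4_dst=(dst_ip, '255.255.255.255'))
    inst = [parser.OFPInstructionActions(
        ofp.OFPIT_WRITE_ACTIONS,
        [parser.OFPActionGroup(engineered_group_id)])]
    datapath.send_msg(parser.OFPFlowMod(
        datapath=datapath, table_id=ROUTING_TABLE, priority=200,
        match=match, instructions=inst))
```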
OpenFlow Hardware and Software Implementation

The Layer 3 ECMP CLOS network and the engineered elephant flows application are implemented using open hardware and software solutions. The switch hardware system is based on the Broadcom StrataXGS Trident II switch-based OCP (Open Compute Project) open switch specification (draft).

FIGURE 6: OpenFlow Software and Hardware Implementation

The switch system includes an open source Ubuntu Linux distribution, an open source Open Network Install Environment (ONIE) based installer, the switch SDK, OF-DPA software, and the open OF-DPA API. An open source reference agent (based on Indigo 2.0) is integrated with the OF-DPA API. The northbound interface of the agent is integrated with the open source Ryu OpenFlow controller. The ECMP CLOS provisioning and engineered elephant flows applications are built into the Ryu OpenFlow controller. Figure 6 shows the hardware and software implementation.
Summary and Future Work

OpenFlow v1.3.1 with OF-DPA enables implementation of Layer 3 ECMP-based CLOS networks that can scale to a large number of nodes while delivering high cross-sectional bandwidth for east-west traffic. Compared to OpenFlow v1.0, OpenFlow v1.3.1 with OF-DPA delivers an order-of-magnitude improvement in how switch table resources are utilized, enabling much higher scale and performance. The ECMP-based load balancing features available in switch hardware can be used for the larger number of latency-sensitive, short-lived mice flows in data center networks.

Layer 3 ECMP-based CLOS networks are not sensitive to workloads and their specific performance requirements. Elephant flows, unless specifically traffic engineered, can cause significant degradation of application performance. OpenFlow v1.3.1 with OF-DPA enables traffic engineering of elephant flows in large-scale CLOS networks, improving application performance while conserving switch table resources to deliver greater scale.

MiceTrap (Reference [13]) considers traffic engineering the mice flows. It makes the case that random ECMP combined with preferential placement of elephant flows can still degrade latency-sensitive mice when short flows are temporarily assigned to paths carrying elephants. The proposed solution is to adaptively use weighted routing for spreading traffic. Such a capability can be enabled by using OF-DPA to engineer custom OpenFlow select groups with different sets of bucket members. Bucket weights can be supported in OF-DPA by defining more than one L3 ECMP group entry bucket referencing the same L3 Unicast group entry, so that output ports where elephant flows have been placed can be assigned lower weights when randomly placing mice flows.
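A minimal sketch of this weighted-bucket technique, again using the Ryu OpenFlow 1.3 API and assuming illustrative group IDs and weights, is shown below. It approximates per-member weights by repeating buckets that reference the same L3 Unicast group entry, so uplinks carrying elephants receive fewer buckets and therefore fewer hashed mice flows.

```python
def install_weighted_ecmp(datapath, group_id, member_weights):
    """member_weights: dict mapping L3 Unicast group ID -> integer weight."""
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser
    buckets = []
    for unicast_gid, weight in member_weights.items():
        # One bucket per unit of weight, all pointing at the same unicast group.
        buckets.extend(parser.OFPBucket(weight=1,
                                        actions=[parser.OFPActionGroup(unicast_gid)])
                       for _ in range(weight))
    datapath.send_msg(parser.OFPGroupMod(
        datapath, ofp.OFPGC_ADD, ofp.OFPGT_SELECT, group_id, buckets))

# Example: three uplink unicast groups, where the second uplink already carries
# an elephant flow and should receive fewer mice (all IDs are illustrative):
# install_weighted_ecmp(dp, 0x70000001,
#                       {0x20000001: 3, 0x20000002: 1, 0x20000003: 3})
```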
Acronyms and Abbreviations

In most cases, acronyms and abbreviations are defined on first use and are also defined in the table below.

ASIC: Application Specific Integrated Circuit. Refers to networking switch silicon in this white paper.

CLOS: A Clos network is a kind of multistage switching network, first formalized by Charles Clos; also known as a fat tree topology. CLOS-based networks provide full cross-sectional bandwidth to nodes connected to the network.

ECMP: Equal Cost Multi-Path, used for multipath routing.

Flow: Sequence of packets with the same selection of header field values. Flows are unidirectional.

Flow Table: OpenFlow flow table as defined in the OpenFlow v1.3.1 specification.

Flow Entry: Entry in an OpenFlow flow table, with its match fields and instructions.

Group Table: OpenFlow group table as defined in the OpenFlow v1.3.1 specification.

Group Entry: Entry in an OpenFlow group table, with its Group Identifier, Group Type, Counters, and Action Buckets fields.

Leaf Switch: A top-of-rack (TOR) switch in a CLOS network built using spine and leaf switches.

OpenFlow protocol: A communications protocol between the control and data planes of supported network devices, as defined by the Open Networking Foundation (ONF).

Software-Defined Networking: User-programmable control plane. In this document, Software-Defined Networking (SDN) refers to the user's ability to customize packet processing and forwarding.

Spine Switch: An aggregation switch connected to top-of-rack switches in a CLOS network built using spine and leaf switches.

SRAM: Static Random-Access Memory, a type of semiconductor memory used for Layer 2, Layer 3, and hash table lookups in switch ASICs.

TCAM: Ternary Content Addressable Memory, used for wide wildcard match rule storage and processing in switch ASICs.

Traffic Engineering: Classification of traffic into distinct types and application of quality of service or load balancing policies to those traffic types.
References

[1] Kandula, S.; Sengupta, S.; Greenberg, A. G.; Patel, P. & Chaiken, R. (2009), 'The nature of data center traffic: measurements & analysis', in Anja Feldmann & Laurent Mathy, ed., Internet Measurement Conference, ACM.

[2] Thaler, D. & Hopps, C. (2000), 'Multipath Issues in Unicast and Multicast Next-Hop Selection', IETF RFC 2991.

[3] Hopps, C. (2000), 'Analysis of an Equal-Cost Multi-Path Algorithm', IETF RFC 2992.

[4] Al-Fares, M.; Radhakrishnan, S.; Raghavan, B.; Huang, N. & Vahdat, A. (2010), 'Hedera: Dynamic Flow Scheduling for Data Center Networks', in NSDI, USENIX Association.

[5] Curtis, A. R.; Kim, W. & Yalagandula, P. (2011), 'Mahout: Low-overhead datacenter traffic management using end-host-based elephant detection', in INFOCOM, IEEE.

[6] Benson, T.; Anand, A.; Akella, A. & Zhang, M. (2011), 'MicroTE: fine grained traffic engineering for data centers', in Kenjiro Cho & Mark Crovella, ed., CoNEXT, ACM.

[7] Mogul, J. C.; Tourrilhes, J.; Yalagandula, P.; Sharma, P.; Curtis, A. R. & Banerjee, S. (2010), 'DevoFlow: cost-effective flow management for high performance enterprise networks', in Geoffrey G. Xie; Robert Beverly; Robert Morris & Bruce Davie, ed., HotNets, ACM.

[8] Farrington, N.; Porter, G.; Radhakrishnan, S.; Bazzaz, H. H.; Subramanya, V.; Fainman, Y.; Papen, G. & Vahdat, A. (2010), 'Helios: a hybrid electrical/optical switch architecture for modular data centers', in Shivkumar Kalyanaraman; Venkata N. Padmanabhan; K. K. Ramakrishnan; Rajeev Shorey & Geoffrey M. Voelker, ed., SIGCOMM, ACM.

[9] Seedorf, J. & Burger, E. (2009), 'Application-Layer Traffic Optimization (ALTO) Problem Statement', IETF RFC 5693.

[10] White, T. (2009), Hadoop: The Definitive Guide - MapReduce for the Cloud, O'Reilly.

[11] Jain, S. et al. (2013), 'B4: Experience with a Globally-Deployed Software Defined WAN', in SIGCOMM, ACM.

[12] Phaal, P.; Panchen, S. & McKee, N. (2001), 'InMon Corporation's sFlow: A Method for Monitoring Traffic in Switched and Routed Networks', IETF RFC 3176.

[13] Trestian, R.; Muntean, G.-M. & Katrinis, K. (2013), 'MiceTrap: Scalable traffic engineering of datacenter mice flows using OpenFlow', in Filip De Turck; Yixin Diao; Choong Seon Hong; Deep Medhi & Ramin Sadre, ed., IEEE Symposium on Integrated Network Management (IM 2013), IEEE.
