Brocade Data Center Fabric Architectures


WHITE PAPER

Brocade Data Center Fabric Architectures
Building the foundation for a cloud-optimized data center.

TABLE OF CONTENTS
Evolution of Data Center Architectures
Data Center Networks: Building Blocks
Building Data Center Sites with Brocade VCS Fabric Technology
Building Data Center Sites with Brocade IP Fabric
Building Data Center Sites with Layer 2 and Layer 3 Fabrics
Scaling a Data Center Site with a Data Center Core
Control Plane and Hardware Scale Considerations
Choosing an Architecture for Your Data Center
Network Virtualization Options
Turnkey and Programmable Automation
About Brocade

Based on the principles of the New IP, Brocade is building on the proven success of the Brocade VDX platform by expanding the Brocade cloud-optimized network and network virtualization architectures and delivering new automation innovations to meet customer demand for higher levels of scale, agility, and operational efficiency. The scalable and highly automated Brocade data center fabric architectures described in this white paper make it easy for infrastructure planners to architect, automate, and integrate with current and future data center technologies while they transition to their own cloud-optimized data center on their own time and terms.

This paper helps network architects, virtualization architects, and network engineers make informed design, architecture, and deployment decisions that best meet their technical and business objectives. The following topics are covered in detail:

Network architecture options for scaling from tens to hundreds of thousands of servers
Network virtualization solutions that include integration with leading controller-based and controller-less industry solutions
Data Center Interconnect (DCI) options
Server-based, open, and programmable turnkey automation tools for rapid provisioning and customization with minimal effort

Evolution of Data Center Architectures
Data center networking architectures have evolved with the changing requirements of the modern data center and cloud environments. Traditional data center networks were a derivative of the 3-tier architecture prevalent in enterprise campus environments. (See Figure 1.) The tiers are defined as Access, Aggregation, and Core. The 3-tier topology was architected with the requirements of an enterprise campus in mind. A typical network access layer requirement of an enterprise campus is to provide connectivity to workstations. These enterprise workstations exchange traffic with either an enterprise data center for

business application access or with the Internet. As a result, most traffic in this network traverses in and out through the tiers of the network. This traffic pattern is commonly referred to as north-south traffic.

Figure 1: Three-Tier Architecture: Ideal for North-South Traffic Patterns Commonly Found in Client-Server Compute Models.

When compared to an enterprise campus network, the traffic patterns in a data center network are changing rapidly from north-south to east-west. Cloud applications are often multitiered and hosted at different endpoints connected to the network. The communication between these application tiers is a major contributor to the overall traffic in a data center. In fact, some very large data centers report that more than 90 percent of their overall traffic occurs between the application tiers. This traffic pattern is commonly referred to as east-west traffic.

Traffic patterns are the primary reason that data center networks need to evolve into scale-out architectures. These scale-out architectures are built to maximize the throughput for east-west traffic. (See Figure 2.) In addition to providing high east-west throughput, scale-out architectures provide a mechanism to add capacity to the network horizontally, without reducing the provisioned capacity between the existing endpoints. An example of a scale-out architecture is a leaf-spine topology, which is described in detail in a later section of this paper.

Figure 2: Scale-Out Architecture: Ideal for East-West Traffic Patterns Commonly Found with Web-Based or Cloud-Based Application Designs.

In recent years, with the changing economics of application delivery, a shift towards the cloud has occurred. Enterprises have looked to consolidate and host private cloud services. Meanwhile, application cloud services, as well as public service provider clouds, have grown at a rapid pace. With this increasing shift to the cloud, the scale of the network deployment has increased drastically.

Advanced scale-out architectures allow networks to be deployed at many multiples of the scale of a leaf-spine topology. (See Figure 3.)

In addition to traffic patterns, as server virtualization has become mainstream, newer requirements of the networking infrastructure are emerging. Because physical servers can now host several Virtual Machines (VMs), the scale requirements of the control and data planes for MAC addresses, IP addresses, and Address Resolution Protocol (ARP) tables have multiplied. Also, large numbers of physical and virtualized endpoints must support much higher throughput than a traditional enterprise environment, leading to an evolution in Ethernet standards of 10 Gigabit Ethernet (GbE), 40 GbE, 100 GbE, and beyond. In addition, the need to extend Layer 2 domains across the infrastructure and across sites to support VM mobility is creating new challenges for network architects. For multitenant cloud environments, providing traffic isolation at the networking layers and enforcing security and traffic policies for the cloud tenants and applications is a priority. Cloud-scale deployments also require the networking infrastructure to be agile in provisioning new capacity, tenants, and features, as well as in making modifications and managing the lifecycle of the infrastructure.

The remainder of this white paper describes data center networking architectures that meet the requirements for building cloud-optimized networks that address current and future needs for enterprises and service provider clouds. More specifically, this paper describes:

Example topologies and deployment models demonstrating Brocade VDX switches in Brocade VCS fabric or Brocade IP fabric architectures
Network virtualization solutions that include controller-based virtualization such as VMware NSX and controller-less virtualization using Brocade Border Gateway Protocol Ethernet Virtual Private Network (BGP-EVPN)
DCI solutions for interconnecting multiple data center sites
Open and programmable turnkey automation and orchestration tools that can simplify the provisioning of network services

Data Center Networks: Building Blocks
This section discusses the building blocks that are used to build the appropriate network and virtualization architecture for a data center site. These building blocks consist of the various elements that fit into an overall data center site deployment. The goal is to build fairly independent elements that can be assembled together, depending on the scale requirements of the networking infrastructure.

Figure 3: Example of an Advanced Scale-Out Architecture Commonly Used in Today's Large-Scale Data Centers.

Networking Endpoints
The first building blocks are the networking endpoints that connect to the networking infrastructure. These endpoints include the compute servers and storage devices, as well as network service appliances such as firewalls and load balancers. Figure 4 shows the different types of racks used in a data center infrastructure, as described below:

Infrastructure and Management Racks: These racks host the management infrastructure, which includes any management appliances or software used to manage the infrastructure. Examples of this are server virtualization management software like VMware vCenter or Microsoft SCVMM, orchestration software like OpenStack or VMware vRealize Automation, network controllers like the Brocade SDN Controller or VMware NSX, and network management and automation tools like Brocade Network Advisor. Examples of infrastructure racks are racks that host physical or virtual IP storage appliances.

Compute racks: Compute racks host the workloads for the data centers. These workloads can be physical servers, or they can be virtualized servers when the workload is made up of Virtual Machines (VMs). The compute endpoints can be single-homed or multihomed to the network.

Edge racks: The network services connected to the network are consolidated in edge racks. The role of the edge racks is to host the edge services, which can be physical appliances or VMs.

These definitions of infrastructure/management racks, compute racks, and edge racks are used throughout this white paper.

Figure 4: Networking Endpoints and Racks.

Single-Tier Topology
The second building block is a single-tier network topology that connects endpoints to the network. Because there is only one tier, all endpoints connect to this tier of the network. An example of a single-tier topology is shown in Figure 5. The single-tier switches are shown as a virtual Link Aggregation Group (vLAG) pair. The topology in Figure 5 shows the management/infrastructure, compute, and edge racks connected to a pair of switches participating in multiswitch port channeling. This pair of switches is called a vLAG pair.

The single-tier topology scales the least among all the topologies described in this paper, but it provides the best choice for smaller deployments, as it reduces the Capital Expenditure (CapEx) costs for the network in terms of the size of the infrastructure deployed. It also reduces the optics and cabling costs for the networking infrastructure.

Figure 5: Ports on Demand with a Single Networking Tier.

Design Considerations for a Single-Tier Topology
The design considerations for deploying a single-tier topology are summarized in this section.

Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this effect, the oversubscription ratios at the vLAG pair should be well understood and planned for. The north-south oversubscription at the vLAG pair is described as the ratio of the aggregate bandwidth of all the downlinks from the vLAG pair that are connected to the endpoints to the aggregate bandwidth of all the uplinks that are connected to the edge/core router (described in a later section). The north-south oversubscription dictates the proportion of traffic between the endpoints versus the traffic entering and exiting the data center site.

It is also important to understand the bandwidth requirements for the inter-rack traffic. This is especially true for all north-south communication through the services hosted in the edge racks. All such traffic flows through the vLAG pair to the edge racks and, if the traffic needs to exit, it flows back to the vLAG switches. Thus, the ratio of the aggregate bandwidth connecting the compute racks to the aggregate bandwidth connecting the edge racks is an important consideration.

Another consideration is the bandwidth of the link that interconnects the vLAG pair. With multihomed endpoints and no failures, this link should not be used for data plane forwarding. However, if there are link failures in the network, then this link may be used for data plane forwarding. The bandwidth requirement for this link depends on the redundancy design for link failures. For example, a design to tolerate up to two link failures has a 20 GbE interconnection between the Top of Rack/End of Row (ToR/EoR) switches.

Port Density and Speeds for Uplinks and Downlinks
In a single-tier topology, the uplink and downlink port density of the vLAG pair determines the number of endpoints that can be connected to the network, as well as the north-south oversubscription ratios. Another key consideration for single-tier topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX Series switches support 10 GbE, 40 GbE, and 100 GbE interfaces, which can be used for uplinks and downlinks. The choice of platform for the vLAG pair depends on the interface speed and density requirements.

Scale and Future Growth
A design consideration for single-tier topologies is the need to plan for more capacity in the existing infrastructure and more endpoints in the future. Adding more capacity between existing endpoints and vLAG switches can be done by adding new links between them. Also, any future expansion in the number of endpoints connected to the single-tier topology should be accounted for during the network design, as this requires additional ports in the vLAG switches. Another key consideration is whether to connect the vLAG switches to external networks through core/edge routers and whether to add a networking tier for higher scale. These designs require additional ports at the ToR/EoR. Multitier designs are described in a later section of this paper.
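To make the interconnect-sizing guidance above concrete, the short Python sketch below estimates the bandwidth needed on the link between the two vLAG switches from the number of simultaneous downlink failures the design should tolerate. The worst-case sizing rule is an assumption inferred from the example above (two tolerated link failures leading to a 20 GbE interconnection, assuming 10 GbE downlinks); actual designs should be validated against the expected traffic matrix.

```python
def vlag_interconnect_bandwidth_gbps(tolerated_link_failures: int,
                                     downlink_speed_gbps: float) -> float:
    """Worst-case bandwidth that may shift onto the vLAG interconnect.

    Assumes every failed downlink forces its endpoint traffic to cross the
    interconnect to reach the surviving vLAG member (a simplifying,
    worst-case assumption; not a Brocade sizing formula).
    """
    return tolerated_link_failures * downlink_speed_gbps


# Example from the text: tolerate two 10 GbE downlink failures.
required = vlag_interconnect_bandwidth_gbps(tolerated_link_failures=2,
                                            downlink_speed_gbps=10)
print(f"Interconnect should provide at least {required:.0f} Gbps")  # 20 Gbps, e.g. 2 x 10 GbE
```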
Ports on Demand Licensing
Ports on Demand licensing allows you to expand your capacity at your own pace: you can invest in a higher port density platform, yet license only a subset of the available ports on the Brocade VDX switch, the ports that you are using for current needs. This allows for an extensible and future-proof network architecture without the additional upfront cost for unused ports on the switches. You pay only for the ports that you plan to use.

Leaf-Spine Topology (Two-Tier)
The two-tier leaf-spine topology has become the de facto standard for networking topologies when building medium-scale data center infrastructures. An example of a leaf-spine topology is shown in Figure 6.

Figure 6: Leaf-Spine Topology.

The leaf-spine topology is adapted from Clos telecommunications networks. This topology is also known as the 3-stage folded Clos, with the ingress and egress stages proposed in the original Clos architecture folded together at the spine to form the leaves.

The role of the leaf is to provide connectivity to the endpoints in the network. These endpoints include compute servers and storage devices, as well as other networking devices like routers and switches, load balancers, firewalls, or any other networking endpoint, physical or virtual. As all endpoints connect only to the leaves, policy enforcement, including security, traffic path selection, Quality of Service (QoS) markings, traffic scheduling, policing, shaping, and traffic redirection, is implemented at the leaves.

The role of the spine is to provide interconnectivity between the leaves. Network endpoints do not connect to the spines. As most policy implementation is performed at the leaves, the major role of the spine is to participate in the control plane and data plane operations for traffic forwarding between the leaves.

As a design principle, the following requirements apply to the leaf-spine topology:

Each leaf connects to all the spines in the network.
The spines are not interconnected with each other.
The leaves are not interconnected with each other for data plane purposes. (The leaves may be interconnected for control plane operations such as forming a server-facing vLAG.)

These are some of the key benefits of a leaf-spine topology:

Because each leaf is connected to every spine, there are multiple redundant paths available for traffic between any pair of leaves. Link failures cause other paths in the network to be used.
Because of the existence of multiple paths, Equal-Cost Multipathing (ECMP) can be leveraged for flows traversing between pairs of leaves. With ECMP, each leaf has a number of equal-cost routes to reach destinations in other leaves equal to the number of spines in the network.
The leaf-spine topology provides a basis for a scale-out architecture. New leaves can be added to the network without affecting the provisioned east-west capacity for the existing infrastructure.
The role of each tier in the network is well defined (as discussed previously), providing modularity in the networking functions and reducing architectural and deployment complexities.
The leaf-spine topology provides granular control over oversubscription ratios for traffic flowing within a rack, traffic flowing between racks, and traffic flowing outside the leaf-spine topology.
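The redundancy and ECMP properties listed above can be quantified with a short sketch. The Python below counts the equal-cost leaf-to-leaf paths (one per spine) and shows how a leaf's uplink capacity degrades when a spine or uplink fails; the four-spine, 40 GbE figures are illustrative assumptions, not requirements from this paper.

```python
def leaf_to_leaf_ecmp_paths(num_spines: int) -> int:
    # In a leaf-spine (folded Clos) topology, every leaf connects to every
    # spine, so each remote leaf is reachable over one path per spine.
    return num_spines


def leaf_uplink_capacity_gbps(num_spines: int, uplink_speed_gbps: float,
                              failed_uplinks: int = 0) -> float:
    # Aggregate leaf-to-fabric capacity; losing an uplink (or a spine)
    # removes one ECMP member while the remaining members keep forwarding.
    return (num_spines - failed_uplinks) * uplink_speed_gbps


spines, speed = 4, 40  # illustrative: 4 spines, 40 GbE uplinks
print(leaf_to_leaf_ecmp_paths(spines))               # 4 equal-cost paths
print(leaf_uplink_capacity_gbps(spines, speed))      # 160 Gbps healthy
print(leaf_uplink_capacity_gbps(spines, speed, 1))   # 120 Gbps after one failure
```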
Design Considerations for a Leaf-Spine Topology
There are several design considerations for deploying a leaf-spine topology. This section summarizes the key considerations.

Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this effect, the oversubscription ratios at each layer should be well understood and planned for. For a leaf switch, the ports connecting to the endpoints are defined as downlink ports, and the ports connecting to the spines are defined as uplink ports. The oversubscription ratio at the leaves is the ratio of the aggregate bandwidth of the downlink ports to the aggregate bandwidth of the uplink ports.

For a spine switch in a leaf-spine topology, the east-west oversubscription ratio is defined per pair of leaf switches connecting to the spine switch. For a given pair of leaf switches connecting to the spine switch, the oversubscription ratio is the ratio of the aggregate bandwidth of the links connecting to each leaf switch. In a majority of deployments, this ratio is 1:1, making the east-west oversubscription ratio at the spine nonblocking. Exceptions to nonblocking east-west oversubscription should be well understood and depend on the traffic patterns of the endpoints that are connected to the respective leaves.

The oversubscription ratios described here govern the ratio of the traffic bandwidth between endpoints connected to the same leaf switch to the traffic bandwidth between endpoints connected to different leaf switches. As an example, if the oversubscription ratio is 3:1 at the leaf and 1:1 at the spine, then the bandwidth of traffic between endpoints connected to the same leaf switch should be three times the bandwidth between endpoints connected to different leaves. From a network endpoint perspective, the network oversubscription should be planned so that the endpoints connected to the network have the required bandwidth for communications. Specifically, endpoints that are expected to use higher bandwidth should be localized to the same leaf switch (or the same leaf switch pair when endpoints are multihomed).
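The following Python sketch works through the 3:1 leaf example above. The 48 x 10 GbE downlink / 4 x 40 GbE uplink leaf profile is an illustrative assumption used only to produce the ratio; substitute the port counts of the actual platforms being considered.

```python
from fractions import Fraction

def oversubscription(downlink_count, downlink_gbps, uplink_count, uplink_gbps):
    """Return an oversubscription ratio as a Fraction (e.g., 3/1 for 3:1)."""
    return Fraction(downlink_count * downlink_gbps, uplink_count * uplink_gbps)

# Illustrative leaf: 48 x 10 GbE endpoint-facing ports, 4 x 40 GbE uplinks.
leaf = oversubscription(48, 10, 4, 40)
print(f"Leaf oversubscription: {leaf.numerator}:{leaf.denominator}")    # 3:1

# Spine east-west ratio is 1:1 when both leaves attach with equal bandwidth.
spine = oversubscription(4, 40, 4, 40)
print(f"Spine east-west ratio: {spine.numerator}:{spine.denominator}")  # 1:1
```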

The ratio of the aggregate bandwidth of all the spine downlinks connected to the leaves to the aggregate bandwidth of all the downlinks connected to the border leaves (described in the section on edge services and border switches) defines the north-south oversubscription at the spine. The north-south oversubscription dictates the traffic destined to the services that are connected to the border leaf switches and the traffic that exits the data center site.

Leaf and Spine Scale
Because the endpoints in the network connect only to the leaf switches, the number of leaf switches in the network depends on the number of interfaces required to connect all the endpoints. The port count requirement should also account for multihomed endpoints. Because each leaf switch connects to all the spines, the port density of the spine switch determines the maximum number of leaf switches in the topology. A higher oversubscription ratio at the leaves reduces the leaf scale requirements as well. The number of spine switches in the network is governed by a combination of the throughput required between the leaf switches, the number of redundant/ECMP paths between the leaves, and the port density of the spine switches. Higher throughput in the uplinks from the leaf switches to the spine switches can be achieved by increasing the number of spine switches or by bundling the uplinks together in port channel interfaces between the leaves and the spines.

Port Speeds for Uplinks and Downlinks
Another consideration for leaf-spine topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX switches support 10 GbE, 40 GbE, and 100 GbE interfaces, which can be used for uplinks and downlinks. The choice of platform for the leaf and spine depends on the interface speed and density requirements.

Scale and Future Growth
Another design consideration for leaf-spine topologies is the need to plan for more capacity in the existing infrastructure and to plan for more endpoints in the future. Adding more capacity between existing leaf and spine switches can be done by adding spine switches or adding new interfaces between existing leaf and spine switches. In either case, the port density requirements for the leaf and spine switches should be accounted for during the network design process. If new leaf switches need to be added to accommodate new endpoints in the network, then ports at the spine switches are required to connect the new leaf switches. In addition, you must decide whether to connect the leaf-spine topology to external networks through border leaf switches and also whether to add an additional networking tier for higher scale. Such designs require additional ports at the spine. These designs are described in another section of this paper.

Ports on Demand Licensing
Remember that Ports on Demand licensing allows you to expand your capacity at your own pace, in that you can invest in a higher port density platform, yet license only the ports on the Brocade VDX switch that you are using for current needs. This allows for an extensible and future-proof network architecture without additional cost.
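As a rough capacity-planning aid for the scale discussion above, the sketch below derives the maximum number of leaves and endpoint-facing ports from the spine port density and a per-leaf port profile. The platform port counts (a 32-port spine and a 48-downlink, 4-uplink leaf) are illustrative assumptions, not figures taken from the scale tables in this paper.

```python
def leaf_spine_capacity(spine_ports: int, num_spines: int,
                        leaf_downlinks: int, uplinks_per_leaf: int):
    """Estimate the size of a two-tier leaf-spine fabric.

    spine_ports      -- usable ports per spine switch
    num_spines       -- number of spines deployed
    leaf_downlinks   -- endpoint-facing ports per leaf
    uplinks_per_leaf -- leaf uplinks; one per spine in this simple model
    """
    assert uplinks_per_leaf == num_spines, "model assumes one uplink per spine"
    max_leaves = spine_ports                # each leaf consumes one port on every spine
    endpoint_ports = max_leaves * leaf_downlinks
    return max_leaves, endpoint_ports


# Illustrative: 32-port spines, 4 spines, leaves with 48 downlinks and 4 uplinks.
leaves, ports = leaf_spine_capacity(spine_ports=32, num_spines=4,
                                    leaf_downlinks=48, uplinks_per_leaf=4)
print(leaves, ports)  # 32 leaves, 1536 endpoint-facing ports
```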
Deployment Model
The links between the leaf and spine can be either Layer 2 or Layer 3 links. If the links between the leaf and spine are Layer 2 links, the deployment is known as a Layer 2 (L2) leaf-spine deployment or a Layer 2 Clos deployment. You can deploy Brocade VDX switches in a Layer 2 deployment by using Brocade VCS Fabric technology. With Brocade VCS Fabric technology, the switches in the leaf-spine topology cluster together and form a fabric that provides a single point of management, a distributed control plane, embedded automation, and multipathing capabilities from Layers 1 to 3. The benefits of deploying a VCS fabric are described later in this paper.

If the links between the leaf and spine are Layer 3 links, the deployment is known as a Layer 3 (L3) leaf-spine deployment or a Layer 3 Clos deployment. You can deploy Brocade VDX switches in a Layer 3 deployment by using Brocade IP fabrics. Brocade IP fabrics provide a highly scalable, programmable, standards-based, and interoperable networking infrastructure. The benefits of Brocade IP fabrics are described later in this paper.

Data Center Points of Delivery
Figure 7 shows a building block for a data center site. This building block is called a data center point of delivery (PoD). The data center PoD consists of the networking infrastructure in a leaf-spine topology, along with the endpoints grouped together in management/infrastructure and compute racks. The idea of a PoD is to create a simple, repeatable, and scalable unit for building a data center site at scale.

Figure 7: A Data Center PoD.

Optimized 5-Stage Folded Clos Topology (Three Tiers)
Multiple leaf-spine topologies can be aggregated together for higher scale in an optimized 5-stage folded Clos topology. This topology adds a new tier to the network, known as the super-spine. The role of the super-spine is to provide connectivity between the spine switches across multiple data center PoDs. Figure 8 shows four super-spine switches connecting the spine switches across multiple data center PoDs.

The connections between the spines and the super-spines follow the Clos principles:

Each spine connects to all the super-spines in the network.
Neither the spines nor the super-spines are interconnected with each other.

Similarly, all the benefits of a leaf-spine topology, namely multiple redundant paths, ECMP, scale-out architecture, and control over traffic patterns, are realized in the optimized 5-stage folded Clos topology as well. With an optimized 5-stage Clos topology, a PoD is a simple and replicable unit. Each PoD can be managed independently, including firmware versions and network configurations. This topology also allows the data center site capacity to scale up by adding new PoDs or scale down by removing existing PoDs without affecting the existing infrastructure, providing elasticity in scale and isolation of failure domains. This topology also provides a basis for interoperation of different deployment models of Brocade VCS fabrics and IP fabrics. This is described later in this paper.

Design Considerations for an Optimized 5-Stage Clos Topology
The design considerations of oversubscription ratios, port speeds and density, spine and super-spine scale, planning for future growth, and Brocade Ports on Demand licensing, which were described for the leaf-spine topology, apply to the optimized 5-stage folded Clos topology as well. Some key considerations are highlighted below.

Figure 8: An Optimized 5-Stage Folded Clos with Data Center PoDs.

Oversubscription Ratios
Because the spine switches now have uplinks connecting to the super-spine switches, the north-south oversubscription ratios for the spine switches dictate the ratio of the aggregate bandwidth of traffic switched east-west within a data center PoD to the aggregate bandwidth of traffic exiting the data center PoD. This is a key consideration from the perspective of network infrastructure and services placement, application tiers, and (in the case of service providers) tenant placement. In cases of north-south oversubscription at the spines, endpoints should be placed to optimize traffic within a data center PoD.

At the super-spine switch, the east-west oversubscription defines the ratio of the bandwidth of the downlink connections for a pair of data center PoDs. In most cases, this ratio is 1:1. The ratio of the aggregate bandwidth of all the super-spine downlinks connected to the spines to the aggregate bandwidth of all the downlinks connected to the border leaves (described in the section of this paper on edge services and border switches) defines the north-south oversubscription at the super-spine. The north-south oversubscription dictates the traffic destined to the services connected to the border leaf switches and exiting the data center site.

Deployment Model
Because of the existence of the Layer 3 boundary either at the leaf or at the spine (depending on the Layer 2 or Layer 3 deployment model in the leaf-spine topology of the data center PoD), the links between the spines and super-spines are Layer 3 links. The routing and overlay protocols are described later in this paper. Layer 2 connections between the spines and super-spines are an option for smaller scale deployments, due to the inherent scale limitations of Layer 2 networks. These Layer 2 connections would be IEEE 802.1q based, optionally over Link Aggregation Control Protocol (LACP) aggregated links. However, this design is not discussed in this paper.

Edge Services and Border Switches
For two-tier and three-tier data center topologies, the role of the border switches in the network is to provide external connectivity to the data center site. In addition, as all traffic enters and exits the data center through the border leaf switches, they present the ideal location in the network to connect network services like firewalls, load balancers, and edge VPN routers.

The topology for interconnecting the border switches depends on the number of network services that need to be attached, as well as the oversubscription ratio at the border switches. Figure 9 shows a simple topology for border switches, where the service endpoints connect directly to the border switches. Border switches in this simple topology are referred to as border leaf switches because the service endpoints connect to them directly. More scalable border switch topologies are possible if a greater number of service endpoints need to be connected. These topologies include a leaf-spine topology for the border switches, with border spines and border leaves. This white paper demonstrates only the border leaf variant of the border switch topologies, but this is easily expanded to a leaf-spine topology for the border switches. The border switches together with the edge racks form the edge services PoD.

Figure 9: Border switches connecting edge services (firewall, load balancer, VPN router).

Design Considerations for Border Switches
The following section describes the design considerations for border switches.

Oversubscription Ratios
The border leaf switches have uplink connections to spines in the leaf-spine topology and to super-spines in the 3-tier topology. They also have uplink connections to the data center core/Wide Area Network (WAN) edge routers, as described in the next section. These data center site topologies are discussed in detail later in this paper. The ratio of the aggregate bandwidth of the uplinks connecting to the spines/super-spines to the aggregate bandwidth of the uplinks connecting to the core/edge routers determines the oversubscription ratio for traffic exiting the data center site.

The north-south oversubscription ratios for the services connected to the border leaves are another consideration. Because many of the services connected to the border leaves may have public interfaces facing external entities like core/edge routers and internal interfaces facing the internal network, the north-south oversubscription for each of these connections is an important design consideration.

Data Center Core/WAN Edge Handoff
The uplinks to the data center core/WAN edge routers from the border leaves carry the traffic entering and exiting the data center site. The data center core/WAN edge handoff can be Layer 2 and/or Layer 3 in combination with overlay protocols. The handoff between the border leaves and the data center core/WAN edge may provide domain isolation for the control and data plane protocols running in the internal network and built using one-tier, two-tier, or three-tier topologies. This helps in providing independent

administrative, fault isolation, and control plane domains for isolation, scale, and security between the different domains of a data center site. The handoff between the data center core/WAN edge and the border leaves is explored briefly elsewhere in this paper.

Data Center Core and WAN Edge Routers
The border leaf switches connect to the data center core/WAN edge devices in the network to provide external connectivity to the data center site. Figure 10 shows an example of the connectivity between border leaves, a collapsed data center core/WAN edge tier, and external networks for Internet and DCI options. The data center core routers might provide the interconnection between data center PoDs built as single-tier, leaf-spine, or optimized 5-stage Clos deployments within a data center site. For enterprises, the core router might also provide connections to the enterprise campus networks through campus core routers. The data center core might also connect to WAN edge devices for WAN and interconnect connections. Note that border leaves connecting to the data center core provide the Layer 2 or Layer 3 handoff, along with any overlay control and data planes.

The WAN edge devices provide the interfaces to the Internet and DCI solutions. Specifically for DCI, these devices function as the Provider Edge (PE) routers, enabling connections to other data center sites through WAN technologies like Multiprotocol Label Switching (MPLS) VPN, Virtual Private LAN Services (VPLS), Provider Backbone Bridges (PBB), Dense Wavelength Division Multiplexing (DWDM), and so forth. These DCI solutions are described in a later section.

Figure 10: Collapsed Data Center Core and WAN Edge Routers Connecting Internet and DCI Fabric to the Border Leaves in the Data Center Site.

Building Data Center Sites with Brocade VCS Fabric Technology
Brocade VCS fabrics are Ethernet fabrics built for modern data center infrastructure needs. With Brocade VCS Fabric technology, up to 48 Brocade VDX switches can participate in a VCS fabric. The data plane of the VCS fabric is based on the Transparent Interconnection of Lots of Links (TRILL) standard, supported by Layer 2 routing protocols that propagate topology information within the fabric. This ensures that there are no loops in the fabric, and there is no need to run Spanning Tree Protocol (STP). Also, none of the links are blocked.

Brocade VCS Fabric technology provides a compelling solution for deploying a Layer 2 Clos topology. Brocade VCS Fabric technology provides these benefits:

Single point of management: With all the switches in a VCS fabric participating in a logical chassis, the entire topology can be managed as a single switch chassis. This drastically reduces the management complexity of the solution.

Distributed control plane: Control plane and data plane state information is shared across devices in the VCS fabric, which enables fabric-wide MAC address learning, multiswitch port channels (vLAG), Distributed Spanning Tree (DiST), and gateway redundancy protocols like Virtual Router Redundancy Protocol Extended (VRRP-E) and Fabric Virtual Gateway (FVG), among others. These enable the VCS fabric to function like a single switch when interfacing with other entities in the infrastructure.

TRILL-based Ethernet fabric: Brocade VCS Fabric technology, which is based on the TRILL standard, ensures that no links are blocked in the Layer 2 network. Because of the existence of a Layer 2 routing protocol, STP is not required.

Multipathing from Layers 1 to 3: Brocade VCS Fabric technology provides efficiency and resiliency through the use of multipathing from Layers 1 to 3:
--At Layer 1, Brocade trunking (BTRUNK) enables frame-based load balancing between a pair of switches that are part of the VCS fabric. This ensures that thick, or elephant, flows do not congest an Inter-Switch Link (ISL).
--Because of the existence of a Layer 2 routing protocol, Layer 2 ECMP is performed between multiple next hops. This is critical in a Clos topology, where all the spines are ECMP next hops for a leaf that sends traffic to an endpoint connected to another leaf. The same applies to ECMP traffic from the spines that have the super-spines as the next hops.
--Layer 3 ECMP using Layer 3 routing protocols ensures that traffic is load balanced between Layer 3 next hops.

Embedded automation: Brocade VCS Fabric technology provides embedded turnkey automation built into Brocade Network OS. These automation features enable zero-touch provisioning of new switches into an existing fabric. Brocade VDX switches also provide multiple management methods, including the Command Line Interface (CLI), Simple Network Management Protocol (SNMP), REST, and Network Configuration Protocol (NETCONF) interfaces.

Multitenancy at Layers 2 and 3: With Brocade VCS Fabric technology, multitenancy features at Layers 2 and 3 enable traffic isolation and segmentation across the fabric. Brocade VCS Fabric technology allows an extended range of up to 8000 Layer 2 domains within the fabric, while isolating overlapping IEEE 802.1q-based tenant networks into separate Layer 2 domains. Layer 3 multitenancy using Virtual Routing and Forwarding (VRF), multi-VRF routing protocols, as well as BGP-EVPN, enables large-scale Layer 3 multitenancy.

Ecosystem integration and virtualization features: Brocade VCS Fabric technology integrates with leading industry solutions and products like OpenStack, VMware products like vSphere, NSX, and vRealize, common infrastructure programming tools like Python, and Brocade tools like Brocade Network Advisor. Brocade VCS Fabric technology is virtualization-aware and helps dramatically reduce administrative tasks and enable seamless VM migration with features like Automatic Migration of Port Profiles (AMPP), which automatically adjusts port profile information as a VM moves from one server to another.
Advanced storage features: Brocade VDX switches provide rich storage protocols and features like Fibre Channel over Ethernet (FCoE), Data Center Bridging (DCB), Monitoring and Alerting Policy Suite (MAPS), and AutoNAS (Network Attached Storage), among others, to enable advanced storage networking.

The benefits and features listed above simplify Layer 2 Clos deployment using Brocade VDX switches and Brocade VCS Fabric technology. The next section describes data center site designs that use a Layer 2 Clos built with Brocade VCS Fabric technology.

Data Center Site with Leaf-Spine Topology
Figure 11 shows a data center site built using a leaf-spine topology deployed with Brocade VCS Fabric technology. The data center PoD shown here is built using a VCS fabric, and the border leaves in the edge services PoD are built using a separate VCS fabric. The border leaves are connected to the spine switches in the data center PoD and also to the data center core/WAN edge routers. These links can be either Layer 2 or Layer 3 links, depending on the requirements of the deployment and the handoff required to the data center core/WAN edge routers. There can be more than one edge services PoD in the network, depending on the service needs and the bandwidth requirement for connecting to the data center core/WAN edge routers.

As an alternative to the topology shown in Figure 11, the border leaf switches in the edge services PoD and the data center PoD can be part of the same VCS fabric, to extend the fabric benefits to the entire data center site.

Figure 11: Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology.

Scale
Table 1 summarizes scale numbers with key combinations of Brocade VDX platforms at the Places in the Network (PINs) for edge ports and racks for a leaf-spine topology. The following assumptions are made:

48 switches in a VCS fabric, with 4 spines in a leaf-spine topology
2 border leaves used in the topology
40 GbE links between the leaves and the spines: 4 x 40 GbE uplink ports on each leaf that connect to each of the 4 spines
40 GbE links between the border leaves and the spines: 4 x 40 GbE uplink ports on each border leaf that connect to each of the 4 spines
40 GbE interface breakout to 4 x 10 GbE interfaces where available on the Brocade VDX 6740 Switch and Brocade VDX 6940 Switch platforms for endpoint connections
Brocade VDX 8770 Switch platforms that use line cards with 40 GbE interfaces

Table 1: Scale Numbers for a Data Center Site with a Leaf-Spine Topology Implemented with Brocade VCS Fabric Technology.

Scaling the Data Center Site with an Optimized 5-Stage Folded Clos
If multiple VCS fabrics are needed at a data center site, then the optimized 5-stage Clos topology is used to increase scale by interconnecting the data center PoDs built using a leaf-spine topology with Brocade VCS Fabric technology. This deployment architecture is referred to as a multifabric topology using VCS fabrics. An example topology is shown in Figure 12.

In a multifabric topology using VCS fabrics, individual data center PoDs resemble a leaf-spine topology deployed using Brocade VCS Fabric technology. However, the new super-spine tier is used to interconnect the spine switches in the data center PoDs. In addition, the border leaf switches are also connected to the super-spine switches. Note that the super-spines do not participate in a VCS fabric, and the links between the super-spines, spines, and border leaves are Layer 3 links.

Figure 12: Data Center Site Built with an Optimized 5-Stage Folded Clos Topology and Brocade VCS Fabric Technology.

Figure 12 shows only one edge services PoD, but there can be multiple such PoDs, depending on the edge service endpoint requirements, the oversubscription for traffic that is exchanged with the data center core/WAN edge, and the related handoff mechanisms.

Scale
Table 2 summarizes scale numbers with key combinations of Brocade VDX platforms at the PINs for edge ports and racks for an optimized 5-stage Clos topology. The following assumptions are made:

48 switches in each data center PoD in a leaf-spine topology
4 super-spines and 2 border leaves used in the topology
40 GbE links between the leaves and the spines: 4 x 40 GbE uplink ports on each leaf that connect to each of the 4 spines
40 GbE links between the spines and the super-spines: 4 x 40 GbE uplink ports on each spine that connect to each of the 4 super-spines
40 GbE links between the border leaves and the super-spines: 4 x 40 GbE uplink ports on each border leaf that connect to each of the 4 super-spines
40 GbE interface breakout to 4 x 10 GbE interfaces where available on the Brocade 6740 and 6940 platforms for endpoint connections
Brocade 8770 platforms that use line cards with 40 GbE interfaces

Table 2: Scale Numbers for a Data Center Site Built as a Multifabric Topology Using Brocade VCS Fabric Technology.
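To give a sense of the arithmetic behind scale tables of this kind, the Python below estimates edge-port capacity for the multifabric topology from the stated assumptions, using an illustrative leaf profile of 48 x 10 GbE endpoint ports plus 4 x 40 GbE uplinks (similar to a Brocade VDX 6740) and a 36 x 40 GbE super-spine. These platform figures are assumptions for illustration, not the values from Table 2.

```python
def multifabric_scale(switches_per_pod=48, spines_per_pod=4,
                      superspine_ports=36, border_leaves=2,
                      leaf_edge_ports=48):
    """Illustrative scale estimate for an optimized 5-stage folded Clos."""
    leaves_per_pod = switches_per_pod - spines_per_pod           # 44 leaves per PoD
    # Every super-spine terminates one link from each spine (4 per PoD with
    # 4 super-spines), and each border leaf also consumes one port per super-spine.
    ports_for_pods = superspine_ports - border_leaves
    max_pods = ports_for_pods // spines_per_pod                   # 8 PoDs
    edge_ports_per_pod = leaves_per_pod * leaf_edge_ports         # 2112 ports
    return max_pods, edge_ports_per_pod, max_pods * edge_ports_per_pod

print(multifabric_scale())  # (8, 2112, 16896) with the assumed port counts
```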

Building Data Center Sites with Brocade IP Fabric
The Brocade IP fabric provides a Layer 3 Clos deployment architecture for data center sites. With a Brocade IP fabric, all the links in the Clos topology are Layer 3 links. The Brocade IP fabric includes the networking architecture, the protocols used to build the network, the turnkey automation features used to provision, manage, and monitor the networking infrastructure, and the hardware differentiation with Brocade VDX switches. The following sections describe these aspects of building data center sites with Brocade IP fabrics. Because the infrastructure is built on IP, advantages like loop-free communication using industry-standard routing protocols, ECMP, very high solution scale, and standards-based interoperability are leveraged.

These are some of the key benefits of deploying a data center site with Brocade IP fabrics:

Highly scalable infrastructure: Because the Clos topology is built using IP protocols, the scale of the infrastructure is very high. These port and rack scales are documented with descriptions of the Brocade IP fabric deployment topologies.

Standards-based and interoperable protocols: The Brocade IP fabric is built using industry-standard protocols like the Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF). These protocols are well understood and provide a solid foundation for a highly scalable solution. In addition, industry-standard overlay control and data plane protocols like BGP-EVPN and Virtual Extensible Local Area Network (VXLAN) are used to extend Layer 2 domains and tenancy domains by enabling Layer 2 communications and VM mobility.

Active-active vLAG pairs: By supporting vLAG pairs on leaf switches, dual-homing of the networking endpoints is supported. This provides higher redundancy. Also, because the links are active-active, vLAG pairs provide higher throughput to the endpoints. vLAG pairs are supported for all 10 GbE, 40 GbE, and 100 GbE interface speeds, and up to 32 links can participate in a vLAG.

Layer 2 extensions: In order to enable Layer 2 domain extension across the Layer 3 infrastructure, the VXLAN protocol is leveraged. The use of VXLAN provides a very large number of Layer 2 domains to support large-scale multitenancy over the infrastructure. (A short sketch of the VLAN-to-VXLAN numbering math follows this list.) In addition, Brocade BGP-EVPN network virtualization provides the control plane for VXLAN, enhancing the VXLAN standard by reducing the Broadcast, Unknown unicast, and Multicast (BUM) traffic in the network through mechanisms like MAC address reachability information and ARP suppression.

Multitenancy at Layers 2 and 3: The Brocade IP fabric provides multitenancy at Layers 2 and 3, enabling traffic isolation and segmentation across the fabric. Layer 2 multitenancy allows an extended range of up to 8000 Layer 2 domains to exist at each ToR switch, while isolating overlapping 802.1q tenant networks into separate Layer 2 domains. Layer 3 multitenancy using VRFs, multi-VRF routing protocols, and BGP-EVPN allows large-scale Layer 3 multitenancy. Specifically, Brocade BGP-EVPN network virtualization leverages BGP-EVPN to provide a control plane for MAC address learning and VRF routing for tenant prefixes and host routes, which reduces BUM traffic and optimizes the traffic patterns in the network.

Support for unnumbered interfaces: Using Brocade Network OS support for IP unnumbered interfaces, only one IP address per switch is required to configure the routing protocol peering.
This significantly reduces the planning and use of IP addresses and simplifies operations.

Turnkey automation: Brocade automated provisioning dramatically reduces the deployment time of network devices and network virtualization. Prepackaged, server-based automation scripts provision Brocade IP fabric devices for service with minimal effort.

Programmable automation: Brocade server-based automation provides support for common industry automation tools such as Python, Ansible, and Puppet, and for YANG model-based REST and NETCONF APIs. The prepackaged PyNOS scripting library and editable automation scripts execute predefined provisioning tasks, while allowing customization to address unique requirements and meet technical or business objectives when the enterprise is ready.

Ecosystem integration: The Brocade IP fabric integrates with leading industry solutions and products like VMware vSphere, NSX, and vRealize. Cloud orchestration and control are provided through OpenStack and OpenDaylight-based Brocade SDN Controller support.
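As referenced in the Layer 2 extensions bullet above, the following sketch illustrates why VXLAN removes the 12-bit VLAN ceiling: the 24-bit VXLAN Network Identifier (VNI) space is large enough that each tenant can be given its own block of fabric-wide Layer 2 segment IDs. The offset-based VLAN-to-VNI mapping shown here is a common convention used for illustration, not a scheme prescribed by this paper.

```python
VLAN_ID_BITS = 12   # 802.1q VLAN ID field -> 4096 values per switch
VNI_BITS = 24       # VXLAN Network Identifier -> ~16.7 million segments

print(f"802.1q VLANs per ToR: {2 ** VLAN_ID_BITS}")    # 4096
print(f"VXLAN segments fabric-wide: {2 ** VNI_BITS}")  # 16777216

def vlan_to_vni(tenant_id: int, vlan_id: int, vlans_per_tenant: int = 4096) -> int:
    """Map a locally significant VLAN to a fabric-wide VNI.

    Illustrative convention: give each tenant its own contiguous block of
    VNIs so overlapping 802.1q IDs from different tenants stay isolated.
    """
    if not 1 <= vlan_id <= 4094:
        raise ValueError("valid 802.1q VLAN IDs are 1-4094")
    return tenant_id * vlans_per_tenant + vlan_id

print(vlan_to_vni(tenant_id=7, vlan_id=100))  # VNI 28772
```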

Data Center Site with Leaf-Spine Topology
A data center PoD built with IP fabrics supports dual-homing of network endpoints using multiswitch port channel interfaces formed between a pair of switches participating in a vLAG. This pair of leaf switches is called a vLAG pair. See Figure 13.

Figure 13: An IP Fabric Data Center PoD Built with a Leaf-Spine Topology and a vLAG Pair for Dual-Homed Network Endpoints.

The switches in a vLAG pair have a link between them for control plane purposes, to create and manage the multiswitch port channel interfaces. These links also carry switched traffic in case of downlink failures. In most cases these links are not configured to carry any routed traffic upstream; however, the vLAG pairs can peer using a routing protocol if upstream traffic needs to be carried over the link in cases of uplink failures on a vLAG switch. Oversubscription of the vLAG link is an important consideration for failure scenarios.
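The routing protocol peering mentioned above, and the BGP/OSPF underlay described earlier, require a per-switch numbering plan. The sketch below generates one common style of plan, with a shared ASN for the spines, a distinct ASN per leaf vLAG pair, and a /32 loopback per switch. This scheme is a widely used eBGP Clos convention offered as an assumption for illustration, not the specific design mandated by Brocade IP fabrics.

```python
import ipaddress

def underlay_plan(num_leaf_pairs: int, num_spines: int,
                  spine_asn: int = 64512, first_leaf_asn: int = 64601,
                  loopback_block: str = "10.0.0.0/24"):
    """Build an illustrative ASN/loopback plan for an eBGP leaf-spine underlay."""
    loopbacks = ipaddress.ip_network(loopback_block).hosts()
    plan = []
    for s in range(num_spines):
        plan.append(("spine%d" % (s + 1), spine_asn, str(next(loopbacks))))
    for p in range(num_leaf_pairs):
        asn = first_leaf_asn + p            # one private ASN per vLAG pair
        for member in ("a", "b"):
            plan.append(("leaf%d%s" % (p + 1, member), asn, str(next(loopbacks))))
    return plan

for name, asn, loopback in underlay_plan(num_leaf_pairs=3, num_spines=4):
    print(f"{name:8s} AS{asn}  loopback {loopback}/32")
```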

Figure 14 shows a data center site deployed using a leaf-spine topology and an IP fabric. Here the network endpoints are illustrated as single-homed, but dual-homing is enabled through vLAG pairs where required. The links between the leaves, spines, and border leaves are all Layer 3 links. The border leaves are connected to the spine switches in the data center PoD and also to the data center core/WAN edge routers. The uplinks from the border leaves to the data center core/WAN edge can be either Layer 2 or Layer 3, depending on the requirements of the deployment and the handoff required to the data center core/WAN edge routers. There can be more than one edge services PoD in the network, depending on service needs and the bandwidth requirement for connecting to the data center core/WAN edge routers.

Figure 14: Data Center Site Built with a Leaf-Spine Topology and an IP Fabric PoD.

Scale
Table 3 summarizes scale numbers with key combinations of Brocade VDX platforms at the PINs for edge ports and racks for a leaf-spine topology. The following assumptions are made:

4 spines in the data center PoD
2 border leaves used in the topology
40 GbE links between the leaves and the spines: 4 x 40 GbE uplink ports on each leaf that connect to each of the 4 spines
40 GbE links between the border leaves and the spines: 4 x 40 GbE uplink ports on each border leaf that connect to each of the 4 spines
40 GbE interface breakout to 4 x 10 GbE interfaces where available on the Brocade 6740 and 6940 platforms for endpoint connections
Brocade 8770 platforms that use line cards with 40 GbE interfaces for connections to the border leaves and the leaves when used as a spine switch
40 GbE interface breakouts to 4 x 10 GbE used on the line cards when the Brocade 8770 is used as a leaf switch

Table 3: Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site.

Scaling the Data Center Site with an Optimized 5-Stage Folded Clos
If a higher scale is required, then the optimized 5-stage Clos topology is used to interconnect the data center PoDs built using a Layer 3 leaf-spine topology. An example topology is shown in Figure 15. Figure 15 shows only one edge services PoD, but there can be multiple such PoDs, depending on the edge service endpoint requirements, the amount of oversubscription for traffic exchanged with the data center core/WAN edge, and the related handoff mechanisms.

Scale
Table 4 summarizes scale numbers with key combinations of Brocade VDX platforms at the PINs for edge ports and racks for an optimized 5-stage Clos topology. The following assumptions are made:

4 super-spines and 2 border leaves used in the topology
40 GbE links between the leaves and the spines: 4 x 40 GbE uplink ports on each leaf that connect to each of the 4 spines
40 GbE links between the spines and the super-spines: 4 x 40 GbE uplink ports on each spine that connect to each of the 4 super-spines
40 GbE links between the border leaves and the super-spines: 4 x 40 GbE uplink ports on each border leaf that connect to each of the 4 super-spines
40 GbE interface breakout to 4 x 10 GbE interfaces where available on the Brocade 6740 and 6940 platforms for endpoint connections
Brocade 8770 platforms that use line cards with 40 GbE interfaces


More information

STATE OF THE ART OF DATA CENTRE NETWORK TECHNOLOGIES CASE: COMPARISON BETWEEN ETHERNET FABRIC SOLUTIONS

STATE OF THE ART OF DATA CENTRE NETWORK TECHNOLOGIES CASE: COMPARISON BETWEEN ETHERNET FABRIC SOLUTIONS STATE OF THE ART OF DATA CENTRE NETWORK TECHNOLOGIES CASE: COMPARISON BETWEEN ETHERNET FABRIC SOLUTIONS Supervisor: Prof. Jukka Manner Instructor: Lic.Sc. (Tech) Markus Peuhkuri Francesco Maestrelli 17

More information

Simplify the Data Center with Junos Fusion

Simplify the Data Center with Junos Fusion Simplify the Data Center with Junos Fusion Juniper Networks Fabric Technology 1 Table of Contents Executive Summary... 3 Introduction: Network Challenges in the Data Center... 3 Introducing Juniper Networks

More information

Avaya VENA Fabric Connect

Avaya VENA Fabric Connect Avaya VENA Fabric Connect Executive Summary The Avaya VENA Fabric Connect solution is based on the IEEE 802.1aq Shortest Path Bridging (SPB) protocol in conjunction with Avaya extensions that add Layer

More information

An Introduction to Brocade VCS Fabric Technology

An Introduction to Brocade VCS Fabric Technology WHITE PAPER www.brocade.com DATA CENTER An Introduction to Brocade VCS Fabric Technology Brocade VCS Fabric technology, which provides advanced Ethernet fabric capabilities, enables you to transition gracefully

More information

TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems

TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems for Service Provider Data Center and IXP Francois Tallet, Cisco Systems 1 : Transparent Interconnection of Lots of Links overview How works designs Conclusion 2 IETF standard for Layer 2 multipathing Driven

More information

TRILL Large Layer 2 Network Solution

TRILL Large Layer 2 Network Solution TRILL Large Layer 2 Network Solution Contents 1 Network Architecture Requirements of Data Centers in the Cloud Computing Era... 3 2 TRILL Characteristics... 5 3 Huawei TRILL-based Large Layer 2 Network

More information

Connecting Physical and Virtual Networks with VMware NSX and Juniper Platforms. Technical Whitepaper. Whitepaper/ 1

Connecting Physical and Virtual Networks with VMware NSX and Juniper Platforms. Technical Whitepaper. Whitepaper/ 1 Connecting Physical and Virtual Networks with VMware NSX and Juniper Platforms Technical Whitepaper Whitepaper/ 1 Revisions Date Description Authors 08/21/14 Version 1 First publication Reviewed jointly

More information

Building Tomorrow s Data Center Network Today

Building Tomorrow s Data Center Network Today WHITE PAPER www.brocade.com IP Network Building Tomorrow s Data Center Network Today offers data center network solutions that provide open choice and high efficiency at a low total cost of ownership,

More information

VMware. NSX Network Virtualization Design Guide

VMware. NSX Network Virtualization Design Guide VMware NSX Network Virtualization Design Guide Table of Contents Intended Audience... 3 Overview... 3 Components of the VMware Network Virtualization Solution... 4 Data Plane... 4 Control Plane... 5 Management

More information

智 慧 應 用 服 務 的 資 料 中 心 與 底 層 網 路 架 構

智 慧 應 用 服 務 的 資 料 中 心 與 底 層 網 路 架 構 智 慧 應 用 服 務 的 資 料 中 心 與 底 層 網 路 架 構 3 rd Platform for IT Innovation 2015 Brocade Communications Systems, Inc. Company Proprietary Information 8/3/2015 2 M2M - new range of businesses Third Platform Transforms

More information

Network Virtualization for Large-Scale Data Centers

Network Virtualization for Large-Scale Data Centers Network Virtualization for Large-Scale Data Centers Tatsuhiro Ando Osamu Shimokuni Katsuhito Asano The growing use of cloud technology by large enterprises to support their business continuity planning

More information

Chapter 1 Reading Organizer

Chapter 1 Reading Organizer Chapter 1 Reading Organizer After completion of this chapter, you should be able to: Describe convergence of data, voice and video in the context of switched networks Describe a switched network in a small

More information

BUILDING A NEXT-GENERATION DATA CENTER

BUILDING A NEXT-GENERATION DATA CENTER BUILDING A NEXT-GENERATION DATA CENTER Data center networking has changed significantly during the last few years with the introduction of 10 Gigabit Ethernet (10GE), unified fabrics, highspeed non-blocking

More information

Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES

Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Table of Contents Introduction... 1 Network Virtualization Overview... 1 Network Virtualization Key Requirements to be validated...

More information

Simplify Virtual Machine Management and Migration with Ethernet Fabrics in the Datacenter

Simplify Virtual Machine Management and Migration with Ethernet Fabrics in the Datacenter Simplify Virtual Machine Management and Migration with Ethernet Fabrics in the Datacenter Enabling automatic migration of port profiles under Microsoft Hyper-V with Brocade Virtual Cluster Switching technology

More information

Stretched Active- Active Application Centric Infrastructure (ACI) Fabric

Stretched Active- Active Application Centric Infrastructure (ACI) Fabric Stretched Active- Active Application Centric Infrastructure (ACI) Fabric May 12, 2015 Abstract This white paper illustrates how the Cisco Application Centric Infrastructure (ACI) can be implemented as

More information

Software-Defined Networks Powered by VellOS

Software-Defined Networks Powered by VellOS WHITE PAPER Software-Defined Networks Powered by VellOS Agile, Flexible Networking for Distributed Applications Vello s SDN enables a low-latency, programmable solution resulting in a faster and more flexible

More information

Technology Overview for Ethernet Switching Fabric

Technology Overview for Ethernet Switching Fabric G00249268 Technology Overview for Ethernet Switching Fabric Published: 16 May 2013 Analyst(s): Caio Misticone, Evan Zeng The term "fabric" has been used in the networking industry for a few years, but

More information

Fabrics that Fit Matching the Network to Today s Data Center Traffic Conditions

Fabrics that Fit Matching the Network to Today s Data Center Traffic Conditions Sponsored by Fabrics that Fit Matching the Network to Today s Data Center Traffic Conditions In This Paper Traditional network infrastructures are often costly and hard to administer Today s workloads

More information

Impact of Virtualization on Cloud Networking Arista Networks Whitepaper

Impact of Virtualization on Cloud Networking Arista Networks Whitepaper Overview: Virtualization takes IT by storm The adoption of virtualization in datacenters creates the need for a new class of networks designed to support elasticity of resource allocation, increasingly

More information

DEDICATED NETWORKS FOR IP STORAGE

DEDICATED NETWORKS FOR IP STORAGE DEDICATED NETWORKS FOR IP STORAGE ABSTRACT This white paper examines EMC and VMware best practices for deploying dedicated IP storage networks in medium to large-scale data centers. In addition, it explores

More information

Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013

Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013 Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall Semester 2013 1 Walmart s Data Center 2 Amadeus Data Center 3 Google s Data Center 4 Data Center

More information

Cloud Fabric. Huawei Cloud Fabric-Cloud Connect Data Center Solution HUAWEI TECHNOLOGIES CO.,LTD.

Cloud Fabric. Huawei Cloud Fabric-Cloud Connect Data Center Solution HUAWEI TECHNOLOGIES CO.,LTD. Cloud Fabric Huawei Cloud Fabric-Cloud Connect Data Center Solution HUAWEI TECHNOLOGIES CO.,LTD. Huawei Cloud Fabric - Cloud Connect Data Center Solution Enable Data Center Networks to Be More Agile for

More information

ALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center

ALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center ALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center A NEW NETWORK PARADIGM What do the following trends have in common? Virtualization Real-time applications

More information

VXLAN: Scaling Data Center Capacity. White Paper

VXLAN: Scaling Data Center Capacity. White Paper VXLAN: Scaling Data Center Capacity White Paper Virtual Extensible LAN (VXLAN) Overview This document provides an overview of how VXLAN works. It also provides criteria to help determine when and where

More information

CENTER I S Y O U R D ATA

CENTER I S Y O U R D ATA I S Y O U R D ATA CENTER R E A DY F O R S D N? C R I T I C A L D ATA C E N T E R C O N S I D E R AT I O N S FOR SOFT WARE-DEFINED NET WORKING Data center operators are being challenged to be more agile

More information

WHITE PAPER. Network Virtualization: A Data Plane Perspective

WHITE PAPER. Network Virtualization: A Data Plane Perspective WHITE PAPER Network Virtualization: A Data Plane Perspective David Melman Uri Safrai Switching Architecture Marvell May 2015 Abstract Virtualization is the leading technology to provide agile and scalable

More information

Architecting Data Center Networks in the era of Big Data and Cloud

Architecting Data Center Networks in the era of Big Data and Cloud Architecting Data Center Networks in the era of Big Data and Cloud Spring Interop May 2012 Two approaches to DC Networking THE SAME OLD Centralized, Scale-up Layer 2 networks Monstrous chassis es TRILL

More information

NSX TM for vsphere with Arista CloudVision

NSX TM for vsphere with Arista CloudVision ARISTA DESIGN GUIDE NSX TM for vsphere with Arista CloudVision Version 1.0 August 2015 ARISTA DESIGN GUIDE NSX FOR VSPHERE WITH ARISTA CLOUDVISION Table of Contents 1 Executive Summary... 4 2 Extending

More information

Software Defined Cloud Networking

Software Defined Cloud Networking Introduction Ethernet networks have evolved significantly since their inception back in the 1980s, with many generational changes to where we are today. Networks are orders of magnitude faster with 10Gbps

More information

Solving Scale and Mobility in the Data Center A New Simplified Approach

Solving Scale and Mobility in the Data Center A New Simplified Approach Solving Scale and Mobility in the Data Center A New Simplified Approach Table of Contents Best Practice Data Center Design... 2 Traffic Flows, multi-tenancy and provisioning... 3 Edge device auto-attachment.4

More information

VMware Virtual SAN 6.2 Network Design Guide

VMware Virtual SAN 6.2 Network Design Guide VMware Virtual SAN 6.2 Network Design Guide TECHNICAL WHITE PAPER APRIL 2016 Contents Intended Audience... 2 Overview... 2 Virtual SAN Network... 2 Physical network infrastructure... 3 Data center network...

More information

WHITE PAPER Ethernet Fabric for the Cloud: Setting the Stage for the Next-Generation Datacenter

WHITE PAPER Ethernet Fabric for the Cloud: Setting the Stage for the Next-Generation Datacenter WHITE PAPER Ethernet Fabric for the Cloud: Setting the Stage for the Next-Generation Datacenter Sponsored by: Brocade Communications Systems Inc. Lucinda Borovick March 2011 Global Headquarters: 5 Speen

More information

Brocade Solution for EMC VSPEX Server Virtualization

Brocade Solution for EMC VSPEX Server Virtualization Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,

More information

The Future of Cloud Networking. Idris T. Vasi

The Future of Cloud Networking. Idris T. Vasi The Future of Cloud Networking Idris T. Vasi Cloud Computing and Cloud Networking What is Cloud Computing? An emerging computing paradigm where data and services reside in massively scalable data centers

More information

SummitStack in the Data Center

SummitStack in the Data Center SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution that Extreme Networks offers a highly virtualized, centrally manageable

More information

SDN and Data Center Networks

SDN and Data Center Networks SDN and Data Center Networks 10/9/2013 1 The Rise of SDN The Current Internet and Ethernet Network Technology is based on Autonomous Principle to form a Robust and Fault Tolerant Global Network (Distributed)

More information

TRILL for Data Center Networks

TRILL for Data Center Networks 24.05.13 TRILL for Data Center Networks www.huawei.com enterprise.huawei.com Davis Wu Deputy Director of Switzerland Enterprise Group E-mail: wuhuajun@huawei.com Tel: 0041-798658759 Agenda 1 TRILL Overview

More information

How the Port Density of a Data Center LAN Switch Impacts Scalability and Total Cost of Ownership

How the Port Density of a Data Center LAN Switch Impacts Scalability and Total Cost of Ownership How the Port Density of a Data Center LAN Switch Impacts Scalability and Total Cost of Ownership June 4, 2012 Introduction As data centers are forced to accommodate rapidly growing volumes of information,

More information

Brocade VCS Fabrics: The Foundation for Software-Defined Networks

Brocade VCS Fabrics: The Foundation for Software-Defined Networks WHITE PAPER DATA CENTER Brocade VCS Fabrics: The Foundation for Software-Defined Networks Software-Defined Networking (SDN) offers significant new opportunities to centralize management and implement network

More information

Introducing Brocade VCS Technology

Introducing Brocade VCS Technology WHITE PAPER www.brocade.com Data Center Introducing Brocade VCS Technology Brocade VCS technology is designed to revolutionize the way data center networks are architected and how they function. Not that

More information

The Road to Cloud Computing How to Evolve Your Data Center LAN to Support Virtualization and Cloud

The Road to Cloud Computing How to Evolve Your Data Center LAN to Support Virtualization and Cloud The Road to Cloud Computing How to Evolve Your Data Center LAN to Support Virtualization and Cloud Introduction Cloud computing is one of the most important topics in IT. The reason for that importance

More information

A 10 GbE Network is the Backbone of the Virtual Data Center

A 10 GbE Network is the Backbone of the Virtual Data Center A 10 GbE Network is the Backbone of the Virtual Data Center Contents... Introduction: The Network is at the Epicenter of the Data Center. 1 Section II: The Need for 10 GbE in the Data Center 2 Section

More information

VXLAN Overlay Networks: Enabling Network Scalability for a Cloud Infrastructure

VXLAN Overlay Networks: Enabling Network Scalability for a Cloud Infrastructure W h i t e p a p e r VXLAN Overlay Networks: Enabling Network Scalability for a Cloud Infrastructure Table of Contents Executive Summary.... 3 Cloud Computing Growth.... 3 Cloud Computing Infrastructure

More information

Data Center Fabrics What Really Matters. Ivan Pepelnjak (ip@ioshints.info) NIL Data Communications

Data Center Fabrics What Really Matters. Ivan Pepelnjak (ip@ioshints.info) NIL Data Communications Data Center Fabrics What Really Matters Ivan Pepelnjak (ip@ioshints.info) NIL Data Communications Who is Ivan Pepelnjak (@ioshints) Networking engineer since 1985 Technical director, later Chief Technology

More information

Panel: Cloud/SDN/NFV 黃 仁 竑 教 授 國 立 中 正 大 學 資 工 系 2015/12/26

Panel: Cloud/SDN/NFV 黃 仁 竑 教 授 國 立 中 正 大 學 資 工 系 2015/12/26 Panel: Cloud/SDN/NFV 黃 仁 竑 教 授 國 立 中 正 大 學 資 工 系 2015/12/26 1 Outline Cloud data center (CDC) Software Defined Network (SDN) Network Function Virtualization (NFV) Conclusion 2 Cloud Computing Cloud computing

More information

Data Center Convergence. Ahmad Zamer, Brocade

Data Center Convergence. Ahmad Zamer, Brocade Ahmad Zamer, Brocade SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations

More information

Testing Software Defined Network (SDN) For Data Center and Cloud VERYX TECHNOLOGIES

Testing Software Defined Network (SDN) For Data Center and Cloud VERYX TECHNOLOGIES Testing Software Defined Network (SDN) For Data Center and Cloud VERYX TECHNOLOGIES Table of Contents Introduction... 1 SDN - An Overview... 2 SDN: Solution Layers and its Key Requirements to be validated...

More information

NETWORKING FOR DATA CENTER CONVERGENCE, VIRTUALIZATION & CLOUD. Debbie Montano, Chief Architect dmontano@juniper.net

NETWORKING FOR DATA CENTER CONVERGENCE, VIRTUALIZATION & CLOUD. Debbie Montano, Chief Architect dmontano@juniper.net NETWORKING FOR DATA CENTER CONVERGENCE, VIRTUALIZATION & CLOUD Debbie Montano, Chief Architect dmontano@juniper.net DISCLAIMER This statement of direction sets forth Juniper Networks current intention

More information

SOFTWARE DEFINED NETWORKING: INDUSTRY INVOLVEMENT

SOFTWARE DEFINED NETWORKING: INDUSTRY INVOLVEMENT BROCADE SOFTWARE DEFINED NETWORKING: INDUSTRY INVOLVEMENT Rajesh Dhople Brocade Communications Systems, Inc. rdhople@brocade.com 2012 Brocade Communications Systems, Inc. 1 Why can t you do these things

More information

VMware EVO SDDC. General. Q. Is VMware selling and supporting hardware for EVO SDDC?

VMware EVO SDDC. General. Q. Is VMware selling and supporting hardware for EVO SDDC? FREQUENTLY ASKED QUESTIONS VMware EVO SDDC General Q. What is VMware A. VMware EVO SDDC is the easiest way to build and run an SDDC private cloud on an integrated system. Based on an elastic, highly scalable,

More information

Data Center Interconnects. Tony Sue HP Storage SA David LeDrew - HPN

Data Center Interconnects. Tony Sue HP Storage SA David LeDrew - HPN Data Center Interconnects Tony Sue HP Storage SA David LeDrew - HPN Gartner Data Center Networking Magic Quadrant 2014 HP continues to lead the established networking vendors with respect to SDN with its

More information

Agility has become a key initiative for business leaders. Companies need the capability

Agility has become a key initiative for business leaders. Companies need the capability A ZK Research White Paper Influence and insight through social media Prepared by Zeus Kerravala March 2014 A Guide To Network Virtualization ZK Research Zeus Kerravala A Guide to BYOD Network And Virtualization

More information

Building Scalable Multi-Tenant Cloud Networks with OpenFlow and OpenStack

Building Scalable Multi-Tenant Cloud Networks with OpenFlow and OpenStack Building Scalable Multi-Tenant Cloud Networks with OpenFlow and OpenStack Dave Tucker Hewlett-Packard April 2013 1 About Me Dave Tucker WW Technical Marketing HP Networking dave.j.tucker@hp.com Twitter:

More information

Virtualization, SDN and NFV

Virtualization, SDN and NFV Virtualization, SDN and NFV HOW DO THEY FIT TOGETHER? Traditional networks lack the flexibility to keep pace with dynamic computing and storage needs of today s data centers. In order to implement changes,

More information

Towards an Open Data Center with an Interoperable Network (ODIN) Volume 1: Transforming the Data Center Network Last update: May 2012

Towards an Open Data Center with an Interoperable Network (ODIN) Volume 1: Transforming the Data Center Network Last update: May 2012 Towards an Open Data Center with an Interoperable Network (ODIN) Volume 1: Transforming the Data Center Network Last update: May 2012 The ODIN reference architecture describes best practices for creating

More information

Increase Simplicity and Improve Reliability with VPLS on the MX Series Routers

Increase Simplicity and Improve Reliability with VPLS on the MX Series Routers SOLUTION BRIEF Enterprise Data Center Interconnectivity Increase Simplicity and Improve Reliability with VPLS on the Routers Challenge As enterprises improve business continuity by enabling resource allocation

More information

Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center

Expert Reference Series of White Papers. Planning for the Redeployment of Technical Personnel in the Modern Data Center Expert Reference Series of White Papers Planning for the Redeployment of Technical Personnel in the Modern Data Center info@globalknowledge.net www.globalknowledge.net Planning for the Redeployment of

More information

PROPRIETARY CISCO. Cisco Cloud Essentials for EngineersV1.0. LESSON 1 Cloud Architectures. TOPIC 1 Cisco Data Center Virtualization and Consolidation

PROPRIETARY CISCO. Cisco Cloud Essentials for EngineersV1.0. LESSON 1 Cloud Architectures. TOPIC 1 Cisco Data Center Virtualization and Consolidation Cisco Cloud Essentials for EngineersV1.0 LESSON 1 Cloud Architectures TOPIC 1 Cisco Data Center Virtualization and Consolidation 2010 Cisco and/or its affiliates. All rights reserved. Cisco Confidential

More information

Simplifying the Data Center Network to Reduce Complexity and Improve Performance

Simplifying the Data Center Network to Reduce Complexity and Improve Performance SOLUTION BRIEF Juniper Networks 3-2-1 Data Center Network Simplifying the Data Center Network to Reduce Complexity and Improve Performance Challenge Escalating traffic levels, increasing numbers of applications,

More information

SummitStack in the Data Center

SummitStack in the Data Center SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution Extreme Networks offers a highly virtualized, centrally manageable

More information

Disaster Recovery Design Ehab Ashary University of Colorado at Colorado Springs

Disaster Recovery Design Ehab Ashary University of Colorado at Colorado Springs Disaster Recovery Design Ehab Ashary University of Colorado at Colorado Springs As a head of the campus network department in the Deanship of Information Technology at King Abdulaziz University for more

More information

Advanced Computer Networks. Datacenter Network Fabric

Advanced Computer Networks. Datacenter Network Fabric Advanced Computer Networks 263 3501 00 Datacenter Network Fabric Patrick Stuedi Spring Semester 2014 Oriana Riva, Department of Computer Science ETH Zürich 1 Outline Last week Today Supercomputer networking

More information

Optimizing Data Center Networks for Cloud Computing

Optimizing Data Center Networks for Cloud Computing PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,

More information

Cisco Virtual Topology System: Data Center Automation for Next-Generation Cloud Architectures

Cisco Virtual Topology System: Data Center Automation for Next-Generation Cloud Architectures White Paper Cisco Virtual Topology System: Data Center Automation for Next-Generation Cloud Architectures 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

More information

EVOLVED DATA CENTER ARCHITECTURE

EVOLVED DATA CENTER ARCHITECTURE EVOLVED DATA CENTER ARCHITECTURE A SIMPLE, OPEN, AND SMART NETWORK FOR THE DATA CENTER DAVID NOGUER BAU HEAD OF SP SOLUTIONS MARKETING JUNIPER NETWORKS @dnoguer @JuniperNetworks 1 Copyright 2014 Juniper

More information

JUNIPER. One network for all demands MICHAEL FRITZ CEE PARTNER MANAGER. 1 Copyright 2010 Juniper Networks, Inc. www.juniper.net

JUNIPER. One network for all demands MICHAEL FRITZ CEE PARTNER MANAGER. 1 Copyright 2010 Juniper Networks, Inc. www.juniper.net JUNIPER One network for all demands MICHAEL FRITZ CEE PARTNER MANAGER 1 Copyright 2010 Juniper Networks, Inc. www.juniper.net 2-3-7: JUNIPER S BUSINESS STRATEGY 2 Customer Segments 3 Businesses Service

More information

VMware NSX Network Virtualization Design Guide. Deploying VMware NSX with Cisco UCS and Nexus 7000

VMware NSX Network Virtualization Design Guide. Deploying VMware NSX with Cisco UCS and Nexus 7000 VMware NSX Network Virtualization Design Guide Deploying VMware NSX with Cisco UCS and Nexus 7000 Table of Contents Intended Audience... 3 Executive Summary... 3 Why deploy VMware NSX on Cisco UCS and

More information

Analysis of Network Segmentation Techniques in Cloud Data Centers

Analysis of Network Segmentation Techniques in Cloud Data Centers 64 Int'l Conf. Grid & Cloud Computing and Applications GCA'15 Analysis of Network Segmentation Techniques in Cloud Data Centers Ramaswamy Chandramouli Computer Security Division, Information Technology

More information

Arista 7060X and 7260X series: Q&A

Arista 7060X and 7260X series: Q&A Arista 7060X and 7260X series: Q&A Product Overview What are the 7060X & 7260X series? The Arista 7060X and 7260X series are purpose-built 40GbE and 100GbE data center switches in compact and energy efficient

More information

The Value of Open vswitch, Fabric Connect and Fabric Attach in Enterprise Data Centers

The Value of Open vswitch, Fabric Connect and Fabric Attach in Enterprise Data Centers The Value of Open vswitch, Fabric Connect and Fabric Attach in Enterprise Data Centers Table of Contents Enter Avaya Fabric Connect. 2 A typical data center architecture with Avaya SDN Fx... 3 A new way:

More information