DATA CENTER. Brocade VDX/VCS Data Center Layer 2 Fabric Design Guide for Brocade Network OS v2.1.1



CONTENTS

Introduction
Building a Multi-Node VCS Fabric
Design Considerations
Topology
Clos Fabrics
Mesh Fabrics
VCS-to-VCS Connectivity
Switch Platform Considerations
Oversubscription Ratios
Scalability
Implementation
VCS Nuts and Bolts (Within the Fabric)
Deciding Which Ports to Use for ISLs
ISL Trunking
Brocade ISL Trunk
Brocade Long Distance ISL
ECMP Load Balancing
Configurable Load Balancing
Operational Considerations
Brocade VDX Layer 2 Features (External to Fabric)
Active-Standby Connectivity
Active-Active Connectivity
vlag Enhancements
vlag Configuration Guidelines with VMware ESX Server and Brocade VDX
vlag Minimum Links
LACP SID and Selection Logic
LACP SID Assignment
LACP Remote Partner Validation
Operational Consideration
show lacp sys-id
Edge Loop Detection (ELD)
Connecting the Fabric to Uplinks
Upstream Switches with MCT
Upstream Switches Without MCT
Connecting the Servers to Fabric
Rack Mount Servers
Blade Servers
Manual Port Profiles
Dynamic Port Profile with VM-Aware Network Automation
Data Center Network and vCenter
Network OS Virtual Asset Discovery Process
VM-Aware Network Automation MAC Address Scaling
Authentication
Port Profile Management
Usage Restriction and Limits
Third-Party Software
User Experience
Building a 2-Switch ToR VCS Fabric
Design Considerations
Topology
Licensing
Implementation
Building a 2-Switch Aggregation Layer Using VCS
Design Considerations
Topology
Licensing
Implementation
Building the Fabric Switch VCS Reference Architecture
Appendix A: VCS Use Cases
VCS Fabric Technology in the Access Layer
VCS Fabric Technology in the Collapsed Access/Aggregation Layer
VCS Fabric Technology in a Virtualized Environment
VCS Fabric Technology in Converged Network Environments
Brocade VDX 6710 Deployment Scenarios
Glossary
Related Documents
About Brocade

INTRODUCTION

This document describes and provides high-level design considerations for deploying Brocade VCS Fabric technology using the Brocade VDX series switches. It explains the steps and configurations needed to deploy the following:

- A 6-switch VCS Fabric topology providing physical or virtual server connectivity with iSCSI/NAS
- A 2-switch Top of Rack (ToR) topology
- A 2-switch VCS topology in aggregation, aggregating 1 GbE switches
- A 24-switch VCS topology

The target audience for this document includes sales engineers, field sales, partners, and resellers who want to deploy VCS Fabric technology in a data center. It is assumed that the reader is already familiar with VCS Fabric technology, terms, and nomenclature. Explaining the VCS nomenclature is beyond the scope of this document; the reader is advised to peruse the publicly available documents to become familiar with Brocade VCS Fabric technology.

BUILDING A MULTI-NODE VCS FABRIC

Design Considerations

Topology

Please note that for all illustrations, the Brocade MLX is shown as the aggregation switch of choice, which is the current Brocade recommendation. However, Brocade VCS is fully standards-compliant and can be deployed with any standards-compliant third-party switch.

Multi-node fabric design is a function of the number of devices connected to the fabric, the type of server/storage connectivity (1 GbE/10 GbE), oversubscription, and target latency. There are always tradeoffs to be made in order to find the right balance among these four variables. Knowing the application communication pattern will help you tailor the network architecture to maximize network performance. When designing a fabric with VCS technology, which is topology agnostic, it is important to decide early in the process what the performance goal of the fabric is.
If the goal is to provide a low-latency fabric with the most path availability, and scalability is not a priority, then a full mesh is the most appropriate topology. However, if a balance between latency, scalability, and availability is the goal, then a Clos topology is more appropriate. There are multiple combinations possible in between these two, including partial mesh, ring, star, or other hybrid networks, which can be designed to handle the tradeoffs between availability and latency. However, Brocade recommends that either a full-mesh or a Clos topology be used to design VCS fabrics, to provide resilient, scalable, and highly available Layer 2 fabrics.

Multi-node fabrics have several use case models. Appendix A provides details of the various models for which multi-node fabrics are targeted.

Up to and including Network OS v2.0.1, direct connectivity between two separate VCS fabrics is not supported. It is currently required that you place an L2/L3 switch as a hub with multiple VCS fabrics as spokes, to prevent loops in the network.

Clos Fabrics

Figure 1 shows a two-tier Clos fabric. Generally, the top row of switches acts as core switches and provides connectivity to the edge switches. A Clos fabric is a scalable architecture with a consistent hop count (3 maximum) for port-to-port connectivity. It is very easy to scale a Clos topology by adding switches at either the core or the edge. In addition, because VCS technology introduces routing at Layer 2, traffic load is equally distributed among all equal-cost multipaths (ECMP). There are two or more paths between any two edge switches in a resilient core-edge topology.

Figure 1: 2-Tier Clos Fabric

Mesh Fabrics

Figure 2 shows a full-mesh fabric built with six switches. In a full-mesh topology, every switch is connected directly to every other switch. A full-mesh topology is a resilient architecture with a consistently low hop count (2 hops) between any two ports. A full mesh is the best choice when a minimum number of hops is needed and a future increase in fabric scale is not anticipated, since a change in mesh size can be disruptive to the entire fabric. For a mesh to be effective, traffic patterns should be evenly distributed, with low overall bandwidth consumption.

Figure 2: Full-Mesh Fabric
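One practical consequence of the full-mesh topology described above is that the number of inter-switch links grows quadratically with switch count, which is why mesh scaling changes are disruptive. A minimal sketch (illustrative only, not from the guide):

```python
# Illustrative sketch: ISL count in a full-mesh fabric.
# A full mesh of n switches needs n*(n-1)/2 inter-switch links, and each
# switch must dedicate n-1 ports (or trunk groups) to the fabric.

def full_mesh_isls(n: int) -> int:
    """Number of ISLs required to fully mesh n switches."""
    return n * (n - 1) // 2

for n in (4, 6, 8):
    print(f"{n} switches -> {full_mesh_isls(n)} ISLs, {n - 1} fabric ports per switch")
```

For the six-switch mesh of Figure 2, this works out to 15 ISLs; growing the mesh to eight switches already requires 28, which illustrates why Clos topologies scale more gracefully.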

VCS-to-VCS Connectivity

VCS-to-VCS connectivity is supported starting with the Network OS v2.1.0 release. Due to the lack of a loop detection mechanism, VCS-to-VCS connectivity is supported for only a certain set of topologies, and it is highly recommended that it be restricted to the topologies described below.

ELD, which is available in Network OS v2.1.1, can be used as a loop detection mechanism between VCS fabrics. Prior to Network OS v2.1.1, the topology could not have any loops; any local loop, even within a single cluster, caused broadcast storms and brought down the network, and a local loop in one cluster impacted the other cluster.

Two VCS clusters can be directly connected to each other, one at the access layer and the other at the aggregation layer.

Figure 3: VCS-to-VCS Cluster

One VCS cluster at the aggregation layer can be directly connected to up to 16 VCS clusters at the access layer; however, the access clusters must not be connected to each other. All links connecting the two clusters must be part of a single vlag, and all multicast control traffic and data traffic should be limited to 10 Gbps within the vlag. This limitation is due to the fact that there is no distribution of multicast traffic: multicast traffic is always sent out on a primary link.

Figure 4: VCS-to-VCS Cluster

Switch Platform Considerations

Brocade VCS fabrics can be designed with the Brocade VDX (16/24-port) and VDX (40/50/60-port) switches, the VDX 6710 (48 1 GbE copper ports plus 10 GbE SFP+ ports), the VDX (24 10 GbE SFP+ ports, 8 8 Gbps FC ports), and the VDX (60 10 GbE SFP+ ports, 16 8 Gbps FC ports). The Brocade VDX switch provides a single-ASIC-based architecture delivering constant latency of ~600 ns port-to-port. The Brocade VDX switch is multi-ASIC based, with latencies ranging from 600 ns to 1.8 us. When designing a Clos topology, using the higher-port-count switches in the core enables greater scalability. The Brocade VDX 6710 provides cost-effective VCS fabric connectivity for 1 GbE servers, while the Brocade VDX 6730 connects the VCS fabric to the Fibre Channel (FC) Storage Area Network (SAN).

In addition, it is important to consider the airflow direction of the switches. The Brocade VDX is available in both port-side exhaust and port-side intake configurations. Depending upon hot-aisle/cold-aisle considerations, you can choose the appropriate airflow.

Please note that the Brocade VDX 6710 does not require a Port On Demand (POD) or Fibre Channel over Ethernet (FCoE) license, and its connectivity into the VCS fabric must be through the 10 GbE ports. However, the 10 GbE ports may also be used to connect to servers; these server-facing ports will not support Data Center Bridging (DCB). Lastly, VCS fabric mode is supported only on Brocade VDX switches.

Oversubscription Ratios

Brocade VDX switches do not have dedicated uplinks. Any of the available 10 GbE ports can be used as uplinks to provide the desired oversubscription ratio. When designing a mesh network, the oversubscription ratio is directly dependent on the number of uplinks and downlinks. For example, a 120-port, non-blocking mesh can be designed with four 60-port Brocade VDX switches. Each Brocade VDX switch has 30 downstream ports and 30 upstream ports (10 connected to each of the three other Brocade VDX switches).
In this case, the oversubscription ratio is 1:1, as there are 30 upstream ports serving 30 downstream ports.

In the case of a two-tier Clos topology, there are two levels of oversubscription if the fabric uplinks are connected at the core layer. For North-South traffic, oversubscription is the product of the oversubscription of the core switches and the edge switches. For example, if a 60-port core switch has 20 uplinks and 40 downlinks, it has an oversubscription ratio of 2:1 (40:20). Furthermore, if the edge switch has the same oversubscription, then the fabric oversubscription for North-South traffic is 4:1. For East-West traffic, or if the uplinks are connected at the edge, the oversubscription depends only on the oversubscription of the edge switches; in other words, 2:1 in this example.

Scalability

When designing the fabric, it is important to consider the current scalability limits mentioned in the release notes of the software running on the switches. These scalability numbers will be improved in future software releases without requiring any hardware upgrades; refer to the release notes for the current values.

Implementation

Now that the topology, switch, oversubscription, and other variables have been decided, you can decide how to build this network. The Brocade VDX series of switches allows for the creation of arbitrary network topologies. Because of this flexibility, it is not possible to cover all architectural scenarios. Therefore, the scope of this document is to provide a baseline of architectural models, to give the reader a sense of what can be built. This document describes one such model: how to build a core-edge network using four 24-port edge switches and two 24-port switches in the core, with a 4:1 oversubscription ratio. It also discusses how to connect various types of servers and storage to this fabric and how to connect the fabric to upstream devices. Figure 5 shows the basic design of this topology, and Table 1 lists the equipment required for this design.
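The oversubscription arithmetic used in this section can be sanity-checked with a short calculation (an illustrative sketch; the port counts are taken from the examples above):

```python
# Illustrative check of the oversubscription ratios discussed above.
# Port counts are the example values from this section, not requirements.

def oversubscription(downlinks: int, uplinks: int) -> float:
    """Oversubscription ratio expressed as downlinks per uplink."""
    return downlinks / uplinks

edge = oversubscription(40, 20)   # 60-port edge switch: 40 down, 20 up -> 2:1
core = oversubscription(40, 20)   # core switch with the same split     -> 2:1

# North-South traffic crosses both tiers, so the ratios compound.
north_south = edge * core
print(f"edge {edge}:1, core {core}:1, North-South {north_south}:1")
```

The two-tier product (2 x 2 = 4, i.e. 4:1) matches the fabric oversubscription stated for North-South traffic, while East-West traffic sees only the edge ratio of 2:1.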
Figure 5: Topology for Reference Architecture

Hardware            Quantity     Comments
BR-VDX (-F or R)    6            Network OS v2.1.1
10G-SFPP-TWX        24 (6x4)     For ISLs
10G-SFPP-TWX        As needed    Depends on number of servers connected
10G-SFPP-SR         32 (4x8)     For the Brocade VDX side, assuming that the Brocade MLX already has connectivity and fiber cables
BR-VDX VCS-01       6
BR-VDX FCOE-01      6
iSCSI Initiators    As needed
iSCSI Targets       As needed
FCoE Initiators     As needed
FCoE Targets        As needed

Table 1: Equipment Required for a 6-Switch VCS Fabric Solution

The topology used in the above example is a sample topology. Brocade VCS fabrics can be built by mixing and matching any number of switches within the scalability limits of the software version running.

VCS NUTS AND BOLTS (WITHIN THE FABRIC)

Deciding Which Ports to Use for ISLs

Any port can be used to form an Inter-Switch Link (ISL) on the Brocade VDX series of switches, and no special configuration is required to create an ISL. Two port types are supported on the Brocade VDX switches: edge ports and fabric ports. An edge port is used for any non-VDX-to-VDX connectivity, whether that is a network switch external to the Brocade VDX when running in VCS mode or end-device connectivity. A fabric port is used to form an ISL to another Brocade VDX switch and can participate in a Brocade VDX ISL trunk group.

ISL Trunking

When determining which ports to use for ISL trunking, it is important to understand the concept of port groups on Brocade VDX switches, as shown in Figure 6. ISL trunks can only be formed between ports of the same port group. In addition, cables of the same length should be used to connect the ports forming the ISLs of a trunk.

Figure 6: Port Groups on the Brocade VDX

Figure 7: Port Groups on the Brocade VDX

Figure 8: Port Groups on the Brocade VDX

Figure 9: Port Groups on the Brocade VDX

Figure 10: Port Groups on the Brocade VDX

When building a fabric, it is very important to think in terms of ISL bandwidth. Brocade ISL trunks can be formed using up to 8 links, providing up to 80 Gbps of bandwidth. The throughput of an ISL trunk is, however, limited to 80 Mpps (million packets per second), which results in lower usable bandwidth for small packets.
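The effect of the 80 Mpps forwarding limit on small packets can be quantified with a short calculation (an illustrative sketch using the figures above; on-wire preamble and inter-frame gap overhead are ignored for simplicity):

```python
# Illustrative sketch: usable ISL-trunk bandwidth is the lower of the
# 80 Gbps line rate (8 x 10 GbE) and the 80 Mpps forwarding limit.

PPS_LIMIT = 80e6      # 80 million packets per second
TRUNK_GBPS = 80.0     # 8-link Brocade ISL trunk

def achievable_gbps(frame_bytes: int) -> float:
    """Throughput cap implied by the pps limit for a given frame size."""
    return min(TRUNK_GBPS, PPS_LIMIT * frame_bytes * 8 / 1e9)

print(f"64-byte frames:   {achievable_gbps(64):.2f} Gbps (pps-bound)")
print(f"1500-byte frames: {achievable_gbps(1500):.2f} Gbps (line-rate-bound)")
```

At 64-byte frames the trunk is pps-bound at roughly 41 Gbps; frames of about 125 bytes or more are needed before the full 80 Gbps becomes reachable.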

Once you have decided which ports to use to form ISLs, VCS needs to be enabled, and the RBridge ID must be defined. A VCS ID needs to be assigned to each switch that will become part of the fabric. The default VCS ID is 1, and the default RBridge ID is 1. Keep in mind that changing these parameters is disruptive, and a switch reboot is required. During the reboot process, if there is no predefined fabric configuration, the default fabric configuration is used upon switch bring-up. Once the switches are in VCS mode, connect the ISLs and the VCS fabric will form. Lastly, please also reference the section on Upgrade/Downgrade Considerations for VCS Fabric Functionality.

Brocade ISL Trunk

A Brocade Trunk is a hardware-based Link Aggregation Group (LAG). Brocade Trunks are formed dynamically between two adjacent switches. Trunk formation is not driven by the Link Aggregation Control Protocol (LACP); it is instead controlled by the same FC trunking protocol that controls trunk formation on FC switches running Brocade Fabric OS (FOS). When connecting links between two adjacent Brocade VDX 6720s, Brocade Trunks are enabled automatically, without requiring any additional configuration. Brocade Trunking operates at Layer 2 and is a vastly superior technology compared to the software-based hashing used in standard LAGs. Brocade Trunking evenly distributes traffic across the member links on a frame-by-frame basis, without the need for hashing algorithms, and it can coexist with standard IEEE 802.3ad LAGs.

Brocade Long Distance ISL

Normally, an ISL port with Priority Flow Control (PFC) is supported up to a distance of 200 m on eAnvil-based platforms. The long-distance-isl command extends that support up to a distance of 10 km, including 2 km and 5 km links. For a 10 km ISL link, no other ISL links are allowed on the same ASIC. For 5 km and 2 km ISL links, another short-distance ISL link can be configured.
A maximum of three PFC (Priority Flow Control) priorities can be supported on a long-distance ISL port. To configure a long-distance ISL port, use the long-distance-isl command in interface configuration mode. Please refer to the Network OS Command Reference and the Network OS Administrator's Guide, v2.1.1, for more information on long-distance (LD) ISLs.

ECMP Load Balancing

Configurable Load Balancing

Load balancing allows traffic distribution over static and dynamic LAGs and vlags. Although it is not common, some traffic patterns fail to distribute, leaving a single ECMP path carrying the entire traffic load. This causes underutilization of the ECMP paths and can result in loss of data traffic, even though one or more additional ECMP paths are available to offload the traffic. In Network OS v2.1.0, a new command is introduced to configure the ECMP load balancing parameters, in order to offer more flexibility to end users. This command allows users to select the parameters used to create the hashing scheme. Please refer to the Network OS Administrator's Guide, v2.1.1, for more information on the hashing scheme and usage guidelines.

Operational Considerations

The ECMP hash value is used to add more randomness in selecting the ECMP paths. The default value of the ECMP hash is a random number, set at boot time. The default ECMP load balance hashing scheme is based on source and destination IP address, MAC address, VLAN ID (VID), and TCP/UDP port. In the presence of a large number of traffic streams, load balancing can be achieved without any additional ECMP-related configuration.
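The idea behind field-based ECMP hashing can be sketched in a few lines (illustrative Python, not the switch's actual hash function): header fields are hashed together with a random seed, and the result selects one of the equal-cost paths. Because a flow's fields never change, all of its frames take the same path, which also explains why skewed traffic patterns can leave paths idle.

```python
# Illustrative sketch of field-based ECMP path selection. The real switch
# hash is implemented in hardware; this only demonstrates the concept.

import random
import zlib

ECMP_SEED = random.getrandbits(32)   # analogous to the boot-time hash value

def pick_path(fields: tuple, n_paths: int, seed: int = ECMP_SEED) -> int:
    """Map a flow's header fields to one of n equal-cost paths."""
    key = repr(fields).encode()
    return (zlib.crc32(key) ^ seed) % n_paths

# Inputs of the default scheme: src/dst IP, src/dst MAC, VID, L4 ports.
flow = ("10.0.0.1", "10.0.0.2",
        "00:05:1e:aa:bb:cc", "00:05:1e:dd:ee:ff", 10, (4321, 80))
path = pick_path(flow, n_paths=4)

# The same flow always hashes to the same path, so frames stay in order.
assert pick_path(flow, 4) == path
```

Changing any hashed field (for example, the TCP source port) may move the flow to a different path, which is why including more fields in the scheme generally spreads traffic more evenly.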

VDX6720_1(config-rbridge-id-2)# fabric ecmp load-balance ?
Possible completions:
  dst-mac-vid              Destination MAC address and VID based load balancing
  src-dst-ip               Source and Destination IP address based load balancing
  src-dst-ip-mac-vid       Source and Destination IP and MAC address and VID based load balancing
  src-dst-ip-mac-vid-port  Source and Destination IP, MAC address, VID and TCP/UDP port based load balancing
  src-dst-ip-port          Source and Destination IP and TCP/UDP port based load balancing
  src-dst-mac-vid          Source and Destination MAC address and VID based load balancing
  src-mac-vid              Source MAC address and VID based load balancing

VDX6720_2(config)# rbridge-id 2
VDX6720_2(config-rbridge-id-2)# fabric ecmp load-balance dst-mac-vid
VDX6720_2(config-rbridge-id-2)# fabric ecmp load-balance-hash-swap

VDX6720_2# show fabric ecmp load-balance
Fabric Ecmp Load Balance Information
Rbridge-Id : 2

BROCADE VDX LAYER 2 FEATURES (EXTERNAL TO FABRIC)

Active-Standby Connectivity

Active/Standby connectivity is used to provide redundancy at the link layer in a network. The most common protocol used at Layer 2 to provide Active/Standby connectivity is the Spanning Tree Protocol (STP). The use of STP was historically acceptable in traditional data center networks, but as server densities in data centers increase, there is a requirement to improve bandwidth availability by fully utilizing the available link capacity in the network.

Active-Active Connectivity: MCT and vlag

Multi-Chassis Trunking (MCT) is an industry-accepted solution for eliminating spanning tree in Layer 2 topologies. Link Aggregation Group (LAG)-based MCT is a special case of LAG, covered in IEEE 802.3ad, in which the LAG ends terminate on two separate chassis that are directly connected to each other.
Virtual LAG (vlag), a Brocade innovation, further extends the concept of LAG by allowing a LAG to form across up to four physical switches that may not be directly connected to each other (but that participate in the same VCS fabric).

vlag Enhancements

In Brocade Network OS v2.1, the vlag feature is enhanced to remove several usage restrictions imposed in previous Network OS releases. The highlights of the vlag enhancements in Network OS v2.1 are:

- The ability to specify a minimum number of links that must be active on a vlag before it can form an aggregation is now supported in VCS mode; this was previously supported only in standalone mode. The existing minimum-links Command Line Interface (CLI) command under the port channel is now available in VCS mode.
- The ability to validate the remote partner for a dynamic vlag has been added.
- The maximum number of RBridges participating in a vlag is increased to 4.
- The maximum number of ports participating in a vlag is 32, with up to 16 from a single RBridge.

vlag Configuration Guidelines with VMware ESX Server and Brocade VDX

VMware recommends configuring an EtherChannel only when the administrator chooses IP-based hashing for Network Interface Card (NIC) teaming. All other NIC teaming options should use a regular switch port configuration.

vlag Minimum Links

The minimum links feature allows the user to specify the minimum number of links that must be active on the vlag before the links can be aggregated. Until the minimum number of links is reached, the port channel shows protocol down. The default minimum number of links to form an aggregator is 1. The minimum links configuration allows the user to create vlags with a strict lower limit on active participating port members. As with all other port-channel configuration commands executed on a Brocade VCS fabric operating in Fabric Cluster mode, the user is required to configure the new minimum links value on each RBridge participating in the vlag.

The port channel is operationally down if the number of active links in the vlag falls below the minimum number of links required to keep the port channel logically up. Similarly, the port channel is operationally up when the number of active links in the vlag reaches the minimum number of links required to bring the port channel logically up. The events that trigger a change in the active link count are as follows: an active link going up or down, a link being added to or deleted from the vlag, or a participating RBridge coming up or going down. Please note that only ports with the same speed are aggregated.

LACP SID and Selection Logic

As part of establishing dynamic LACP LAGs (or port channels), LACP PDU frames are exchanged between the switch and the end devices. The exchange includes a unique identifier for both the switch and the device, which is used to determine the port channel with which a link should be associated.
In Network OS v2.1, additional support is added for selecting a unique and consistent local SID (LACP System ID), which is shared among all RBridges that are connected to the same vlag. This SID is unique for each switch in the VCS. During vlag split and rejoin events, when a member RBridge leaves or joins the cluster, the SID reselection logic is enhanced with a knob to control split recovery:

Split Recovery: In Network OS v2.1.0, with no-ignore-split, the SID is derived from the actual MAC address of one of the participating RBridges (the SID master). When a vlag is formed, the SID of the first RBridge configured into the vlag is used. If the RBridge that owns the SID of the vlag leaves the cluster, a new SID is selected from the MAC address of one of the other RBridges, and the vlag converges again.

No Split Recovery: In this mode, a VSID (Virtual SID), which is a unique identifier derived from the VCS ID (similar to Network OS v2.0), is used as the LACP SID for the vlag. Upon a split, all RBridges continue to advertise the same VSID as their LACP SID. No reconvergence is needed when nodes leave or (re)join the vlag in VCS mode.

LACP SID Assignment

In Network OS v2.1.0, the SID is no longer a virtual MAC address derived from the VCS ID; instead, it is the actual MAC address of a participating RBridge. This allows scaling of the maximum number of RBridges that can participate in a vlag, from 2 to 16.

LACP Remote Partner Validation

A vlag's member links on different RBridges may sometimes be connected to different standalone switches. In previous Network OS releases, there was no validation of the remote partner information for a dynamic vlag across the RBridges; the user had the responsibility of ensuring that the links in the dynamic vlag were all connected to the same remote device and key. In Network OS v2.1.0, remote partner validation support is added, which ensures that the first received partner information is sent to all member RBridges.
If the partner information learned on a vlag member link in another RBridge does not match, the aggregation fails, and the vlag is not formed.
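The two SID selection behaviours described above can be sketched as follows (a hypothetical helper for illustration, not Network OS code):

```python
# Illustrative sketch of LACP SID selection for a vlag, contrasting the
# split-recovery and no-split-recovery modes described in the text.

def lacp_sid(rbridge_macs: list, vcs_id: int, ignore_split: bool) -> str:
    """Return the LACP System ID the vlag would advertise."""
    if ignore_split:
        # No split recovery: a virtual SID derived from the VCS ID, so the
        # SID never changes when RBridges leave or rejoin the vlag.
        return f"virtual-sid-{vcs_id:02x}"
    # Split recovery: the actual MAC of the SID master (the first RBridge
    # configured into the vlag); if it leaves, a new SID is selected and
    # the vlag reconverges.
    return rbridge_macs[0]

macs = ["00:05:1e:01:00:00", "00:05:1e:02:00:00"]
print(lacp_sid(macs, 1, ignore_split=False))       # SID master's MAC
print(lacp_sid(macs[1:], 1, ignore_split=False))   # after master leaves: new SID
print(lacp_sid(macs, 1, ignore_split=True))        # stable virtual SID
```

The tradeoff is visible in the sketch: split recovery ties the SID to a real switch (so the remote partner sees a standard MAC-based system ID but must reconverge on a split), while the virtual SID stays constant at the cost of not detecting the split.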

Operational Consideration

To ensure that RBridges that join the fabric (RB3 in Figure 11) pick up the partner state, the remote SID state for vlags is included in the local database exchange. The following show command provides information on the SID master:

show lacp sys-id:
Port-channel Po 10 - System ID: 0x8000, d-0e-25 - SID Master: rbridge-id 1 (ignore-split Disabled)
Port-channel Po 20 - System ID: 0x8000,01-e - SID Master: N/A (ignore-split Enabled, Default)

Figure 11: SID Update on LACP Join

If both RBridges are connected to the same remote device, the remote SIDs should match.

Figure 12: LACP SID Assignment

If the RBridges are configured for the same vlag but are connected to different remote devices, the remote SID values do not match. Since the local SID state is forced to synchronize between the connecting RBridges, the side whose local SID is forced to change ends up with disabled links.

Edge Loop Detection (ELD)

Edge Loop Detection (ELD) is a Brocade Layer 2 loop detection mechanism that uses PDUs to detect loops in the network. The protocol is mainly intended for VCS-to-VCS loop prevention, but it can also be used in VCS-to-standalone networks. ELD is implemented to block redundant links between two VCS fabrics: when a device detects a loop by receiving packets that originated from itself, it disables the redundant links in that network. This prevents packet storms created by loops caused by misconfiguration. The primary purpose of ELD is to block a Layer 2 loop caused by misconfiguration; ELD should be used as a tool to detect loops in the network, not as a replacement for Layer 2 protocols such as xSTP, Metro Ring Protocol (MRP), and so on.

The basic ELD functionality is as follows:

- ELD is enabled on a specific port and VLAN combination. Each ELD-enabled interface on an RBridge sends an ELD PDU. This PDU contains information about the VCS ID and RBridge ID of the sending node, the VLAN associated with the interface on which ELD is enabled, the ELD port priority parameter, and so on.
- ELD can be configured on Access mode ports and Trunk mode ports, and PDUs follow the port configuration for tagging.
- The port priority parameter decides which port will be shut down when ELD detects a loop.
- These PDUs are transmitted from every ELD-enabled interface at the configured hello interval rate. When these PDUs are received back at the originating VCS, ELD detects a loop; this results in shutting down redundant interfaces based on the port priority parameter of the redundant links.
If the port priority values are the same, then the decision is based on the port number. For the destination MAC address of its PDUs, ELD uses the system MAC address of the primary RBridge of the VCS with the multicast (bit 8) and local (bit 7) bits turned on. For example, if the base MAC address of the primary RBridge of the VCS is 00e..., then the destination MAC address will be 03e... Please refer to the Network OS Administrator's Guide, v2.1.1, for more information on ELD.

Connecting the Fabric to Uplinks

Upstream Switches with MCT

In order to connect the VCS fabric to upstream switches running MCT, set up a normal LACP-based LAG on the MLX side (or equivalent core router), and a vlag will form automatically on the VDX side.

Upstream Switches Without MCT

When the upstream devices are not running MCT or cannot support MCT, there are two ways that fabric uplinks can be connected.

Option 1: Form an EtherChannel between the Brocade VDX and the upstream devices, as shown in Figure 13. In this case, standard IEEE 802.3ad-based port trunks are formed between the fabric and the upstream network devices. This provides link-level redundancy, but no node-level redundancy: if a core Brocade VDX fails, all of the flows through the MLX/RX (Brocade BigIron RX Series) connected to that device have no path to reach the fabric.

Option 2: Run STP on the upstream devices with Active/Standby connections, as shown in Figure 14. Since VCS tunnels all STP Bridge Protocol Data Units (BPDUs) through the fabric, this topology provides both node-level and link-level redundancy. An important caveat: if more than two upstream switches are connected to the fabric, Rapid STP (RSTP) will fall back to STP, since RSTP requires point-to-point connectivity, which the fabric does not provide.
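The ELD destination MAC derivation described earlier (setting the multicast and local bits of the primary RBridge's system MAC) can be sketched in a few lines. The base MAC below is hypothetical; the guide's own example only shows 00e... becoming 03e...:

```python
# Illustrative sketch of the ELD destination-MAC derivation: take the
# primary RBridge's system MAC and set the multicast (I/G) and
# locally-administered (U/L) bits of the first octet, i.e. 0x00 -> 0x03.

def eld_dest_mac(base_mac: str) -> str:
    octets = base_mac.split(":")
    first = int(octets[0], 16) | 0x01 | 0x02   # multicast bit + local bit
    return ":".join([f"{first:02x}"] + octets[1:])

# Hypothetical base MAC for illustration.
print(eld_dest_mac("00:e0:52:00:00:01"))   # -> 03:e0:52:00:00:01
```

Because the multicast and local bits live in the low two bits of the first octet, a first octet of 00 always becomes 03, which matches the 00e... to 03e... transformation in the text.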

Figure 13: Connecting Fabric Uplinks to Upstream Devices, Option 1

Figure 14: Connecting Fabric Uplinks to Upstream Devices, Option 2

Connecting the Servers to the Fabric

Rack Mount Servers

This section describes how rack mount servers, either physical or virtual, can be connected to the Brocade VCS fabric. When connecting to the fabric in DCB mode, it is important that flow control is enabled on the Converged Network Adapters (CNAs).

Blade Servers

When connecting blade servers to a VCS fabric, there are two connectivity options: embedded switches and passthrough modules. Please consult Brocade support to validate the supported solution; not all qualified solutions had been published as of the release of this document.

Manual Port Profiles

Automatic Migration of Port Profiles (AMPP) is a Brocade innovation that allows seamless movement of virtual machines within the Ethernet fabric. In today's networks, network parameters such as security or VLAN settings need to be physically provisioned before a machine is moved from one physical server to another within the same Layer 2 domain. Using its distributed intelligence, a VCS fabric is aware of the port profiles associated with the MAC address of a Virtual Machine (VM) and automatically applies those policies to the new access port to which the VM has moved. Since AMPP is MAC address based, it is hypervisor-agnostic and works with all third-party hypervisors. In Network OS v2.1.0, AMPP is supported over vlag.

Dynamic Port Profile with VM-Aware Network Automation

Server virtualization is used extensively in current data center environments, with VMware being the dominant player in data center virtualization. The server hosts (VMware ESX hosts) are connected directly to the physical switches through switch ports (or edge ports, in the case of VCS). Many of these server hosts implement an internal switch, called a vSwitch, which is created to provide internal connectivity to the VMs, or a distributed switch that spans multiple server hosts.
A new layer called the Virtual Access Layer (VAL) virtualizes connectivity between the physical switch and the virtual machines via the vSwitch or dvSwitch. The VAL is not visible to the physical switch; thus, these VMs and other virtual assets remain hidden from the network administrator. Brocade VM-aware network automation provides the ability to discover these virtual assets. With VM-aware network automation, a Brocade VDX 67XX switch can dynamically discover virtual assets and offer unprecedented visibility into dynamically created virtual assets. VM-aware network automation also allows network administrators to view these virtual assets using the Network OS CLI. VM-aware network automation is supported on all Brocade VDX platforms and can be configured in both VCS and standalone mode. This feature is currently supported with VMware vCenter 4.0 and 4.1.

Data Center Network and vCenter
In current data center environments, vCenter is primarily used to manage VMware ESX hosts. VMs are instantiated using the vCenter user interface. In addition to creating these VMs, the server administrator also associates them with Virtual Switches (VSs), Port Groups (PGs), and Distributed Virtual Port Groups (DVPGs). VMs and their network properties are primarily configured and managed through vCenter/vSphere. Many VM properties, such as MAC addresses, are automatically configured by vCenter, while other properties, such as VLAN and bandwidth, are assigned by vCenter through the VAL.

Network OS Virtual Asset Discovery Process
The Brocade switch that is connected to the hosts/VMs needs to be aware of network policies in order to allow or disallow traffic. In Network OS v2.1.0, the discovery process starts upon boot-up, when the switch is preconfigured with the relevant vCenters that exist in its environment. The discovery process entails making appropriate queries to the vCenter, in the form of Simple Object Access Protocol (SOAP) requests/responses sent to the vCenter.
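To make the SOAP exchange concrete, the sketch below builds the kind of envelope a discovery client could send to vCenter. This is a minimal illustration only: the element names (`Login`, `userName`, `password`) and the overall payload shape are assumptions for the example, not the exact messages Network OS sends.

```python
# Minimal sketch of a SOAP request of the kind used for vCenter
# discovery. Element names and payload shape are illustrative
# assumptions, not the actual Network OS wire format.

def build_soap_envelope(body: str) -> str:
    """Wrap a request body in a bare SOAP 1.1 envelope."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soapenv:Envelope '
        'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        f'<soapenv:Body>{body}</soapenv:Body>'
        '</soapenv:Envelope>'
    )

def login_request(username: str, password: str) -> str:
    """Illustrative login call; authentication precedes discovery."""
    return build_soap_envelope(
        f'<Login><userName>{username}</userName>'
        f'<password>{password}</password></Login>'
    )

# A client would POST this envelope to the configured vCenter URL.
envelope = login_request("admin", "secret")
```

In practice the switch would POST such envelopes over HTTPS to the vCenter URL configured via the CLI, first to authenticate and then to query virtual-asset properties.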

Figure 15: Virtual Asset Discovery Process

VM-Aware Network Automation MAC Address Scaling
In Network OS v2.1.1, the VM-aware network automation feature is enhanced to support 8000 VM MAC addresses. VM-aware network automation is now capable of detecting up to 8000 VM MACs and supporting VM mobility at this scale within a VCS fabric.

Authentication
In Network OS v2.1.1, before any discovery transactions are initiated, the switch first authenticates with the vCenter. In order to authenticate with a specific vCenter, the following vCenter properties are configured on the switch: URL, login, and password. A new CLI command is added to support this configuration.

Port Profile Management
After discovery, Network OS enters the port profile creation phase, in which it creates port profiles on the switch based on the discovered DVPGs and port groups. This operation creates the port profiles in the running-config of the switch. Additionally, Network OS automatically creates the interfaces/VLANs that are configured in the port profiles, which also end up in the running-config. The AMPP mechanism built into Brocade switches may provide a faster way to correlate the MAC address of a VM with the port it is associated with. Network OS continues to use this mechanism to learn the MAC address and associate the port profile with the port. The vCenter automation process simply enhances this mechanism by creating port profiles automatically and preassociating the MAC addresses before the VM is powered up.

Usage Restriction and Limits
Network OS creates port profiles automatically based on discovered DVPGs or port groups. Port profiles created in this way contain the prefix auto- in their names. The user is urged not to modify the policies within these port profiles.
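The AMPP behavior described above (the fabric binding a port profile to a VM's MAC address, then applying it wherever that MAC appears) can be modeled as a simple lookup. The class and method names below are purely a toy model of the idea; Network OS implements this inside the fabric, not through any such API.

```python
# Toy model of AMPP: the fabric keeps a MAC-to-port-profile map, and
# when a VM's MAC is learned on a new access port, the bound profile
# follows it. Class and method names are hypothetical illustrations.

class AmppFabric:
    def __init__(self):
        self.profile_by_mac = {}   # VM MAC address -> port-profile name
        self.port_policy = {}      # access port -> applied profile

    def bind(self, mac, profile):
        """Associate a VM MAC address with a port profile."""
        self.profile_by_mac[mac] = profile

    def mac_learned(self, port, mac):
        """On MAC learn (e.g. after a VM move), apply the bound profile."""
        profile = self.profile_by_mac.get(mac)
        if profile is not None:
            self.port_policy[port] = profile
        return profile

fabric = AmppFabric()
fabric.bind("00:50:56:81:2e:d5", "pp-web")
# VM moves to a new access port: its policy follows the MAC.
fabric.mac_learned("rb2/0/7", "00:50:56:81:2e:d5")
```

The vCenter integration adds the `bind` step automatically from discovered port groups, so the profile exists before the VM's MAC is ever seen on the wire.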
If the user modifies these port profiles, the discovery mechanism may at some point overwrite the changes the user made. The user is also urged never to create port profiles whose names begin with auto-, whether via the CLI or from a file replay. The maximum number of VLANs that can be created on the switch is 3583, and port profiles are limited to 256 per switch. A vCenter configuration that exceeds these limits causes Network OS to generate an error.

Third-Party Software
To support the integration of vCenter and Network OS, the following third-party software is added in Network OS v2.1.0: the open source PHP module, Net-cdp-0.09, and Libnet.

User Experience

sw0# show vnetwork hosts
Host                       VMNic Name  Associated MAC     (d)vswitch  Switch-Iface
========================== =========== ================== =========== ============
esx englab.brocade.com     vmnic2      5c:f3:fc:0c:d9:f4  dvswitch-1  0/1
esx englab.brocade.com     vmnic2      5c:f3:fc:0c:d9:f6  dvswitch-2  0/2

sw0# show vnetwork vms
VirtualMachine  Associated MAC   IP Addr  Host
=============== ================ ======== ==========================
RH              :50:56:81:2e:d            esx englab.brocade.com
RH              :50:56:81:08:3c  -        esx englab.brocade.com

Please refer to the Network OS Administrator's Guide, v2.1.1, for more information on VM-Aware Network Automation.
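Operators often collect tabular CLI output like the `show vnetwork vms` listing above and post-process it in scripts. The helper below sketches one way to do that; the column layout is assumed from the sample output, and the VM names, MACs, and hostnames in the sample are hypothetical values, not data from this guide.

```python
# Sketch of parsing "show vnetwork vms" text (as captured over SSH,
# for example) into records. Column layout is assumed from the sample
# output; sample values below are invented for illustration.

def parse_vnetwork_vms(output):
    """Parse 'show vnetwork vms' text into dicts, skipping headers."""
    records = []
    for line in output.splitlines():
        line = line.strip()
        # Skip the title row, the '====' separator, and blank lines.
        if not line or line.startswith(("VirtualMachine", "=")):
            continue
        vm, mac, ip, host = line.split()
        records.append({"vm": vm, "mac": mac, "ip": ip, "host": host})
    return records

sample = """\
VirtualMachine  Associated MAC     IP Addr      Host
=============== ================== ============ =======================
RH62-1          00:50:56:81:2e:d5  10.17.1.21   esx1.englab.example.com
RH62-2          00:50:56:81:08:3c  -            esx1.englab.example.com
"""
vms = parse_vnetwork_vms(sample)
```

A script built this way could, for instance, alert when a discovered VM MAC count approaches the 8000-MAC scaling limit noted earlier.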

BUILDING A 2-SWITCH TOR VCS FABRIC
Traditionally, at the access layer in the data center, servers have been configured with standby links to Top of Rack (ToR) switches running STP or other link-level protocols to provide resiliency. As server virtualization increases the density of servers in a rack, the demand for active/active server connectivity, link-level redundancy, and multichassis EtherChannel for node-level redundancy is increasing. A 2-switch ToR VCS fabric satisfies each of these requirements with minimal configuration and setup.

Design Considerations

Topology
The variables that affect a 2-switch ToR design are oversubscription, the number of ports required for server/storage connections, and bandwidth (1/10 GbE). Latency is not a consideration here, because under normal operating conditions only a single switch is traversed, as opposed to multiple switches.

Oversubscription in a 2-switch topology is a simple ratio of uplinks to downlinks. In a ToR built from two 60-port switches with 4 ISL links, 112 usable ports remain. Of these ports, if, for example, 40 are used for uplinks and 80 for downlinks, oversubscription is 2:1. However, if the servers are dual-homed in an active/active topology, only 40 servers will be connected, with 1:1 oversubscription.

Licensing
VCS operates in a 2-switch topology without the need to purchase additional software licenses; however, if FCoE support is needed, a separate FCoE license must be purchased. For VCS configurations that exceed two switches, VCS licenses are required to form a VCS fabric, and if FCoE is required, an FCoE license is needed in addition to the VCS license.

Implementation
Figure 16 shows a sample topology using two 60-port Brocade VDX 6720 switches. This topology provides 2.5:1 oversubscription and 80 server ports for active/active connectivity to a rack of 50 servers and/or storage elements. Table 2 shows the Bill of Materials (BOM).
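The port budget above can be checked with a few lines of arithmetic. The sketch below assumes each ISL consumes one port on each of the two switches it joins, and uses the 80-downlink/32-uplink split of the sample implementation; it is a worked example, not a Brocade sizing tool.

```python
# Worked example of the 2-switch ToR port budget: two 60-port
# switches, four ISLs between them, then an uplink/downlink split.
# Illustrative arithmetic only.

def usable_ports(switches, ports_per_switch, isl_links):
    """Each ISL consumes one port on each of the two switches it joins."""
    return switches * ports_per_switch - 2 * isl_links

def oversubscription(downlinks, uplinks):
    """Downlink-to-uplink bandwidth ratio, assuming equal port speeds."""
    return downlinks / uplinks

ports = usable_ports(switches=2, ports_per_switch=60, isl_links=4)  # 112
ratio = oversubscription(downlinks=80, uplinks=32)                  # 2.5
```

With 80 downlinks dual-homed across the pair, 40 servers get active/active connectivity; the same arithmetic applies to any other split of the 112 usable ports.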
Hardware            Quantity    Comments
BR-VDX (-F or R)    2           Network OS v2.1.1
10G-SFPP-TWX                    For ISLs
10G-SFPP-SR         10          For uplinks on the VDX side
BR-VDX FCOE-01      2           N/A
iSCSI Initiators    As needed
iSCSI Targets       As needed
FCoE Initiators     As needed
FCoE Targets        As needed

Table 2: Equipment Required for a 2-Switch ToR Solution

Figure 16: 2-Switch ToR VCS

23 BUILDING A 2-SWITCH AGGREGATION LAYER USING VCS At the aggregation layer in the data center, ToR switches have traditionally had standby links to ToR switches running STP, or other link level protocols, to provide resiliency. As server virtualization increases the density of servers in a rack, the demand for bandwidth from the access switches must increase, to reduce oversubscription. This in turn drives the demand for active/active uplink connectivity from ToR switches. This chapter discusses the best practices for setting up a two-node VCS Fabric for aggregation, which expands the Layer 2 domain without the need for spanning tree. Design Considerations Topology The variables that affect a 2-switch design are oversubscription and latency. Oversubscription is directly dependent on the number of uplinks, downlinks, and ISLs. Depending upon the application, latency can be a deciding factor in the topology design. Oversubscription, in a 2-switch topology, is a simple ratio of uplinks to downlinks. In a 2 60 port switch Fabric with 4 ISL links, 112 usable ports remain. Any of these 112 ports can be used as either uplinks or downlinks to give the desired oversubscription ratio. Licensing VCS operates in a 2-switch topology without the need to purchase additional software licenses. However, if FCoE support is needed, a separate FCoE license must be purchased. For VCS configurations that exceed 2 switches, VCS licenses are required to form a VCS fabric. In addition, if FCoE is required, an FCoE license in addition to a VCS license is required. Implementation Figure 17 shows a sample topology using a 2 60 switch Brocade VDX This topology provides a 2.5:1 oversubscription and 80 downlink ports to provide active/active connectivity to 20 Brocade FCX 648 switches with 4 10G uplinks each. Each of these Brocade FCX switches have 48x1G downlink ports, providing 960 (48 20) 1G server ports. Table 3 shows the BOM. 
Hardware            Quantity    Comments
BR-VDX (-F or R)    2           Network OS v2.1.1
10G-SFPP-TWX                    For ISLs
10G-SFPP-TWX                    For Brocade FCX uplinks
10G-SFPP-SR         10          For uplinks on the VDX side
FCX-648 E or I      20          For 1 GbE server connectivity

Table 3: Equipment List for a 2-Switch Aggregation Solution

Figure 17: 2-Switch VCS Fabric in Aggregation

Building the Fabric
Please refer to the VCS Nuts and Bolts (Within the Fabric) section. The same best practices apply to a 2-switch ToR solution.

24-SWITCH VCS REFERENCE ARCHITECTURE

Figure 18: 24-Switch Brocade VCS Reference Architecture with 10 GbE Server Access

APPENDIX A: VCS USE CASES
Brocade VCS Fabric technology can be used in multiple places in the network. Traditionally, data centers are built using three-tier architectures, with access layers providing server connectivity, aggregation layers aggregating the access layer devices, and the data center core layer acting as an interface between the campus core and the data center. This appendix describes the value that VCS Fabric technology delivers in the various tiers of the data center.

VCS Fabric Technology in the Access Layer
Figure 19 shows a typical deployment of VCS Fabric technology in the access layer. The most common deployment model in this layer is the 2-switch ToR, as discussed previously. In the access layer, VCS Fabric technology can be inserted into existing architectures, as it fully interoperates with existing LAN protocols, services, and architectures. In addition, VCS Fabric technology delivers value by allowing active-active server connectivity to the network without additional management overhead. At the access layer, VCS Fabric technology allows 1 GbE and 10 GbE server connectivity and flexible oversubscription ratios, and it is completely auto-forming, with zero configuration. Servers see the VCS ToR as a single switch and can fully utilize the provisioned network capacity, thereby doubling the bandwidth of network access.

Figure 19: VCS Fabric Technology in the Access Layer

VCS Fabric Technology in the Collapsed Access/Aggregation Layer
Traditionally, Layer 2 (L2) networks have been broadcast-heavy, which forced data center designers to build smaller L2 domains to limit both broadcast domains and failure domains. However, in order to seamlessly move virtual machines in the data center, it is essential that the VMs move within the same Layer 2 domain. In traditional architectures, therefore, VM mobility is severely limited by these small L2 domains. Brocade has taken a leadership position in the market by introducing Transparent Interconnection of Lots of Links (TRILL)-based VCS Fabric technology, which eliminates these issues in the data center. Figure 20 shows how a scaled-out, self-aggregating data center edge layer can be built using VCS Fabric technology. This architecture allows customers to build resilient and efficient networks by eliminating STP, and it drastically reduces network management overhead by allowing the network operator to manage the whole network as a single logical switch.

Figure 20: VCS Fabric Technology in the Access/Aggregation Layer

VCS Fabric Technology in a Virtualized Environment
Today, when a VM moves within a data center, the server administrator needs to open a service request with the network administrator to provision the machine's policy on the new network node to which the machine is moved. This policy may include, but is not limited to, VLANs, Quality of Service (QoS), and security settings for the machine. VCS Fabric technology eliminates this provisioning step and allows the server administrator to seamlessly move VMs within a data center by automatically distributing and binding policies in the network at a per-VM level, using the Automatic Migration of Port Profiles (AMPP) feature. AMPP enforces VM-level policies in a consistent fashion across the fabric and is completely hypervisor-agnostic. Figure 21 shows the behavior of AMPP in a 10-node VCS fabric.

Figure 21: VCS Fabric Technology in a Virtualized Environment

VCS Fabric Technology in Converged Network Environments
VCS Fabric technology has been designed and built from the ground up to support shared storage access for thousands of applications or workloads. VCS Fabric technology provides lossless Ethernet using DCB and TRILL, which allows it to deliver multi-hop, multipath, highly reliable and resilient FCoE and Internet Small Computer Systems Interface (iSCSI) storage connectivity. Figure 22 shows a sample configuration with iSCSI and FCoE storage connected to the fabric.

Figure 22: VCS Fabric Technology in a Converged Network

Brocade VDX 6710 Deployment Scenarios
In this deployment scenario, the Brocade VDX 6710 switches (VCS fabric-enabled) extend the benefits of the fabric natively to 1 GbE servers.

Figure 23: Brocade VDX 6710 Deployment Scenario

GLOSSARY
BPDU: Bridge Protocol Data Unit.
ELD: Edge Loop Detection protocol. Used on the edge ports of a VCS fabric to detect and remove loops.
MAC: Media Access Control. In Ethernet, it refers to the 48-bit hardware address.
PDU: Protocol Data Unit.
RBridge: Routing Bridge. A switch that runs the TRILL (Transparent Interconnection of Lots of Links) protocol.
RSTP: Rapid Spanning Tree Protocol. An IEEE standard for building a loop-free LAN (Local Area Network), which allows ports to transition rapidly to the forwarding state.
VCS: Virtual Cluster Switching. Brocade VCS Fabric technology is a method of grouping a fabric of switches together to form a single virtual switch that can provide a transparent bridging function.
vLAG: Virtual Link Aggregation Group. A LAG created using multiple switches in a VCS fabric. vLAG provides better high availability and faster protection switching than a normal LAG.
VLAN: Virtual LAN. Subdividing a LAN into logical VLANs allows separation of traffic from different sources within the LAN.
xSTP: An abbreviation used in this document to indicate all types of Spanning Tree Protocol, for instance, STP, RSTP, MSTP (Multiple STP), PVST+ (Per-VLAN Spanning Tree Plus), and RPVST+ (Rapid PVST+).

RELATED DOCUMENTS
For more information about Brocade VCS Fabric technology, please see the Brocade VCS Fabric Technical Architecture:
For the Brocade Network Operating System Admin Guide and Network OS Command Reference:
The Network OS Release Notes can be found at:
For more information about the Brocade VDX Series of switches, please see the product data sheets:
Brocade VDX 6710 Data Center Switch:
Brocade VDX 6720 Data Center Switch:
Brocade VDX 6730 Data Center Switch:


More information

Nutanix Tech Note. VMware vsphere Networking on Nutanix

Nutanix Tech Note. VMware vsphere Networking on Nutanix Nutanix Tech Note VMware vsphere Networking on Nutanix Nutanix Virtual Computing Platform is engineered from the ground up for virtualization and cloud environments. This Tech Note describes vsphere networking

More information

Objectives. The Role of Redundancy in a Switched Network. Layer 2 Loops. Broadcast Storms. More problems with Layer 2 loops

Objectives. The Role of Redundancy in a Switched Network. Layer 2 Loops. Broadcast Storms. More problems with Layer 2 loops ITE I Chapter 6 2006 Cisco Systems, Inc. All rights reserved. Cisco Public 1 Objectives Implement Spanning Tree Protocols LAN Switching and Wireless Chapter 5 Explain the role of redundancy in a converged

More information

16-PORT POWER OVER ETHERNET WEB SMART SWITCH

16-PORT POWER OVER ETHERNET WEB SMART SWITCH 16-PORT POWER OVER ETHERNET WEB SMART SWITCH User s Manual (DN-95312) - 0 - Content Web Smart Switch Configure login ---------------------------------- 2 Administrator Authentication Configuration ---------------------------------------------

More information

Juniper Networks QFabric: Scaling for the Modern Data Center

Juniper Networks QFabric: Scaling for the Modern Data Center Juniper Networks QFabric: Scaling for the Modern Data Center Executive Summary The modern data center has undergone a series of changes that have significantly impacted business operations. Applications

More information

Impact of Virtualization on Cloud Networking Arista Networks Whitepaper

Impact of Virtualization on Cloud Networking Arista Networks Whitepaper Overview: Virtualization takes IT by storm The adoption of virtualization in datacenters creates the need for a new class of networks designed to support elasticity of resource allocation, increasingly

More information

Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family

Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family Intel Ethernet Switch Load Balancing System Design Using Advanced Features in Intel Ethernet Switch Family White Paper June, 2008 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL

More information

Chapter 4: Spanning Tree Design Guidelines for Cisco NX-OS Software and Virtual PortChannels

Chapter 4: Spanning Tree Design Guidelines for Cisco NX-OS Software and Virtual PortChannels Design Guide Chapter 4: Spanning Tree Design Guidelines for Cisco NX-OS Software and Virtual PortChannels 2012 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.

More information

A Platform Built for Server Virtualization: Cisco Unified Computing System

A Platform Built for Server Virtualization: Cisco Unified Computing System A Platform Built for Server Virtualization: Cisco Unified Computing System What You Will Learn This document discusses how the core features of the Cisco Unified Computing System contribute to the ease

More information

BUILDING A NEXT-GENERATION DATA CENTER

BUILDING A NEXT-GENERATION DATA CENTER BUILDING A NEXT-GENERATION DATA CENTER Data center networking has changed significantly during the last few years with the introduction of 10 Gigabit Ethernet (10GE), unified fabrics, highspeed non-blocking

More information

HP Virtual Connect Ethernet Cookbook: Single and Multi Enclosure Domain (Stacked) Scenarios

HP Virtual Connect Ethernet Cookbook: Single and Multi Enclosure Domain (Stacked) Scenarios HP Virtual Connect Ethernet Cookbook: Single and Multi Enclosure Domain (Stacked) Scenarios Part number 603028-003 Third edition August 2010 Copyright 2009,2010 Hewlett-Packard Development Company, L.P.

More information

Simplifying the Data Center Network to Reduce Complexity and Improve Performance

Simplifying the Data Center Network to Reduce Complexity and Improve Performance SOLUTION BRIEF Juniper Networks 3-2-1 Data Center Network Simplifying the Data Center Network to Reduce Complexity and Improve Performance Challenge Escalating traffic levels, increasing numbers of applications,

More information

Data Center Design IP Network Infrastructure

Data Center Design IP Network Infrastructure Cisco Validated Design October 8, 2009 Contents Introduction 2 Audience 3 Overview 3 Data Center Network Topologies 3 Hierarchical Network Design Reference Model 4 Correlation to Physical Site Design 5

More information

CCNA DATA CENTER BOOT CAMP: DCICN + DCICT

CCNA DATA CENTER BOOT CAMP: DCICN + DCICT CCNA DATA CENTER BOOT CAMP: DCICN + DCICT COURSE OVERVIEW: In this accelerated course you will be introduced to the three primary technologies that are used in the Cisco data center. You will become familiar

More information

6/8/2011. Document ID: 12023. Contents. Introduction. Prerequisites. Requirements. Components Used. Conventions. Introduction

6/8/2011. Document ID: 12023. Contents. Introduction. Prerequisites. Requirements. Components Used. Conventions. Introduction Page 1 of 9 Products & Services Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches Document ID: 12023 Contents Introduction Prerequisites Requirements Components Used Conventions

More information

Fibre Channel over Ethernet in the Data Center: An Introduction

Fibre Channel over Ethernet in the Data Center: An Introduction Fibre Channel over Ethernet in the Data Center: An Introduction Introduction Fibre Channel over Ethernet (FCoE) is a newly proposed standard that is being developed by INCITS T11. The FCoE protocol specification

More information

The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer

The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer The Future of Computing Cisco Unified Computing System Markus Kunstmann Channels Systems Engineer 2009 Cisco Systems, Inc. All rights reserved. Data Centers Are under Increasing Pressure Collaboration

More information

SummitStack in the Data Center

SummitStack in the Data Center SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution Extreme Networks offers a highly virtualized, centrally manageable

More information

Data Center Evolution without Revolution

Data Center Evolution without Revolution WHITE PAPER www.brocade.com DATA CENTER Data Center Evolution without Revolution Brocade networking solutions help organizations transition smoothly to a world where information and applications can reside

More information

Virtual PortChannel Quick Configuration Guide

Virtual PortChannel Quick Configuration Guide Virtual PortChannel Quick Configuration Guide Overview A virtual PortChannel (vpc) allows links that are physically connected to two different Cisco Nexus 5000 Series devices to appear as a single PortChannel

More information

Using MLAG in Dell Networks

Using MLAG in Dell Networks dd version Using MLAG in Dell Networks A deployment guide for Dell Networking switches (version ) Dell Engineering March 04 January 04 A Dell Deployment and Configuration Guide Revisions Date Description

More information

Brocade VCS Fabrics: The Foundation for Software-Defined Networks

Brocade VCS Fabrics: The Foundation for Software-Defined Networks WHITE PAPER DATA CENTER Brocade VCS Fabrics: The Foundation for Software-Defined Networks Software-Defined Networking (SDN) offers significant new opportunities to centralize management and implement network

More information

IP SAN Best Practices

IP SAN Best Practices IP SAN Best Practices A Dell Technical White Paper PowerVault MD3200i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.

More information

Layer 3 Network + Dedicated Internet Connectivity

Layer 3 Network + Dedicated Internet Connectivity Layer 3 Network + Dedicated Internet Connectivity Client: One of the IT Departments in a Northern State Customer's requirement: The customer wanted to establish CAN connectivity (Campus Area Network) for

More information

An Introduction to Brocade VCS Fabric Technology

An Introduction to Brocade VCS Fabric Technology WHITE PAPER www.brocade.com DATA CENTER An Introduction to Brocade VCS Fabric Technology Brocade VCS Fabric technology, which provides advanced Ethernet fabric capabilities, enables you to transition gracefully

More information

TRILL for Data Center Networks

TRILL for Data Center Networks 24.05.13 TRILL for Data Center Networks www.huawei.com enterprise.huawei.com Davis Wu Deputy Director of Switzerland Enterprise Group E-mail: wuhuajun@huawei.com Tel: 0041-798658759 Agenda 1 TRILL Overview

More information

Storage Area Network Design Overview Using Brocade DCX 8510. Backbone Switches

Storage Area Network Design Overview Using Brocade DCX 8510. Backbone Switches Storage Area Network Design Overview Using Brocade DCX 8510 Backbone Switches East Carolina University Paola Stone Martinez April, 2015 Abstract The design of a Storage Area Networks is a very complex

More information

Simplify Your Data Center Network to Improve Performance and Decrease Costs

Simplify Your Data Center Network to Improve Performance and Decrease Costs Simplify Your Data Center Network to Improve Performance and Decrease Costs Summary Traditional data center networks are struggling to keep up with new computing requirements. Network architects should

More information

FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology. August 2011

FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology. August 2011 FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology August 2011 Page2 Executive Summary HP commissioned Network Test to assess the performance of Intelligent Resilient

More information

Switching in an Enterprise Network

Switching in an Enterprise Network Switching in an Enterprise Network Introducing Routing and Switching in the Enterprise Chapter 3 Version 4.0 2006 Cisco Systems, Inc. All rights reserved. Cisco Public 1 Objectives Compare the types of

More information

Juniper / Cisco Interoperability Tests. August 2014

Juniper / Cisco Interoperability Tests. August 2014 Juniper / Cisco Interoperability Tests August 2014 Executive Summary Juniper Networks commissioned Network Test to assess interoperability, with an emphasis on data center connectivity, between Juniper

More information

BLADE PVST+ Spanning Tree and Interoperability with Cisco

BLADE PVST+ Spanning Tree and Interoperability with Cisco BLADE PVST+ Spanning Tree and Interoperability with Cisco Technical Brief Industry-standard PVST+ Spanning Tree Protocol with Cisco interoperability Introduction...1 Spanning Tree Protocol (IEEE 802.1d)...1

More information

Network Virtualization for Large-Scale Data Centers

Network Virtualization for Large-Scale Data Centers Network Virtualization for Large-Scale Data Centers Tatsuhiro Ando Osamu Shimokuni Katsuhito Asano The growing use of cloud technology by large enterprises to support their business continuity planning

More information

The evolution of Data Center networking technologies

The evolution of Data Center networking technologies 0 First International Conference on Data Compression, Communications and Processing The evolution of Data Center networking technologies Antonio Scarfò Maticmind SpA Naples, Italy ascarfo@maticmind.it

More information

hp ProLiant network adapter teaming

hp ProLiant network adapter teaming hp networking june 2003 hp ProLiant network adapter teaming technical white paper table of contents introduction 2 executive summary 2 overview of network addressing 2 layer 2 vs. layer 3 addressing 2

More information

Network Virtualization

Network Virtualization Network Virtualization Petr Grygárek 1 Network Virtualization Implementation of separate logical network environments (Virtual Networks, VNs) for multiple groups on shared physical infrastructure Total

More information

Course. Contact us at: Information 1/8. Introducing Cisco Data Center Networking No. Days: 4. Course Code

Course. Contact us at: Information 1/8. Introducing Cisco Data Center Networking No. Days: 4. Course Code Information Price Course Code Free Course Introducing Cisco Data Center Networking No. Days: 4 No. Courses: 2 Introducing Cisco Data Center Technologies No. Days: 5 Contact us at: Telephone: 888-305-1251

More information

WHITE PAPER Ethernet Fabric for the Cloud: Setting the Stage for the Next-Generation Datacenter

WHITE PAPER Ethernet Fabric for the Cloud: Setting the Stage for the Next-Generation Datacenter WHITE PAPER Ethernet Fabric for the Cloud: Setting the Stage for the Next-Generation Datacenter Sponsored by: Brocade Communications Systems Inc. Lucinda Borovick March 2011 Global Headquarters: 5 Speen

More information

CloudEngine 6800 Series Data Center Switches

CloudEngine 6800 Series Data Center Switches Series Data Center Switches Series Data Center Switches Product Overview Huawei CloudEngine series (CE for short) switches are nextgeneration 10G Ethernet switches designed for data centers and highend

More information

An Oracle White Paper October 2013. How to Connect Oracle Exadata to 10 G Networks Using Oracle s Ethernet Switches

An Oracle White Paper October 2013. How to Connect Oracle Exadata to 10 G Networks Using Oracle s Ethernet Switches An Oracle White Paper October 2013 How to Connect Oracle Exadata to 10 G Networks Using Oracle s Ethernet Switches Introduction... 1 Exadata Database Machine X3-2 Full Rack Configuration... 1 Multirack

More information

Where IT perceptions are reality. Test Report. OCe14000 Performance. Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine

Where IT perceptions are reality. Test Report. OCe14000 Performance. Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine Where IT perceptions are reality Test Report OCe14000 Performance Featuring Emulex OCe14102 Network Adapters Emulex XE100 Offload Engine Document # TEST2014001 v9, October 2014 Copyright 2014 IT Brand

More information

SDN CENTRALIZED NETWORK COMMAND AND CONTROL

SDN CENTRALIZED NETWORK COMMAND AND CONTROL SDN CENTRALIZED NETWORK COMMAND AND CONTROL Software Defined Networking (SDN) is a hot topic in the data center and cloud community. The geniuses over at IDC predict a $2 billion market by 2016

More information

TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems

TRILL for Service Provider Data Center and IXP. Francois Tallet, Cisco Systems for Service Provider Data Center and IXP Francois Tallet, Cisco Systems 1 : Transparent Interconnection of Lots of Links overview How works designs Conclusion 2 IETF standard for Layer 2 multipathing Driven

More information

- EtherChannel - Port Aggregation

- EtherChannel - Port Aggregation 1 Port Aggregation - EtherChannel - A network will often span across multiple switches. Trunk ports are usually used to connect switches together. There are two issues with using only a single physical

More information

Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results. May 1, 2009

Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results. May 1, 2009 Juniper Networks EX Series/ Cisco Catalyst Interoperability Test Results May 1, 2009 Executive Summary Juniper Networks commissioned Network Test to assess interoperability between its EX4200 and EX8208

More information

Addressing Scaling Challenges in the Data Center

Addressing Scaling Challenges in the Data Center Addressing Scaling Challenges in the Data Center DELL PowerConnect J-Series Virtual Chassis Solution A Dell Technical White Paper Dell Juniper THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY

More information

How the Port Density of a Data Center LAN Switch Impacts Scalability and Total Cost of Ownership

How the Port Density of a Data Center LAN Switch Impacts Scalability and Total Cost of Ownership How the Port Density of a Data Center LAN Switch Impacts Scalability and Total Cost of Ownership June 4, 2012 Introduction As data centers are forced to accommodate rapidly growing volumes of information,

More information

Data Center Multi-Tier Model Design

Data Center Multi-Tier Model Design 2 CHAPTER This chapter provides details about the multi-tier design that Cisco recommends for data centers. The multi-tier design model supports many web service architectures, including those based on

More information

Technology Overview for Ethernet Switching Fabric

Technology Overview for Ethernet Switching Fabric G00249268 Technology Overview for Ethernet Switching Fabric Published: 16 May 2013 Analyst(s): Caio Misticone, Evan Zeng The term "fabric" has been used in the networking industry for a few years, but

More information

How To Switch In Sonicos Enhanced 5.7.7 (Sonicwall) On A 2400Mmi 2400Mm2 (Solarwall Nametra) (Soulwall 2400Mm1) (Network) (

How To Switch In Sonicos Enhanced 5.7.7 (Sonicwall) On A 2400Mmi 2400Mm2 (Solarwall Nametra) (Soulwall 2400Mm1) (Network) ( You can read the recommendations in the user, the technical or the installation for SONICWALL SWITCHING NSA 2400MX IN SONICOS ENHANCED 5.7. You'll find the answers to all your questions on the SONICWALL

More information

How To Balance On A Cisco Catalyst Switch With The Etherchannel On A Fast Ipv2 (Powerline) On A Microsoft Ipv1 (Powergen) On An Ipv3 (Powergadget) On Ipv4

How To Balance On A Cisco Catalyst Switch With The Etherchannel On A Fast Ipv2 (Powerline) On A Microsoft Ipv1 (Powergen) On An Ipv3 (Powergadget) On Ipv4 Cisco - Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switch...Page 1 of 10 Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches Document ID: 12023 Contents

More information