Flattening the Data Center Network


Ethernet Fabric Revolutionizes Data Center Network Design
A Technology Validation Report Prepared for Brocade


Contents

About DeepStorage
The Bottom Line
Introduction
The Emergence of Ethernet Fabrics
    Ethernet Fabrics
    VM Mobility
    Converged Networking
Our Modest Proposal
The Requirements
Brocade VCS Fabric Technology
    Cost Analysis
    Bandwidth and Failure Analysis
    Management and Expansion
    Applicability of the Model Design for Smaller and Larger Configurations
Conventional Spanning Tree Solution
    Cost Analysis
    Bandwidth and Failure Analysis
    Management and Expansion
Cisco Nexus 7000 with Cisco Nexus 2000 Fabric Extenders
    Cost Analysis
    About the Cisco Nexus 2000 Fabric Extender
    Bandwidth and Failure Analysis
    Management and Expansion
Lab Testing
    Our Test Configuration
    vSphere Host Server Hardware
    Virtual DVD Store Server Configuration
Conclusions
Appendix A: Network Configuration Bills of Materials
    Brocade VCS Configuration
    Cisco 6509-E/Cisco 4500-X Configuration
    Notes on Cisco 6509-E/Cisco 4500-X Configuration
    Cisco Nexus 7000/Cisco Nexus 2000

About DeepStorage

DeepStorage, LLC is dedicated to revealing the deeper truth about storage, networking, and related data center technologies to help information technology professionals deliver superior services to their users and still get home at a reasonable hour. DeepStorage reports are based on our hands-on testing and more than 5 years of experience making technology work in the real world.

Our philosophy of real-world testing means we configure systems as we expect most customers will use them, thereby avoiding "Lab Queen" configurations designed to maximize benchmark performance.

This report was sponsored by Brocade, but we always retain final editorial control over our publications.

The Bottom Line

In recent years, the Ethernet market has evolved significantly. Part of that evolution has been the differentiation of data center networks from campus and metro Ethernet networks, with new technologies developed specifically to address the unique challenges that each type of network presents.

Vendors, including Brocade, have developed a new class of Ethernet switches based on fabric technology specifically designed for use within today's dynamic, virtualization-driven data centers. These switches are equipped with new features that make better use of inter-switch links, scale out linearly, configure themselves automatically, ensure Quality of Service (QoS), and dynamically adapt to topology changes.

This paper proposes that these technological advances, most specifically the development of Ethernet fabrics based on Transparent Interconnection of Lots of Links (TRILL) or comparable technologies, allow network architects designing networks of moderate scale, up to 200 or so servers, to forgo the large, expensive core switches required by classic Ethernet architectures, replacing them with a fabric of more cost-effective data center switches. The benefits of an Ethernet fabric in the data center network are compelling, including more effective bandwidth utilization, improved network failure recovery, and lower cost of ownership compared to networks using traditional Ethernet architectures.

The paper compares several data center network designs, each supporting the same set of physical servers.
Those designs are outlined below:

A full-mesh network using Brocade VCS Fabric technology
A network using Cisco Catalyst 6509-E and Cisco Catalyst 4500-X switches and the conventional Spanning Tree Protocol (STP)
A network using Cisco Nexus 7000 core switches with Cisco Nexus 2000 Fabric Extenders

After comparing both cost and performance benchmarks for the three network designs, we found that a full-mesh network comprising Brocade switches was 43 percent less expensive to acquire than the Cisco Nexus 7000/Cisco Nexus 2000 solution and less than half the cost of the traditional Cisco Catalyst 6509-E/Cisco Catalyst 4500-X configuration. In addition, the Brocade network design provided far greater east-west bandwidth between servers, as well as greater resiliency. *

                                 Brocade VCS       Cisco Catalyst 6509-E/   Cisco Nexus 7000/
                                 Fabric            Cisco Catalyst 4500-X    Cisco Nexus 2000
Purchase Price                   $309,260          $767,874                 $569,200
5-Year Support and Maintenance   $231,945          $575,906                 $426,900
5-Year Power                     $23,125           $134,553                 $120,000
5-Year Total Cost                $564,330          $1,478,333               $1,116,100
Rack Units                       13                37                       49
Devices to Manage                8 (1 with BNA)    9                        9
Uplink Oversubscription          5.5:1             4:1                      4:1

Introduction

First-generation local area networks (LANs) were designed primarily to carry data directly between client PCs and file servers or protocol gateways that, in turn, connected to mainframe or supermini computers (for example, DEC's VAX). This traffic pattern, with most traffic flowing directly between clients and servers, remained the de facto standard through the 1990s, when two-tier client/server applications were the norm. As data changed from file requests to SQL transactions and queries, the vast majority of traffic flowed north to south between clients and servers, with little traffic, other than backups, flowing among servers.

The flow of traffic in today's data center is markedly different. Applications have been redesigned to allow access from Web browsers and lightweight clients, extending their reach well beyond the traditional fat-client applications of yesteryear. As a result, more network traffic flows between the Web/application servers, where applications run, and database servers, which hold the data, than between the application servers and users. Virtualization also drives traffic in the east-to-west, server-to-server direction as users run low-bandwidth connections to virtual PCs on VDI servers, and as server administrators migrate virtual servers from host to host.
The conventional data center Ethernet design, comprising two core switches at the top, an optional layer of aggregation switches below, and edge switches connected to servers, was mandated by the limitations of the Spanning Tree Protocol (STP). STP prevents loops in Ethernet networks by disabling all but one path from each switch in the network to the root bridge. Figure 1 below illustrates those disabled links in red.

Figure 1: A Conventional Network

As a result of STP disabling all but one path from each downstream bridge to the STP root bridge, at least half the uplink bandwidth in any STP network is consumed by idle backup links until something goes wrong. This behavior also makes it impractical to use more than two core switches, since one switch will still be serving as the STP root bridge. Because additional uplinks to incremental core switches are disabled in this model, adding a third or fourth core switch simply increases cost without actually increasing scalability or bandwidth.

The limitations of the Spanning Tree Protocol, including its hierarchical nature, force network architects to build inefficient networks, adding redundant links and components that remain idle and are used only in case of another device's or link's failure. The multiple layers of switches also force each packet traveling from server A to server B to traverse multiple switch hops, adding latency to the overall network.

The Emergence of Ethernet Fabrics

Not satisfied with the functionality of standard Ethernet, networking vendors have developed a new technology designed to optimize Ethernet for today's new data center environment: the Ethernet fabric.

Ethernet Fabrics

Ethernet fabrics in the data center address the limitations of the decades-old STP. While STP solves some of the loop problems present in Ethernet networks, it creates issues of its own. The most obvious of these shortcomings is the waste of potential bandwidth caused by simply blocking paths to prevent loops. STP also has the annoying tendency to choose non-optimal paths, sending data over the river and through the woods because the most direct path is blocked. Even the latest Rapid Spanning Tree Protocol can take several seconds to converge when an active link or device fails.
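The cost of STP's loop prevention can be made concrete with a toy model. The sketch below (illustrative Python; the topology and switch names are invented for this example) builds a dual-homed core/aggregation/edge network like the one described above and counts how many links STP leaves forwarding: a spanning tree over b bridges keeps only b-1 links active, and every other link sits idle.

```python
from collections import deque

def stp_active_links(links, root):
    """Approximate STP's result: keep a shortest-path tree rooted at the
    root bridge; every link outside the tree is blocked."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    active, seen, queue = set(), {root}, deque([root])
    while queue:
        switch = queue.popleft()
        for neighbor in graph[switch]:
            if neighbor not in seen:
                seen.add(neighbor)
                active.add(frozenset((switch, neighbor)))
                queue.append(neighbor)
    return active

# Two cores, two aggregation switches, two edge switches, dual-homed links.
links = [
    ("core1", "core2"),
    ("core1", "agg1"), ("core1", "agg2"),
    ("core2", "agg1"), ("core2", "agg2"),
    ("agg1", "edge1"), ("agg1", "edge2"),
    ("agg2", "edge1"), ("agg2", "edge2"),
]
active = stp_active_links(links, "core1")
blocked = len(links) - len(active)   # 4 of the 9 links carry no traffic
```

Even in this tiny network, nearly half the installed links are blocked; the fraction only gets worse as redundancy is added.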
Ethernet fabrics based on TRILL eliminate STP, allowing all links in the network to be active, and take full advantage of the enormous increases in compute power available in today's switches. These switches recognize not just the active path to any other switch but all available paths. Like Layer 3 routers, fabric switches use a routing protocol to build an internal map of all the other switches in the network. Also like a router, fabric switches can balance traffic loads across all equal-length paths through the network from point A to point B. Ethernet fabrics make more efficient use of both the intelligence in each switch and the links connecting the switches, providing:

More east-west bandwidth
Faster network convergence on topology changes like link or switch failures
Fully automatic convergence on link or switch additions
Reduced impact of switch failures through distributed rather than hierarchical topologies
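The load balancing across equal-length paths can be sketched as a hash over a flow's addressing fields: every packet of a given flow takes the same path (preserving packet order), while the population of flows spreads across all paths. A minimal model (plain Python; the path names and addresses are invented):

```python
import hashlib

def pick_path(flow, equal_cost_paths):
    """Map a flow 5-tuple onto one of several equal-cost paths.
    Hashing keeps every packet of a flow on the same path (no
    reordering) while different flows spread across all paths."""
    key = "|".join(map(str, flow)).encode()
    bucket = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return equal_cost_paths[bucket % len(equal_cost_paths)]

# Four equal-cost paths through a five-switch mesh, 200 client flows.
paths = ["via-sw2", "via-sw3", "via-sw4", "via-sw5"]
flows = [("10.0.0.%d" % i, "10.0.1.9", 6, 40000 + i, 80) for i in range(200)]
used = {pick_path(f, paths) for f in flows}   # flows fan out over all paths
```

Real fabric switches use hardware hash functions rather than SHA-256, but the effect is the same: no path sits idle, and no flow is reordered.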

VM Mobility

As organizations make the transition to server virtualization, they soon discover that they not only get the expected hardware savings that result from server consolidation, they also realize significant flexibility gains based on the ability to dynamically migrate virtual servers from host to host. But dynamic placement of workloads does present a problem for traditional networks that use port profiles, including traffic QoS settings and Access Control Lists (ACLs), to deliver the network performance and security required by each workload. When a hypervisor moves a virtual machine (VM) from one host to another, the VM will now be connected to ports running the wrong profile for its workload.

Switches that incorporate Brocade VCS Fabric technology detect virtual machine movements and automatically migrate port profiles as VMs move, a feature Brocade calls Automatic Migration of Port Profiles (AMPP). While AMPP is hypervisor agnostic, Brocade has integrated AMPP with VMware's vSphere management server to automatically create virtual server profiles that define the QoS, security settings, and VLANs for virtual machines.

Converged Networking

A new series of Ethernet extensions collectively called Data Center Bridging (DCB) adds the option of lossless transport to Ethernet networks in the data center. The DCB standards define eight priority levels for Ethernet traffic, with a per-priority pause mechanism stopping packets from entering the network if the network is overly congested, along with the protocols switches use to negotiate their DCB features. This lossless transport mode is used in large networks to support Fibre Channel over Ethernet (FCoE), which encapsulates the Fibre Channel Protocol (FCP) into Ethernet frames, converging the SAN and LAN into a single network.
In today's network, DCB's lossless transport emulates the traffic light at a freeway entrance, smoothing traffic on the network and improving efficiency not just for FCoE but for all network traffic. Any slight delay on ingress to the network introduces far less latency than waiting for the Transmission Control Protocol (TCP) timeout and subsequent retransmission required when a packet is dropped in a traditional Ethernet.
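The latency argument reduces to simple arithmetic. Assuming, purely for illustration, a TCP retransmission timeout on the order of 200 ms (a common operating-system minimum) and a per-priority pause lasting tens of microseconds, the penalty for dropping a frame dwarfs the penalty for briefly pausing it:

```python
# Illustrative figures, not measurements: the RTO and pause duration are
# assumptions chosen to show the orders of magnitude involved.
TCP_MIN_RTO_US = 200 * 1000   # ~200 ms minimum retransmission timeout
PAUSE_US = 50                 # a per-priority pause of ~50 microseconds

drop_penalty = TCP_MIN_RTO_US        # one lost frame stalls its flow this long
pause_penalty = PAUSE_US             # one paused frame waits only this long
ratio = drop_penalty / pause_penalty # dropping is thousands of times costlier
```

Under these assumed numbers, a drop costs four thousand times as much waiting as a pause, which is why briefly holding frames at the edge improves overall efficiency.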

Our Modest Proposal

Most data centers today use the top-of-rack switching topology, which places fixed-configuration switches at the top of each server rack. These switches serve as the edge and/or aggregation layer(s) in the data center network. Top-of-rack architectures make even more sense as data centers make the transition from 1Gbps to 10Gbps architectures, since the economics of 10Gbps Ethernet dictate keeping cable lengths relatively short. 10GBASE-T promises to address this cable length limitation, but it hasn't yet reached widespread adoption.

For short distances (up to 5 meters), 10Gbps network architects have the option of using preconfigured direct-attach cables with SFP+ connectors at a cost of $50-$300 each. For longer distances, fiber optic transceivers can be plugged into the same SFP+ socket using fiber optic cable. But 10Gbps transceivers still can cost on the order of $1,000 each, or more than $2,000 per link. As a result, a 10Gbps top-of-rack switch can pay for itself simply based on the reduced cost of optics.

As top-of-rack switches like the Brocade VDX line were coming to market, we realized that a fabric comprising top-of-rack switches could function as the entire data center network in moderate-sized environments with fewer than roughly 200 servers, with the fabric replacing the function of the traditional core switches. This design promised not only to significantly reduce expenses when compared with a more traditional design, it also had the potential to deliver greater east-west bandwidth at lower latency than a hierarchical network, especially in case of a device failure.
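The optics argument above is simple arithmetic. The sketch below uses assumed prices drawn from the ranges in the text (roughly $150 per twinax link, roughly $1,000 per optical transceiver with two transceivers per link) and an assumed population of 200 server links:

```python
# Assumed prices: mid-range twinax DAC and list-price SFP+ optics.
DAC_PER_LINK = 150        # one direct-attach cable per link
OPTIC_PER_END = 1000      # one transceiver at each end of a fiber link

def cabling_cost(links, optical_fraction):
    """Cost of 10Gbps links given the share that must run over fiber."""
    optical = round(links * optical_fraction)
    copper = links - optical
    return copper * DAC_PER_LINK + optical * 2 * OPTIC_PER_END

top_of_rack = cabling_cost(200, 0.0)   # short in-rack runs: all twinax
central_core = cabling_cost(200, 1.0)  # long runs to a distant core: all fiber
savings = central_core - top_of_rack   # enough to buy the edge switches
```

With these assumptions, cabling 200 links to a distant core costs $400,000 in optics versus $30,000 in twinax, which is the sense in which a top-of-rack switch "pays for itself."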

The Requirements

To test our theory, we set out to design a data center network for a fictional enterprise, Acme Inc. Its data center contains 100 x86 servers, about half of which are currently vSphere hosts. Acme Inc.'s CIO mandated the following simple set of requirements for the new data center:

100 physical servers
200 10Gbps converged connections (all dual-homed)
1Gbps Intelligent Platform Management Interface (IPMI) connections
An additional 50 10/100Mbps ports for incremental device management connections
A simple upgrade path to 150 servers
Each server dual-homed to different switches, so switch failures and firmware upgrades are non-disruptive
A minimum of 60Gbps of inter-switch bandwidth, with at least 40Gbps remaining after any single device failure
Initial configurations must provide 40Gbps of bandwidth from the data center network to the campus network, upgradeable to 80Gbps as needed

Since they will no longer sit at the core of the overall network, Acme Inc. will repurpose the two Cisco Catalyst 6509-E switches that currently serve as the core of the network to be the core of the campus network. Higher-level services, from inter-VLAN routing to load balancing, firewalls, and other security services, will be provided by the campus network, as Acme Inc. expects the majority of east-west traffic to travel on the same VLAN.

Acme Inc.'s servers are connected to a small number of VLANs/subnets:

User connections to the majority of servers
DMZ servers
Secure servers (holding sensitive data)
Hypervisor and other management traffic
vMotion/Live Migration traffic
iSCSI storage traffic
FCoE storage traffic
Backup traffic

The above specifications will be used to test our theory that a set of top-of-rack fabric switches can provide a reliable, high-performance data center network without the expense of traditional modular core switching gear.
This will allow us to compare and contrast this new network design with the following alternatives:

A 10Gbps network using a traditional STP architecture
A network using Cisco Nexus 7000 Series core switches and Cisco Nexus 2000 Series Fabric Extenders

For each network design, we will examine:

Available east-west bandwidth
How the network can be expected to behave in case of a device failure
How complex the initial system configuration will be
Ongoing cost of vendor maintenance and support
Data center rack space consumption
Acquisition cost *
Power consumption
The expansion path for each solution to 150 and 300 servers

A network design using a pair of Cisco Nexus 5500 switches with Cisco Nexus 2000 fabric extenders was also considered, but each Nexus 5500 switch would have to carry half the total network traffic, and since the Nexus 5500 doesn't support high-availability features like online firmware upgrades or redundant supervisor modules, Acme's management didn't feel it offered a high enough level of resiliency.

Juniper's QFabric was originally considered for this comparison until it was determined that QFabric would cost significantly more than the other options. Creating a QFabric with no single point of failure would require a pair of QFabric Interconnects, the modular switching engine, and a pair of QFabric Directors acting as control servers. With a price tag of more than $300,000 for these central management devices, this network design could be competitive only if amortized over more network ports than were required for this test case.

Brocade VCS Fabric Technology

Our proposed network design connects five 60-port Brocade VDX switches in a full-mesh fabric using two 10Gbps links between each pair of switches. By enabling Brocade VCS Fabric technology to create a Layer 2 multipath fabric from the switches, traffic is load balanced across all equal-length paths through the network. The core of the network comprises three Brocade VDX 6720 60-port 10Gbps Ethernet Switches and two Brocade VDX 6730 Switches, which also have Fibre Channel ports.
The network will have two VLANs dedicated to FCoE traffic, with one Brocade VDX 6730 connecting the first FCoE VLAN to the Fibre Channel switch for Acme Inc.'s SAN A and the other connecting the second VLAN to SAN B.

* Cost analyses are based on manufacturer's suggested retail prices (MSRP) as published by the respective vendors. While we are fully aware that enterprise customers may receive significant discounts from their suppliers, these discounts are not predictable and are usually comparable from vendor to vendor. Volume discounts and customer retention discounts may be available from an incumbent vendor, but prospective vendors may offer similar discounts to get new business. Therefore, MSRP is used in all comparisons to level the playing field.

Figure 2: The Fabric as Core

Additional Brocade VDX 6710 Switches will support lower-speed devices as well as Intelligent Platform Management Interface (IPMI) management ports for the servers. Since these switches participate in the Layer 2 multipath fabric, they can be connected in a full- or partial-mesh configuration and can also serve lower-value, or older, servers that don't require more than 1Gbps per connection.

The Ethernet fabric is attached to Acme Inc.'s campus core with two 10Gbps connections from each of two Brocade VDX 6720 Switches. These are combined to form a single Link Aggregation Control Protocol (LACP) trunk to the core using the Brocade vLAG feature, which allows LACP trunking across connections to separate switches in the fabric.

Cost Analysis

MSRP for the Ethernet fabric solution is $309,260, or $3,093 per server. Pricing includes twinax cables for Inter-Switch Links (ISLs) and optics for the 40Gbps connection to the campus network. Since the interconnections between the Brocade VDX top-of-rack switches run between adjacent server racks, we can assume that the vast majority of these connections are less than 5 meters long and can therefore be made with low-cost twinax direct-attach cables. By contrast, most core switches are across the data center from the servers, necessitating much more expensive fiber optic ISLs.

Each Brocade VDX 6720 and Brocade VDX 6730 Switch will consume two rack units in the server racks where they're installed. Each Brocade VDX 6710 1Gbps Switch will take 1U, for a total rack space requirement of 13U for the solution.
Brocade VCS Fabric
Purchase Price                   $309,260
5-Year Support and Maintenance   $231,945
5-Year Power                     $23,125
5-Year Total Cost                $564,330
Rack Units                       13
Devices to Manage                8 (1 with BNA)
Uplink Oversubscription          5.5:1

Assuming an industry standard support and maintenance cost of 15 percent of MSRP, annual maintenance for the solution will be $46,389. The Brocade VCS fabric will use a maximum of 2,200 watts of power. Assuming a power usage effectiveness (PUE) of 2 and a power cost of $0.12 per kWh, the system will cost $4,625 per year for power and cooling. Based on the above assumptions, the five-year total cost for this network design is $564,330.

Bandwidth and Failure Analysis

Traffic from one server to another across a common switch is forwarded at full wire speed across the Brocade VDX switch's non-blocking architecture. Traffic between servers that don't share an active connection to a common switch has access to 80Gbps of aggregate bandwidth: 20Gbps of bandwidth from each switch to every other switch. As the switches are connected in a full mesh, the longest data path through the fabric is just two switch hops, resulting in a total latency measured in microseconds.

With 40 server connections and eight fabric connections per switch, the solution is 5:1 oversubscribed. When all available ports are used for servers, the oversubscription rate rises to 6.5:1 (52 servers per switch). Increasing fabric bandwidth is simply a matter of adding more ISLs, since Brocade VCS Fabric technology will automatically recognize additional links and begin load balancing traffic across them seconds after they're installed. Adding a third link to each connection in the Ethernet fabric would boost bandwidth to 120Gbps at a cost of around $1,500. Five 60-port switches with 40Gbps interconnects would make 220 10Gbps Ethernet ports available for use, increase fabric bandwidth to 160Gbps, and increase cost by only about 1 percent over the starting configuration.

In case of a switch or link failure, the network will reroute traffic over the surviving portion of the network.
Since every switch knows the topology of the entire network, the network reconverges many times faster than an STP network, which must rediscover paths whenever a path fails. This reconvergence occurs in a small fraction of a second, allowing standard TCP or SCSI retries to recover with minimal performance impact. A switch failure in a full-mesh network reduces the available fabric bandwidth by 1/(n-1), where n equals the number of switches in the fabric. For our five-switch fabric, bandwidth will be reduced by 25 percent, to 60Gbps, a significantly smaller impact than the 50 percent bandwidth loss if a traditional core switch failed. The impact on a larger, nine-switch fabric would be only 12.5 percent of the available bandwidth of 160Gbps.

Management and Expansion

Brocade VCS Fabric technology is self-configuring, automatically rebalancing traffic across all available switches and inter-switch links. Network engineers don't have to spend time designing and configuring multiple Link Aggregation Control Protocol (LACP) connections between the fabric switches, though the campus core to fabric

LACP connection via vLAG will require attention. Once the switches are interconnected and placed in Brocade VCS Fabric mode through a single Command-Line Interface (CLI) command, each switch will be automatically assigned a Routing Bridge (RBridge) number, and the fabric will establish itself automatically.

This solution has a total of eight devices to manage, but the entire fabric can be managed as a single logical switch through Brocade Network Advisor management software. When the software makes a Simple Network Management Protocol (SNMP) connection to a Brocade VCS switch, it recognizes the fabric and automatically adds the rest of the switches to the management view.

Figure 3: A VCS Fabric in Brocade Network Advisor

Just as modular switches identify ports by slot and port, Brocade VCS switch ports are identified, and can be managed, by RBridge number/slot (which will be 0 for fixed-function switches like those we're deploying) and port number, allowing an administrator to concurrently manage multiple ports on multiple switches in the fabric.

To grow the network so that it can support the 150 servers that Acme Inc. has specified, we must add an additional Brocade VDX 6720 Switch to the network at a list cost of $4,500, including the 10 twinax cables needed to connect the incremental switch to the other switches in the fabric. A six-switch mesh will have 336 usable 10Gbps Ethernet ports.
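The full-mesh arithmetic used throughout this section can be captured in a few lines (Python, with this design's 60-port switches and two 10Gbps links per switch pair as defaults):

```python
def fabric_bandwidth_gbps(n, links_per_pair=2, link_gbps=10):
    """Aggregate fabric bandwidth each switch sees in a full mesh."""
    return (n - 1) * links_per_pair * link_gbps

def usable_ports(n, ports_per_switch=60, links_per_pair=2):
    """Server-facing ports left after each switch dedicates ports to ISLs."""
    isl_ports = (n - 1) * links_per_pair
    return n * (ports_per_switch - isl_ports)

def bandwidth_lost_on_failure(n):
    """A single switch failure removes 1/(n-1) of each survivor's ISLs."""
    return 1 / (n - 1)

five_switch = fabric_bandwidth_gbps(5)                         # 80Gbps
after_failure = five_switch * (1 - bandwidth_lost_on_failure(5))  # 60Gbps
```

These reproduce the figures in the text: an 80Gbps fabric that drops to 60Gbps when one of five switches fails, 52 server ports per switch in a five-switch mesh, and a 10-switch mesh that keeps 420 of its 600 ports for servers.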

Applicability of the Model Design for Smaller and Larger Configurations

The Ethernet fabric model scales down very well, allowing users with smaller networks to choose between a pair of 60-port switches or a fabric comprising three or four 24-port switches. In this reduced configuration, the failure of a single switch has a smaller impact on network bandwidth.

As the number of switches grows, the number of interconnections in a full-mesh network with two links between each pair of switches grows according to the formula n*(n-1), where n equals the number of switches in the mesh. Using 60-port switches and 10Gbps interconnections in a full mesh, the interconnect overhead reaches 30 percent for a 10-switch network, ultimately providing 420 useable ports out of 600.

For networks that need to support 200 or more servers and 400+ connections, the full-mesh architecture is less attractive. For these environments, partial-mesh solutions are more appropriate, most specifically a spine-and-leaf architecture with several switches making up a central spine and servers connected to leaf switches around the periphery. A group of four 60-port switches in a 160Gbps full mesh could support an outer ring of 24 switches, with each switch connected to each of the core switches at 10Gbps, to provide 1,344 server access ports. For 300 servers, we would deploy three Brocade VDX 6720 Switches as the spine, with 14 edge switches to provide 672 server- and storage-facing ports.

Conventional Spanning Tree Solution

Since the Acme Inc. network team didn't have experience with the new data center networking technologies, its first thought was to build a 10Gbps server network using conventional switches. This Cisco-based solution used the 32-port model from the new Cisco Catalyst 4500-X switch line with the optional 8-port expansion module. Each Cisco Catalyst 4500-X would use four ports as an LACP trunk (which Cisco calls EtherChannel) to each Cisco Catalyst 6509-E core switch, leaving 32 ports per switch for server connections.
This configuration couldn't meet the minimum bandwidth requirements using industry-standard switches and STP, since each edge switch would then have only 40Gbps of uplink bandwidth. Cisco does offer two configuration options that would allow all eight uplinks to be active:

The Cisco Virtual Switching System (VSS), which allows a pair of Cisco Catalyst 6500 Series Switches to function as one spanning tree bridge
Virtual port channel, which allows LACP links to be homed across multiple switches

Either option will require a few hours of a skilled network engineer's time to configure.

Figure 4: Cisco Catalyst 6509-E and Cisco Catalyst 4500-X Solution

Providing the specified 10Gbps ports will require seven Cisco Catalyst 4500-X Series Switches, which will, in turn, require 28 ports on each Cisco Catalyst 6509 Switch, delivered through two 16-port 10Gbps Ethernet cards in each switch. The required 1Gbps ports are provided by a pair of 48-port Gigabit Ethernet cards.

Since Cisco 10Gbps line cards for the Cisco Catalyst 6500 Series Switches use the older XENPAK-style optics, direct-attach twinax cables cannot be used to connect the Cisco Catalyst 4500-X top-of-rack switches to the core. In this deployment scenario, fiber-optic connections must be used, adding significantly to the overall cost.

Cost Analysis

The total acquisition cost for this solution is $767,874, or $7,678 per server, including a new Cisco Catalyst 6509-E chassis. Assuming an industry standard support and maintenance cost of 15 percent of MSRP, annual maintenance for the solution will total $115,181.

Each Cisco Catalyst 6509-E chassis occupies 15U of rack space, while the Cisco Catalyst 4500-X Switches occupy 1U each. Total rack space consumed is therefore 37U. Each Cisco Catalyst 6509-E chassis will draw somewhere around 5000W, while the Cisco Catalyst 4500-X switches draw up to 400W each, for a total power consumption of 12,800W. Assuming a PUE of 2 and a power cost of $0.12 per kWh, the system will cost $26,910 per year for power and cooling.

Cisco Catalyst 6509-E/Cisco Catalyst 4500-X
Purchase Price                   $767,874
5-Year Support and Maintenance   $575,906
5-Year Power                     $134,553
5-Year Total Cost                $1,478,333
Rack Units                       37
Devices to Manage                9
Uplink Oversubscription          4:1
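The power line items in these cost analyses all follow one formula: annual cost = watts x PUE x hours per year x price per kWh. A quick sketch (Python; the PUE of 2 and $0.12 per kWh match the assumptions used in the text):

```python
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(watts, pue=2.0, usd_per_kwh=0.12):
    """Yearly power-and-cooling cost: facility draw is IT draw times PUE."""
    kwh = watts * pue * HOURS_PER_YEAR / 1000
    return kwh * usd_per_kwh

# Two 5000W chassis plus seven 400W top-of-rack switches, as in this design.
catalyst_solution = annual_power_cost(2 * 5000 + 7 * 400)
```

Plugging in the 12,800W draw above yields roughly $26,911 per year, matching this section's power figure to within rounding.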

Based on the above assumptions, the five-year total cost for this network design is $1,478,333. If Acme Inc. chooses to upgrade the supervisors and install line cards in its existing Cisco Catalyst 6500 chassis to continue to have a single network core, it could save approximately $40,000. This would make sense only if its current switches had sufficient slots available for line cards.

Bandwidth and Failure Analysis

Traffic among servers connected to the same Cisco Catalyst 4500-X Series Switch can run at wire rate, while all other traffic takes three switch hops as it passes through the source, core, and destination switches over the 40Gbps trunk to each core switch. These links are 4:1 oversubscribed under normal conditions (32 servers : 8 uplinks), creating a potential bottleneck. Even worse, the Cisco WS-X6816 line card is itself 4:1 oversubscribed, so the end-to-end path could be as much as 8:1 oversubscribed. An alternative design would use an 8-port Cisco WS-X6708 line card, which is not oversubscribed to the switch fabric, but that would necessitate using a Cisco Catalyst 6513-E Switch Chassis to have any slots free for future growth, raising the cost of the solution to almost $2 million.

In case of a core switch failure, the oversubscription rate for the whole network will double to 8:1. In case of an edge switch failure, each server's Network Interface Controller (NIC) teaming or LACP connection will fail over to the server's connection to another switch.

Management and Expansion

This solution has nine devices to manage and approximately 60 inter-switch connections that must be manually configured into LACP trunks. Initial configuration of the network will require at least a full day's work by a skilled engineer. Expanding the network to 150 servers will require two additional Cisco Catalyst 4500-X Series edge switches and two additional 10Gbps line cards for the core Cisco Catalyst 6509 Switches. This expansion will cost $0,64.
Expanding to 300 servers would add five Cisco Catalyst 4500-X Series Switches and another pair of 10Gbps line cards.

Cisco Nexus 7000 with Cisco Nexus 2000 Fabric Extenders

The Nexus series represents Cisco's entry into the market for next-generation data center switches. The first Nexus design we will evaluate comprises a pair of Cisco Nexus 7000 modular switches and Cisco Nexus 2000 Fabric Extenders, which work together to provide connectivity to servers and other devices on the data center network. Each Cisco Nexus 2232 has 32 useable ports and eight 10Gbps Ethernet ports for upstream connections to one or two Cisco Nexus 7000 or Cisco Nexus 5000 switches. We used seven Cisco Nexus 2232s to provide 224 10Gbps Ethernet ports for server and storage connections.

48-port 10/100/1000 line cards in the Cisco Nexus 7000 were used to provide the required twisted-pair Ethernet ports. While Acme Inc.'s design requirement called for only 140 lower-speed Ethernet ports, which would require only three cards, in our experience most network designers would add matching cards to both of the core switches rather than use two cards in one switch and one in the other.
Figure 5 Cisco Nexus 7000 and Cisco Nexus 2232 Network

Even though it's not required with just two Cisco Nexus 7000 switches, we are configuring the core switches to use Cisco FabricPath (a Cisco pre-standard TRILL implementation that automatically configures the inter-switch links and load balances across them).

Cost Analysis

MSRP for this solution is $569,200, or $5,692 per server. The Cisco Nexus 7000 switches are each 21 rack units tall, while the Cisco Nexus 2232s are 1U each. The total solution will require 49 rack units, including a full 42U rack for each Cisco Nexus 7000 alone. Assuming the industry-standard 15 percent annual cost for support and maintenance, a support contract for this solution will cost $85,380 a year, or $426,900 over the five-year projected life of the network.

Cisco Nexus 7000 / Cisco Nexus 2000
Purchase Price: $569,200
5-Year Support and Maintenance: $426,900
5-Year Power: $122,000
5-Year Total Cost: $1,118,100
Rack Units: 49
Devices to Manage: 2
Uplink Oversubscription: 4:1
5-Year TCO: $1,118,100
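The cost roll-up used throughout this report (purchase price, plus support at a percentage of purchase per year, plus five-year power) is simple enough to express as a sketch. The function is ours and the inputs below are illustrative, not vendor quotes:

```python
def five_year_tco(purchase: float, power_5yr: float,
                  support_rate: float = 0.15, years: int = 5) -> dict:
    """Roll up the cost components the way this report's tables do."""
    support_5yr = purchase * support_rate * years
    return {
        "purchase": purchase,
        "support_5yr": support_5yr,
        "power_5yr": power_5yr,
        "total": purchase + support_5yr + power_5yr,
    }

# Illustrative inputs: a $569,200 purchase with $122,000 of 5-year power cost
costs = five_year_tco(569_200, 122_000)
print(costs["support_5yr"], costs["total"])  # 426900.0 1118100.0
```

Plugging each design's purchase price and power draw into the same function keeps the comparisons in this report on an equal footing.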

About the Cisco Nexus 2000 Fabric Extender

While the Cisco Nexus 2232 Fabric Extender is a 1U box with 32 SFP+ 10Gbps Ethernet ports, it is not an Ethernet switch. Fabric extenders function more like remote line cards than true switches; all traffic from the 32 downlink ports on the Cisco Nexus 2232 is sent upstream to the Cisco Nexus 7000 or Cisco Nexus 5000 parent switch for processing, even if the source and destination are ports on the same fabric extender. As a result, while it may appear that a solution based on the Cisco Nexus 2232, with 32 server-facing ports and eight uplinks, and a solution based on the Cisco Catalyst 4500-X, which also has 32 user ports and eight ports for inter-switch links, are both 4:1 oversubscribed, the switch actually offers significantly more bandwidth. If two servers connected to the same switch want to exchange data, the switch forwards packets between ports, consuming none of the bandwidth on its inter-switch link ports. If those same two servers were connected to a Cisco Nexus 2000, traffic would be routed first to the parent switch and then back to the Cisco Nexus 2000. In fact, traffic between ports on the same Cisco Nexus 2000 uses twice the ISL bandwidth of traffic going to a port on another Cisco Nexus 2000, since it travels to the parent switch and back again. Since we can't quantify the amount of traffic that flows between ports on the same switch, and because the advantages of workload mobility in today's data center make keeping related workloads on the same switch counterproductive, we use a worst-case scenario when describing a switch such as the Cisco Catalyst 4500-X or a Brocade VDX 6720 as 4:1 oversubscribed, which makes this comparison somewhat misleading. Each Cisco Nexus 7000 will consume around 5,000W of power and each Cisco Nexus 2232 about 225W, for a total power draw of 11,575W. Assuming a PUE of 2 and power costs of $0.12 per kWh, the system will cost roughly $122,000 for power and cooling over the five-year life of the network.
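The ISL-doubling effect described above can be put into a small traffic model. This sketch and its traffic mix are ours, chosen only to illustrate the difference between a switch and a fabric extender:

```python
def isl_load_gbps(total_gbps: float, local_fraction: float, fex: bool) -> float:
    """Uplink load implied by a given traffic mix.

    A true switch forwards port-to-port traffic locally, so only remote
    traffic touches the uplinks. A fabric extender sends local traffic to
    the parent switch and back, so it crosses the uplinks twice.
    """
    local = total_gbps * local_fraction
    remote = total_gbps - local
    return remote + (2 * local if fex else 0.0)

# 100 Gbps offered, 30 percent of it between ports on the same device:
print(isl_load_gbps(100, 0.3, fex=False))  # 70.0
print(isl_load_gbps(100, 0.3, fex=True))   # 130.0
```

The more traffic stays local, the wider the gap: the switch's uplink load falls while the fabric extender's rises.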
Based on the above assumptions, the five-year total cost for this network design is $1,118,100. An alternative configuration would use six Cisco Nexus 2224 fabric extenders to provide the low-speed connections, which would reduce the cost of the two Cisco Nexus-based configurations. While it's likely that the Cisco Nexus 2232s at the top of each server rack will be across the data center from the Cisco Nexus 7000s and require fiber-optic connections, we've priced this configuration using twinax cables for all inter-switch links to avoid inflating the cost for those data centers where the longest links are less than 5 meters.

Bandwidth and Failure Analysis

Each Cisco Nexus 2232 has 32 ports for server connections and eight uplinks to the Cisco Nexus 7000s, making each fabric extender 4:1 oversubscribed. Since all traffic to and from servers will pass over the Cisco Nexus 2232 uplinks, these connections are likely to become congested. In case of a core switch failure, half of the network's east-west bandwidth is lost, bringing the oversubscription rate to 8:1. It's common practice to connect the first Ethernet port on servers in racks A and B to the switch in rack A, and the second to the switch in rack B. As a result, a failure of a fabric extender would cause all server traffic in both racks to be shifted to the surviving fabric extender, again raising the oversubscription rate to 8:1.

Management and Expansion

Since fabric extenders are seen and managed as if they were remote line cards for their parent switches, this configuration really has only two points of management: the Cisco Nexus 7000 switches. Since we've chosen to use Cisco FabricPath, the switches will detect the inter-switch links and configure them automatically. Growing to 150 servers will require adding two more Cisco Nexus 2232 Fabric Extenders. Since Cisco FabricPath creates fabrics of Cisco Nexus 7000 switches (each with Cisco Nexus 2000s attached), this architecture can be expanded to thousands of 10Gbps Ethernet ports.
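The failure arithmetic above follows directly from the port counts. A minimal sketch, with a function name of our own choosing:

```python
def oversubscription(edge_ports: int, uplinks: int,
                     surviving_fraction: float = 1.0) -> float:
    """Edge-to-uplink oversubscription ratio, optionally after a failure
    that leaves only a fraction of the uplink capacity in service."""
    return edge_ports / (uplinks * surviving_fraction)

print(oversubscription(32, 8))       # 4.0, i.e., 4:1 in normal operation
print(oversubscription(32, 8, 0.5))  # 8.0, i.e., 8:1 after losing half the links
```

Any single failure that removes half the upstream capacity, whether a core switch or a paired fabric extender, produces the same 8:1 worst case.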

Lab Testing

Our testing concentrated on the behavior of an application running across a Brocade VCS fabric technology-based switch cluster during a switch failure. We ran the Dell DVD Store application to provide a realistic application load, simulating users connecting to an Internet store selling DVDs. Like many Web applications, the DVD Store uses an application server to handle the business processes and user interface, combined with a database server. For our testing, we used virtual application and database servers on separate physical servers running VMware vSphere 4.1. The physical servers were configured to use the network interface on each server that was connected to a common switch as the primary interface, with failover to secondary interfaces on different switches. We started up the DVD Store application and then cut the power to the common switch that was carrying the data to see how quickly the VCS fabric and VMware vSphere NIC teaming would recover, and how this failure would impact the application.

Figure 6 DVD Store Performance

Next, we created a modified version of the DVD Store application to record the number of operations per second, the maximum round-trip transaction latency, and the average round-trip transaction latency once per second (rather than the default of every 10 seconds). Note that what DVD Store is reporting here is transaction latency, including SQL Server access, not just network latency.
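Our DVD Store modification simply aggregated latencies into one-second buckets. A rough Python equivalent of that bookkeeping (the class is ours, not DVD Store code) looks like this:

```python
class PerSecondStats:
    """Accumulate transaction latencies and report once per second,
    mirroring the modified DVD Store reporting described above."""

    def __init__(self) -> None:
        self.count = 0
        self.total_ms = 0.0
        self.max_ms = 0.0

    def record(self, latency_ms: float) -> None:
        self.count += 1
        self.total_ms += latency_ms
        self.max_ms = max(self.max_ms, latency_ms)

    def flush(self) -> tuple:
        """Return (ops, max_ms, avg_ms) for the interval, then reset."""
        avg = self.total_ms / self.count if self.count else 0.0
        result = (self.count, self.max_ms, avg)
        self.count, self.total_ms, self.max_ms = 0, 0.0, 0.0
        return result

stats = PerSecondStats()
for ms in (1.0, 2.0, 30.0):  # one slow transaction during a failover second
    stats.record(ms)
print(stats.flush())  # (3, 30.0, 11.0)
```

Tracking maximum and average separately is what lets a brief failover spike show up clearly even when the per-second average barely moves.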

As you can see in Figure 6 above, when the switch that was in the data path was powered down, maximum latency spiked, which indicated that some frames were lost as the switch failed. But the system failed over to the alternate path so quickly that the average latency for that second was barely affected, indicating a very small number of lost frames. More important, the rate at which the application processed DVD sales was essentially unchanged, at 5,87 operations per second compared to an average of 5,83 per second for the 10-minute test run. Given the low impact of the switch failure on application performance, we set up the SolarWinds Real-Time Bandwidth Monitor to make sure data were actually taking the paths we had configured them to take. When we graphed the traffic levels for the four ports connected to our test servers and ran our test again, we saw the screen shown below as Figure 7.

Figure 7 SNMP Traces of Network Traffic During Test

In this screenshot, the top two graphs illustrate the traffic running through the common switch, and the bottom two graphs show the ports on the switch for the backup connections on the servers. As the primary switch goes offline, you can see the traffic rerouted to the alternate path. Since these graphs are generated from SNMP data collected in each switch, there are no data for the period when the failed switch is powered down. Also note that when it comes back online, the traffic is rerouted back to its default path.
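Bandwidth graphs like those in Figure 7 are built from periodic polls of the interface octet counters (ifHCInOctets and friends). The rate math, including the counter wrapping between polls, fits in a few lines; this sketch is ours, not SolarWinds code:

```python
def bits_per_second(prev_octets: int, curr_octets: int,
                    interval_s: float, counter_bits: int = 64) -> float:
    """Convert two SNMP octet-counter samples into a bit rate,
    allowing for the counter wrapping through zero between polls."""
    delta = (curr_octets - prev_octets) % (1 << counter_bits)
    return delta * 8 / interval_s

# Two polls 10 seconds apart, 125,000,000 octets moved in between:
print(bits_per_second(1_000_000, 126_000_000, 10.0))  # 100000000.0 (100 Mbps)
```

The modulo arithmetic is why a poll that straddles a counter wrap still produces a sensible rate instead of a huge negative number; it also explains the gap in the graphs while a switch is powered off, since there are simply no samples to difference.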


More information

Layer 3 Network + Dedicated Internet Connectivity

Layer 3 Network + Dedicated Internet Connectivity Layer 3 Network + Dedicated Internet Connectivity Client: One of the IT Departments in a Northern State Customer's requirement: The customer wanted to establish CAN connectivity (Campus Area Network) for

More information

An Oracle White Paper October 2013. How to Connect Oracle Exadata to 10 G Networks Using Oracle s Ethernet Switches

An Oracle White Paper October 2013. How to Connect Oracle Exadata to 10 G Networks Using Oracle s Ethernet Switches An Oracle White Paper October 2013 How to Connect Oracle Exadata to 10 G Networks Using Oracle s Ethernet Switches Introduction... 1 Exadata Database Machine X3-2 Full Rack Configuration... 1 Multirack

More information

Navigating the Pros and Cons of Structured Cabling vs. Top of Rack in the Data Center

Navigating the Pros and Cons of Structured Cabling vs. Top of Rack in the Data Center May 2013 Navigating the Pros and Cons of Structured Cabling vs. Top of Rack in the Data Center Executive Summary There is no single end-all cabling configuration for every data center, and CIOs, data center

More information

M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2.

M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2. M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2. What are the different types of virtualization? Explain

More information

Fibre Channel over Ethernet: A necessary infrastructure convergence

Fibre Channel over Ethernet: A necessary infrastructure convergence Fibre Channel over : A necessary infrastructure convergence By Deni Connor, principal analyst April 2008 Introduction Consolidation of IT datacenter infrastructure is happening in all forms. IT administrators

More information

Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright

Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright Converged Networking Solution for Dell M-Series Blades Authors: Reza Koohrangpour Spencer Wheelwright. THIS SOLUTION BRIEF IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information

Windows TCP Chimney: Network Protocol Offload for Optimal Application Scalability and Manageability

Windows TCP Chimney: Network Protocol Offload for Optimal Application Scalability and Manageability White Paper Windows TCP Chimney: Network Protocol Offload for Optimal Application Scalability and Manageability The new TCP Chimney Offload Architecture from Microsoft enables offload of the TCP protocol

More information

White Paper. Network Simplification with Juniper Networks Virtual Chassis Technology

White Paper. Network Simplification with Juniper Networks Virtual Chassis Technology Network Simplification with Juniper Networks Technology 1 Network Simplification with Juniper Networks Technology Table of Contents Executive Summary... 3 Introduction... 3 Data Center Network Challenges...

More information

Over the past few years organizations have been adopting server virtualization

Over the past few years organizations have been adopting server virtualization A DeepStorage.net Labs Validation Report Over the past few years organizations have been adopting server virtualization to reduce capital expenditures by consolidating multiple virtual servers onto a single

More information

Remote PC Guide Series - Volume 1

Remote PC Guide Series - Volume 1 Introduction and Planning for Remote PC Implementation with NETLAB+ Document Version: 2016-02-01 What is a remote PC and how does it work with NETLAB+? This educational guide will introduce the concepts

More information

Using a Fabric Extender with a Cisco Nexus 5000 Series Switch

Using a Fabric Extender with a Cisco Nexus 5000 Series Switch CHAPTER Using a Fabric Extender with a Cisco Nexus 5000 Series Switch This chapter describes the Cisco Nexus 000 Series Fabric Extenders (FEXs) and includes these sections: Information About, page - Cisco

More information

HP AppSystem for SAP HANA

HP AppSystem for SAP HANA Technical white paper HP AppSystem for SAP HANA Distributed architecture with 3PAR StoreServ 7400 storage Table of contents Executive summary... 2 Introduction... 2 Appliance components... 3 3PAR StoreServ

More information

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c

More information

Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013

Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013 Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall Semester 2013 1 Walmart s Data Center 2 Amadeus Data Center 3 Google s Data Center 4 Data Center

More information

Juniper / Cisco Interoperability Tests. August 2014

Juniper / Cisco Interoperability Tests. August 2014 Juniper / Cisco Interoperability Tests August 2014 Executive Summary Juniper Networks commissioned Network Test to assess interoperability, with an emphasis on data center connectivity, between Juniper

More information

Networking Topology For Your System

Networking Topology For Your System This chapter describes the different networking topologies supported for this product, including the advantages and disadvantages of each. Select the one that best meets your needs and your network deployment.

More information

Data Center Design IP Network Infrastructure

Data Center Design IP Network Infrastructure Cisco Validated Design October 8, 2009 Contents Introduction 2 Audience 3 Overview 3 Data Center Network Topologies 3 Hierarchical Network Design Reference Model 4 Correlation to Physical Site Design 5

More information

DATA CENTER. Brocade VDX/VCS Data Center Layer 2 Fabric Design Guide for Brocade Network OS v2.1.1

DATA CENTER. Brocade VDX/VCS Data Center Layer 2 Fabric Design Guide for Brocade Network OS v2.1.1 Brocade VDX/VCS Data Center Layer 2 Fabric Design Guide for Brocade Network OS v2.1.1 CONTENTS Introduction...4 Building a Multi-Node VCS Fabric...4 Design Considerations...4 Topology...4 Clos Fabrics...4

More information

Cloud-ready network architecture

Cloud-ready network architecture IBM Systems and Technology Thought Leadership White Paper May 2011 Cloud-ready network architecture 2 Cloud-ready network architecture Contents 3 High bandwidth with low latency 4 Converged communications

More information

Simplifying the Data Center Network to Reduce Complexity and Improve Performance

Simplifying the Data Center Network to Reduce Complexity and Improve Performance SOLUTION BRIEF Juniper Networks 3-2-1 Data Center Network Simplifying the Data Center Network to Reduce Complexity and Improve Performance Challenge Escalating traffic levels, increasing numbers of applications,

More information

Deliver Fabric-Based Infrastructure for Virtualization and Cloud Computing

Deliver Fabric-Based Infrastructure for Virtualization and Cloud Computing White Paper Deliver Fabric-Based Infrastructure for Virtualization and Cloud Computing What You Will Learn The data center infrastructure is critical to the evolution of IT from a cost center to a business

More information

Switching Fabric Designs for Data Centers David Klebanov

Switching Fabric Designs for Data Centers David Klebanov Switching Fabric Designs for Data Centers David Klebanov Technical Solutions Architect, Cisco Systems klebanov@cisco.com @DavidKlebanov 1 Agenda Data Center Fabric Design Principles and Industry Trends

More information

Unified Storage Networking

Unified Storage Networking Unified Storage Networking Dennis Martin President Demartek Demartek Company Overview Industry analysis with on-site test lab Lab includes servers, networking and storage infrastructure Fibre Channel:

More information

Flexible Modular Data Center Architecture Simplifies Operations

Flexible Modular Data Center Architecture Simplifies Operations Flexible Modular Data Center Architecture Simplifies Operations INTRODUCTION In today s ever changing IT world, technology moves at the speed of light. In response, IT and Facility stakeholders need data

More information

ANZA Formación en Tecnologías Avanzadas

ANZA Formación en Tecnologías Avanzadas Temario INTRODUCING CISCO DATA CENTER TECHNOLOGIES (DCICT) DCICT is the 2nd of the introductory courses required for students looking to achieve the Cisco Certified Network Associate certification. This

More information