Rack-Level I/O Consolidation with Cisco Nexus 5000 Series Switches


Introduction

Best practices for I/O connectivity in today's data centers configure each server with redundant connections to each of two networks: two or more Ethernet network interfaces connect to IP networks, and two or more Fibre Channel interfaces connect to storage networks. This approach has created a proliferation of cabling that imposes significant costs (Figure 1). Each network requires a dedicated set of switches, interfaces, transceivers, and cabling, along with the ongoing expense of managing, servicing, powering, and cooling that equipment. There are hidden costs as well: to gain sufficient expansion-slot capacity for all these interfaces, IT departments often must upgrade to larger, more expensive servers solely for their I/O capacity, expending capital and consuming valuable data center rack space.

Figure 1. Today's Data Center Best Practices Equip Each Server with a Redundant Pair of Ethernet and Fibre Channel Adapters, Resulting in a Proliferation of Access-Layer Cabling and Switching

Cisco Nexus 5000 Series Switches change this situation by delivering a unified network fabric that carries Ethernet, Fibre Channel, and even interprocess communication (IPC) traffic over a single 10 Gigabit Data Center Ethernet (DCE) and Fibre Channel over Ethernet (FCoE) link (Figure 2). Servers can be configured with a single pair of converged network adapters (CNAs), sometimes called multifunction adapters, that interface with both the LAN and storage stacks in the operating system. This new type of adapter requires no change to LAN or SAN behavioral models. FCoE is compatible with Fibre Channel management models, configuration methods, and tools, making the transition to FCoE a quick and easy proposition.

© 2008 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.

Figure 2. Cisco Nexus 5000 Series Switches Consolidate Ethernet, Fibre Channel, and IPC Networks onto a Single Network Link, Simplifying Rack-Level Cabling and Switching

The benefits of this rack-level I/O consolidation include the following:

- Multiple NICs and HBAs can be consolidated into one CNA and use up to the 10-Gbps bandwidth limit per port. Reducing the number of adapters simplifies server configuration and conserves expansion slots.
- Total cost of ownership (TCO) savings can be significant; for more information, please see Cisco Nexus 5000 Series Switches: Decrease Data Center Costs with Consolidated I/O.
- All server-to-access-switch cabling can be replaced with the low-cost, low-latency Small Form-Factor Pluggable Plus (SFP+) direct-attach 10 Gigabit copper solution, which eliminates costly fiber transceivers, large cable bundles, and typically 90 percent of the long runs to end-of-row Fibre Channel switches.
- The Cisco Nexus 5000 Series Switches connect directly to data center storage networks through native Fibre Channel interfaces, protecting existing investments in storage equipment, management software, and staff training.
- Fewer upstream ports, due to aggregation of adapter ports at the access switch, mean fewer modular SAN switch ports are required.

This document explores the benefits of I/O consolidation in two deployment scenarios: high-density racks of two-rack-unit (2RU) servers, and racks of blade servers as they begin to support 10 Gigabit Ethernet.

I/O Consolidation with High-Density 2RU Server Pods

The Cisco Nexus 5020 Switch has the port density and I/O consolidation capability to dramatically reduce the number of cables and switches needed to support racks filled with 2RU servers. Figure 3 illustrates one possible scenario: a pod composed of two racks, each with 20 2RU servers and a single Cisco Nexus 5020.

Figure 3. I/O Consolidation Scenario with Two 42RU Racks of 20 2RU Servers Each

Each server is configured with two CNAs that carry the server's LAN and SAN traffic over 10 Gigabit Ethernet links. One CNA per server connects to the switch in its own rack; the other connects to the switch in the second rack. Because CNAs can provide a pair of 10 Gigabit Ethernet interfaces, a single CNA per server, with one cable to each of the two switches, is another potential configuration if adapter redundancy is not a concern.

For cabling within the pod, the SFP+ direct-attach 10 Gigabit copper solution provides a low-cost, low-power, low-latency alternative to fiber or 10GBASE-T copper. This solution integrates Twinax cabling with SFP+ transceivers to deliver only 0.25 microsecond of latency per link while consuming only 0.1 watt (W) of power per transceiver, helping lower both capital and operating costs.

Any available port on the Cisco Nexus 5020 can be used as a LAN uplink; in this scenario, however, the 40 fixed switch ports are all occupied by connections to servers, so ports on expansion modules provide the uplinks needed to establish a desired oversubscription ratio. Attaching to native Fibre Channel SANs requires 4-Gbps Fibre Channel ports, which are available only on expansion modules. Cisco offers three expansion module options: a 6-port 10 Gigabit Ethernet, Data Center Ethernet, and FCoE module; an 8-port 1-, 2-, or 4-Gbps Fibre Channel module; and a combination module with four 10 Gigabit Ethernet, Data Center Ethernet, and FCoE ports plus four 1-, 2-, or 4-Gbps Fibre Channel ports.

The most common scenario in this two-rack, 40-server configuration uses two of the Ethernet-plus-Fibre Channel combination modules, providing a total of eight Ethernet and eight Fibre Channel uplinks. With every port populated, this supports an oversubscription ratio of 5:1 on each of the two networks.
The four most feasible combinations of the two expansion module slots are summarized in Table 1.
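The arithmetic behind these combinations is straightforward: the oversubscription ratio is the 40 server-facing links divided by the number of uplinks of each type. A minimal sketch of that calculation (the module names and pod size are taken from the text; the helper functions are illustrative, not a Cisco tool):

```python
# Oversubscription arithmetic for the two-rack, 40-server pod described above.
SERVER_LINKS = 40  # 20 2RU servers per rack x 2 racks, one 10GbE link per switch

# Uplink ports contributed by each expansion module option: (Ethernet, Fibre Channel)
MODULES = {
    "6-port 10GE/DCE/FCoE": (6, 0),
    "8-port 1/2/4-Gbps FC": (0, 8),
    "4-port 10GE + 4-port FC combo": (4, 4),
}

def oversubscription(uplinks, server_links=SERVER_LINKS):
    """Server-facing links per uplink; e.g. 40 servers over 8 uplinks -> 5.0."""
    return server_links / uplinks

def ratios(*modules):
    """Totals and oversubscription ratios for a pair of expansion modules."""
    eth = sum(MODULES[m][0] for m in modules)
    fc = sum(MODULES[m][1] for m in modules)
    return eth, oversubscription(eth), fc, oversubscription(fc)

# The "most common" scenario: two Ethernet-plus-Fibre Channel combo modules
eth, eth_ratio, fc, fc_ratio = ratios("4-port 10GE + 4-port FC combo",
                                      "4-port 10GE + 4-port FC combo")
print(f"{eth} Ethernet uplinks ({eth_ratio}:1), {fc} FC uplinks ({fc_ratio}:1)")
# -> 8 Ethernet uplinks (5.0:1), 8 FC uplinks (5.0:1)
```

Substituting the other module pairings reproduces the remaining rows of Table 1.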

Table 1. Different Combinations of Expansion Modules Yield Different Oversubscription Ratios for the LAN and SAN

  Expansion Module Combination                          Ethernet Links  Ethernet Ratio  FC Links  FC Ratio
  2x combo (4-port 10GE + 4-port FC)                    8               5:1             8         5:1
  1x combo (4-port 10GE + 4-port FC) + 1x 8-port FC     4               10:1            12        3.33:1
  1x 6-port 10GE + 1x 8-port FC                         6               6.67:1          8         5:1
  1x combo (4-port 10GE + 4-port FC) + 1x 6-port 10GE   10              4:1             4         10:1

Benefits of Consolidated I/O

An IT department can consolidate I/O in this way using a number of different methods:

- For servers already configured with compatible 10 Gigabit Ethernet NICs, a driver can implement both IP and Fibre Channel stacks and pass both traffic streams over the same link to the access-layer switch.
- In existing Gigabit Ethernet environments, Gigabit Ethernet NICs are often used along with a pair of Fibre Channel HBAs; here, the Fibre Channel links and several Gigabit Ethernet links can be consolidated onto a single pair of CNAs using one pair of cables per server.
- In organizations rolling out new servers, the cost of deploying multiple 10 Gigabit Ethernet NICs and multiple Fibre Channel HBAs can be avoided by configuring servers from the start with CNAs that support redundant, consolidated I/O.

Regardless of the route by which I/O consolidation is achieved, the benefits are numerous:

- All in-rack I/O connections can be supported by low-cost, low-latency, low-power SFP+ direct-attach 10 Gigabit copper cabling between servers and the top-of-rack switch. The need for more expensive fiber connections is limited to LAN connections to the aggregation layer and Fibre Channel connections to the SAN core.
- The number of in-rack cables is reduced by two or more per server, which also reduces the number of adapters and transceivers, along with their power draw and the load they place on the data center cooling infrastructure.
- With aggregation accomplished in server racks, or groups of server racks, only a small number of fiber connections need to extend to the aggregation layer. This helps reduce the overall number of switches, freeing valuable data center rack space while reducing capital and operating costs.

I/O Consolidation in Blade Server Racks

A growing number of options exist for 10 Gigabit Ethernet connectivity to blade servers. Today, some blade systems support internal switches that provide 10 Gigabit Ethernet connectivity to each blade and a small number of uplinks from the chassis. This situation is likely to change rapidly, with 10 Gigabit Ethernet appearing on blade server motherboards and consolidated network adapters appearing in mezzanine-board form factors. The alternative approach is for network pass-through connections to deliver each blade's network interfaces to the chassis, where they are extended directly to the access-layer switch.

Network pass-through configurations are preferred over built-in switches because they allow all traffic to be managed by a uniform data center network infrastructure, eliminating the need for another, inconsistent layer of management in the data center network. Blade switches force fixed oversubscription ratios at the chassis level, and they are also unlikely to provide the features needed to extend the unified network fabric to each blade; pass-through connections eliminate both concerns.

Hypothetical Blade Server with Ethernet Pass-Through Connections

Figure 4 shows a hypothetical blade server configuration using pass-through networking. Two blade systems are installed in a rack, each with 16 blades and two 10 Gigabit Ethernet links per blade extended to the chassis. I/O is consolidated on each blade in either of two ways:

- A mezzanine card can provide hardware-based consolidation, just as CNAs do in rack-mount servers.
- For blades with fixed 10 Gigabit DCE-capable Ethernet NICs, software drivers can provide both an Ethernet and a Fibre Channel stack to the OS, consolidating I/O and passing both traffic classes over the 10 Gigabit Ethernet link.

Figure 4. I/O Consolidation Scenario for a Hypothetical Blade Server with an Ethernet Pass-Through Configuration

In this example, 32 of the fixed Cisco Nexus 5020 ports on each switch are occupied by network links to blade systems, leaving 8 fixed ports available for network uplinks. This configuration yields an Ethernet network oversubscription ratio of 4:1. Note that this configuration still allows full line-rate connectivity between all blades in the rack, with no oversubscription within the rack.
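The port arithmetic for this hypothetical blade scenario can be sketched in a few lines (chassis and port counts follow the text; the Fibre Channel figures assume the 8-port FC expansion module this document discusses, with the ratio depending on how many of its uplinks are actually cabled):

```python
# Rough check of the blade pass-through scenario: 2 chassis x 16 blades,
# one 10GbE pass-through link per blade to each Cisco Nexus 5020.
BLADE_LINKS = 2 * 16                 # 32 fixed switch ports used per switch
FIXED_PORTS = 40                     # fixed ports on the Nexus 5020
uplinks = FIXED_PORTS - BLADE_LINKS  # 8 fixed ports left for Ethernet uplinks

print(f"Ethernet oversubscription: {BLADE_LINKS // uplinks}:1")  # -> 4:1

# Fibre Channel: with an 8-port FC expansion module, the ratio is tuned
# simply by the number of FC uplinks cabled.
for fc_uplinks in (8, 4, 2):
    print(f"{fc_uplinks} FC uplinks -> {BLADE_LINKS // fc_uplinks}:1")
```

Cabling all 8 FC ports gives 4:1; dropping to 2 cabled ports gives the 16:1 upper end of the range.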

Connectivity to the Fibre Channel SAN is provided by expansion modules for the Cisco Nexus 5020. A logical choice would be to add one 8-port 1-, 2-, or 4-Gbps Fibre Channel expansion module to each switch. This configuration supports Fibre Channel oversubscription ratios that can be adjusted from 4:1 to 16:1 simply by varying the number of uplinks used on the 8-port module.

Conclusion

The unified network fabric supported by Cisco Nexus 5000 Series Switches dramatically simplifies rack-level networking by consolidating Ethernet and Fibre Channel traffic onto a single network link. I/O consolidation at the rack level, using Cisco Nexus 5000 Series Switches in top-of-rack configurations, helps reduce the number of adapters, cables, transceivers, and upstream ports required. The cost savings potential makes I/O consolidation at the rack level a compelling business proposition; for more information, see Cisco Nexus 5000 Series Switches: Decrease Data Center Costs with Consolidated I/O.

For More Information

http://www.cisco.com/go/nexus
http://www.cisco.com/go/dc
http://www.cisco.com/en/us/products/ps9670/prod_white_papers_list.html

Printed in USA C11-490579-00 08/08