3G Converged-NICs: A Platform for Server I/O to Converged Networks




White Paper, June 2010

This document helps those responsible for connecting servers to networks achieve network convergence by providing an overview of key industry dynamics, a look at Ethernet capabilities with data center bridging, and an understanding of how C-NICs will emerge as the platform for connectivity to converged networks.

Data Center Infrastructure Consolidation

Infrastructure consolidation is a key data center initiative because it simplifies operations and saves money. That's why IT organizations routinely standardize infrastructure to consolidate training, deployment, and management. This practice extends to host connectivity, where standardizing on one type of hardware platform and one driver image is common for LAN, SAN, and high-performance computing (HPC) network adapters. In a recent survey of IT professionals with hands-on experience with host adapters, 38% said that all of their servers were configured with a standard 1-GbE NIC and driver image. Another 18% said that up to 75% of their servers were configured with a standard 1-GbE NIC image.

Figure 1: IT Pro Survey Results. What percentage of your servers are configured with a standard 1-GbE NIC and driver image? (Responses ranged from "100% of my servers use a standard 1-GbE NIC and driver image" down to "No standard, each server is a custom configuration.")

Server I/O Consolidation Has Not Been Possible

Today, dramatically different classes of network controller ASICs are required to support the specific needs of Ethernet LANs, Ethernet/iSCSI SANs, Fibre Channel SANs, and InfiniBand HPC clusters. I/O consolidation has not been possible because prior generations of network controllers were not designed for the variety of network types that would allow universal deployment within LAN-on-Motherboard (LOM) or adapter card system environments.

Table 1: Types of I/O Adapters for Networking in the Data Center

I/O Type | Network | Unique Capability
LOM (LAN-on-Motherboard) | LAN | 1/10 GbE controllers embedded in most servers for communications with clients, remote server management, file-oriented network attached storage (NAS), and block-oriented iSCSI storage
Ethernet NIC (Network Interface Card) | LAN | 1/10 GbE PCI-E and blade server mezzanine cards used for communications with clients, remote server management, and file-oriented NAS
iSCSI HBA (Host Bus Adapter) | SAN | 1/10 GbE PCI-E and blade server mezzanine cards used for communications with block-oriented iSCSI storage
Fibre Channel HBA (Host Bus Adapter) | SAN | 4/8 Gb Fibre Channel PCI-E and blade server mezzanine cards used for communications with block-oriented Fibre Channel storage
InfiniBand HCA (Host Channel Adapter) | HPC Cluster | 20/40 Gb InfiniBand PCI-E and blade server mezzanine cards used for the low-latency inter-processor communications (IPC) required in high-performance computing clusters

A New Generation of 10-GbE Technology

Products based on the first generation of 10-GbE controller ASICs have been commercially available since the early 2000s. A new class of Ethernet controller ASICs is emerging in 2010 to support the second generation of 10-GbE technology. Key components of the new 10 Gb Ethernet are data center bridging, iSCSI offload, Fibre Channel over Ethernet, Remote Direct Memory Access (RDMA), and 10GBASE-T.

Data Center Bridging

Ethernet is designed to be a best-effort network that may drop packets or deliver packets out of order when the network or devices are busy. Extensions to the Ethernet protocol provide reliability without incurring the penalties of TCP. Additionally, they provide more control of bandwidth allocation and, thus, more effective use of bandwidth. These enhancements make Ethernet C-NICs viable host adapters for Fibre Channel SAN and HPC cluster traffic. (An illustrative sketch of ETS-style bandwidth allocation appears at the end of this section.)

Table 2: Ethernet Enhancements That Comprise Data Center Bridging

Enhancement | IEEE Protocol | What It Does
Priority Flow Control (PFC) | P802.1Qbb | A link-level flow control mechanism that can pause different classes of traffic independently
Enhanced Transmission Selection (ETS) | P802.1Qaz | Manages the bandwidth allocated to different traffic classes on multiprotocol links
Congestion Management (CM) | P802.1Qau | Provides end-to-end congestion management for the Fibre Channel over Ethernet (FCoE) protocol
Data Center Bridging Capabilities Exchange Protocol (DCBCXP) | 802.1Qaz | A discovery and capabilities exchange protocol that ensures consistent configuration of enhanced Ethernet devices across a network

iSCSI HBA Offload

A server can support connectivity to an iSCSI SAN by configuring a NIC with a software initiator that runs on the server. A C-NIC with iSCSI HBA offload removes the burden of iSCSI protocol processing from the server and frees processor resources to support the business applications for which the server was purchased.
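To make the bandwidth-sharing idea behind ETS concrete, here is a minimal Python sketch that divides a single 10-GbE link among hypothetical LAN, FCoE, and iSCSI traffic classes. The class names and percentages are illustrative assumptions, not values from this paper; in a real deployment the shares are negotiated over DCBX and enforced by the adapter and switch.

```python
# Illustrative only: ETS (IEEE 802.1Qaz) assigns each priority group a share
# of link bandwidth. The groups and percentages below are hypothetical.

LINK_SPEED_GBPS = 10  # one 10-GbE C-NIC port

ets_allocation = {        # percent of link bandwidth per traffic class
    "LAN (TCP/IP)": 40,
    "SAN (FCoE)": 40,
    "SAN (iSCSI)": 20,
}

assert sum(ets_allocation.values()) == 100, "ETS shares should cover the full link"

for traffic_class, share in ets_allocation.items():
    guaranteed_gbps = LINK_SPEED_GBPS * share / 100
    print(f"{traffic_class:<13} {share:>3}% -> {guaranteed_gbps:.1f} Gb/s guaranteed minimum")
```

Under ETS these shares act as guaranteed minimums during congestion; a traffic class may borrow bandwidth that other classes are not using.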

Fibre Channel over Ethernet

This protocol was developed so that IT organizations could leverage their Ethernet training and equipment to support connectivity to Fibre Channel storage. This is accomplished with the FCoE protocol, which encapsulates Fibre Channel frames in Ethernet frames. C-NICs that support FCoE connect to Fibre Channel storage through an Ethernet switch with FCoE bridging, or directly to native FCoE storage through a traditional Ethernet switch. (A simplified encapsulation sketch appears at the end of this section.)

10GBASE-T

In the near future, most data centers will migrate from 1-GbE networks to 10-GbE networks. A mass migration will be accelerated by 10GBASE-T technology, which enables the use of familiar and low-cost RJ45 connectors. C-NICs with RJ45 connectors and 10GBASE-T allow data center managers to deploy 10-GbE networks and storage using the CAT6A copper cables already present in most of the world's data centers. Without 10GBASE-T, miles of new optical cable and/or more expensive optical connectors must be installed to migrate an entire data center to 10 Gb speeds. A recent end-user survey by IT Brand Pulse indicates that IT professionals understand the benefits of 10GBASE-T technology: approximately 50% said they would prefer to connect their 10-GbE adapters with 10GBASE-T.

Figure 2: IT Pro Survey Results (Dell and HP End Users). How do you prefer to connect your 10-GbE adapters? (Response options included 10GBASE-T, SFP+, CX4, a blade server mezzanine card connection, and Other.)
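To show what "encapsulating Fibre Channel frames in Ethernet frames" looks like at the byte level, here is a deliberately simplified Python sketch. The MAC addresses and the zero-filled FC payload are placeholders, and the real FC-BB-5 frame format adds a version field, reserved bits, and SOF/EOF delimiters that are omitted here; only the FCoE EtherType (0x8906) is the genuine value.

```python
# Illustrative only: a heavily simplified view of FCoE encapsulation.
# The real FC-BB-5 frame format also carries a version field, reserved bits,
# and SOF/EOF delimiters around the Fibre Channel frame; they are omitted here.
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an opaque Fibre Channel frame in an Ethernet frame."""
    ether_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return ether_header + fc_frame  # the FCS is normally appended by the MAC

# Hypothetical placeholder values
fcf_mac = bytes.fromhex("0efc00fffffe")   # stand-in for an FCoE forwarder MAC
cnic_mac = bytes.fromhex("001018aabbcc")  # stand-in for the C-NIC MAC
fc_frame = b"\x00" * 64                   # stand-in for a real Fibre Channel frame

frame = encapsulate_fcoe(fcf_mac, cnic_mac, fc_frame)
print(len(frame), "bytes on the wire before the FCS")
```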

Convergence on Ethernet

Eventually the new generation of Ethernet, viable for Fibre Channel SANs and HPC clusters, will be pervasive in the data center. During convergence, islands of Fibre Channel, Ethernet, and InfiniBand adapters, switches, and storage will converge onto Ethernet.

Figure 3: Data Center Convergence on Ethernet. (Before: enterprise servers and HPC clusters attach through separate FC HBAs, Ethernet NICs/LOM, and InfiniBand HCAs to separate FC, Ethernet, and InfiniBand switches and storage. After: FCoE, TCP/IP, and RDMA traffic from enterprise servers and HPC clusters all runs over Ethernet NICs/LOM, Ethernet switches, and Ethernet storage, including iSCSI, NAS, and FCoE.)

The process of convergence on Ethernet is a continuum consisting of many milestones achieved by vendors and IT organizations over the course of several years. Convergence on Ethernet is already being implemented by organizations deploying LANs and iSCSI SANs.

Figure 4: Timeline for Convergence on Ethernet

Server Virtualization Driving the Need for 10 GbE

Server virtualization has clearly moved to the core of data center architectures and is driving the emergence of dense computing nodes consisting of multiprocessor, multicore servers loaded with virtual machines. As a result, data center managers are dealing with a massive aggregation of I/O and a corresponding need for more network bandwidth. The 8x growth in 10-GbE NIC port shipments expected between 2009 and 2013 indicates their response will be to dramatically increase the deployment of 10-GbE networks.

Today most servers running server virtualization software are configured with multiple 1-GbE NIC ports to support the increased I/O load from multiple virtual machines. A recent focus group said their virtualized servers were configured with an average of eight 1-GbE NIC ports. In a related survey, data center managers overwhelmingly selected server virtualization as the primary driver for 10-GbE adoption. Members of the focus group expressed strong interest in a 10-GbE C-NIC platform to provide additional network bandwidth and to reduce the number of ports, lowering power consumption and management complexity.

Figure 5: IT Pro Survey Results (Server Virtualization). Which data center application will drive your adoption of 10 GbE? (Response options included server virtualization, 10 Gb iSCSI storage, 10 Gb FCoE, 10 Gb NAS, 10 Gb LANs, HPC clusters, Xeon servers, and Other.)
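As a back-of-the-envelope illustration of why the focus group wanted to consolidate ports, the short Python calculation below compares the eight 1-GbE ports cited above with a hypothetical dual-port 10-GbE C-NIC. The dual-port configuration is an assumption for illustration, not a recommendation from this paper.

```python
# Compare aggregate bandwidth and port count for a virtualized server.
legacy_ports, legacy_speed_gbps = 8, 1   # eight 1-GbE NIC ports (focus group average)
cnic_ports, cnic_speed_gbps = 2, 10      # hypothetical dual-port 10-GbE C-NIC

legacy_bw = legacy_ports * legacy_speed_gbps   # 8 Gb/s aggregate
cnic_bw = cnic_ports * cnic_speed_gbps         # 20 Gb/s aggregate

print(f"Legacy: {legacy_ports} ports, {legacy_bw} Gb/s aggregate")
print(f"C-NIC:  {cnic_ports} ports, {cnic_bw} Gb/s aggregate")
print(f"Ports removed: {legacy_ports - cnic_ports}, bandwidth gained: {cnic_bw / legacy_bw:.1f}x")
```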

10 GbE for Server I/O to Converged Networks

An entire networking industry is at the threshold of a massive migration from disparate LAN, SAN, and HPC networks to converged networks based on Ethernet with data center bridging. One result is that multiple types of host adapters will be replaced by third-generation converged NICs that offer a combination of the following five features and benefits found only in 3G C-NICs:

1. Hardened LAN Connectivity

3G C-NICs and LOM make up 88% of the network I/O ports in the data center today.

Figure 6: Data Center Network Adapter Types. 2009 server adapter and LOM ports: Ethernet, 21M ports (88%); Fibre Channel, 2.7M ports (11%); InfiniBand, 200K ports (<1%).
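The percentages in Figure 6 follow directly from the port counts it reports; the short calculation below is just that arithmetic, and it also shows why the InfiniBand share rounds to under 1%.

```python
# Reproduce the Figure 6 percentages from the reported 2009 port counts.
ports = {"Ethernet": 21_000_000, "Fibre Channel": 2_700_000, "InfiniBand": 200_000}
total = sum(ports.values())

for network, count in ports.items():
    print(f"{network:<13} {count / total:6.1%}")  # ~87.9%, ~11.3%, ~0.8%
```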

2. Backward Compatibility

3G C-NICs and LOM deliver backwards compatibility with 1-GbE adapters, including training, drivers, installation scripts, and management software.

Figure 7: 3G C-NIC Backwards Compatibility. (3G C-NICs are backwards compatible with 1-GbE NIC training, drivers, installation scripts, and management software.)

3. Hardened for iSCSI SANs

In 2010, iSCSI storage arrays will comprise about 38% of all storage arrays. By 2013, about half of all storage arrays will be iSCSI. 3G C-NICs and LOM are hardened for fast iSCSI SAN environments, with a 1/10-GbE adapter run rate of over 1M iSCSI offload HBA ports per year in 2010 and growing rapidly.

Figure 8: External Storage Array Unit Shipments, 2009-2013. (iSCSI storage arrays will be about 50% of all storage array units shipped by 2013.)

4. Bridge to Fibre Channel SANs

3G C-NICs and LOM offer simultaneous connectivity to NAS, iSCSI SANs, and Fibre Channel SANs. Simultaneous LAN and SAN protocol support is a fundamental requirement for converged networking.

Figure 9: Simultaneous LAN and SAN Protocol Support

5. Forward Looking

3G C-NICs and LOM are backwards compatible with 1-GbE networks and forward looking, with an architecture designed to provide lower-power and lower-cost solutions to data center managers. 3G C-NICs represent a converged I/O platform for higher-performance networking, while also being more energy efficient.

Figure 10: 3G C-NIC Converged I/O Platform

A No-Compromise Converged I/O Platform Checklist

The following checklist identifies the features needed for a 10-GbE C-NIC to serve as a platform for I/O convergence in your data center (a short sketch for evaluating adapters against this checklist follows the list):

Data Center Bridging (DCB) support for all protocols: DCB is what enables the lossless Ethernet capability needed for storage networking. Some NICs and Converged Network Adapters (CNAs) support DCB on only one protocol at a time. 3G C-NICs are designed to support DCB while running all SAN, LAN, and offload protocols simultaneously.

Simultaneous LAN and SAN protocol support: For data centers that deploy LANs and a tiered SAN model that includes iSCSI and FC storage, a converged NIC must be able to run the TCP/IP, iSCSI, and FCoE protocols simultaneously.

Hardened LAN and iSCSI stack: Since 88% of data center ports are LAN ports, it's best to converge on a 10-GbE C-NIC platform hardened from millions of installed 1-GbE NICs.

Common drivers for 1 GbE and 10 GbE: A seamless migration from 1 GbE to 10 GbE is possible with C-NICs that share a common driver image.

Full offload protocol support to reduce CPU utilization: Some NICs and CNAs support iSCSI software initiators, but only 3G C-NICs provide full-offload engines to reduce CPU utilization and maximize the CPU available to applications. Offload protocols can reduce CPU utilization by 70%, which also means lower power draw by the server.

High performance coupled with low CPU utilization: Some NICs and CNAs provide high Input/Output Operations Per Second (IOPS), while others minimize CPU utilization. Only 3G C-NICs do both, allowing a network to perform without sacrificing valuable CPU cycles allocated to applications and virtualization.

Low power, Wake-on-LAN, and Energy Efficient Ethernet: Implementation of the IEEE 802.3az specification for Energy Efficient Ethernet (EEE) reduces server power usage when workloads are low. Adhering to the Wake-on-LAN and EEE standards helps free up data center budgets and is an important contributor to a greener data center.

10GBASE-T: CAT6A cabling is pervasive for traditional Ethernet networking. Only 3G C-NICs support 10GBASE-T connections, allowing data center managers to use existing CAT6A cables and tools, reach longer distances, and benefit from lower RJ45 connector costs.
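As a purely illustrative way to apply this checklist, the sketch below encodes the eight items as strings and reports which ones a candidate adapter is missing. The adapter name and its supported-feature set are hypothetical placeholders, not an evaluation of any real product.

```python
# Hypothetical example: score a candidate 10-GbE adapter against the checklist above.
CHECKLIST = [
    "DCB support for all protocols",
    "Simultaneous LAN and SAN protocol support",
    "Hardened LAN and iSCSI stack",
    "Common drivers for 1 GbE and 10 GbE",
    "Full offload protocol support",
    "High performance with low CPU utilization",
    "Low power, Wake-on-LAN, and EEE",
    "10GBASE-T",
]

def evaluate(adapter_name: str, supported: set) -> None:
    """Print which checklist items the adapter is missing, if any."""
    missing = [item for item in CHECKLIST if item not in supported]
    verdict = "meets the no-compromise checklist" if not missing else "is missing:"
    print(f"{adapter_name} {verdict}")
    for item in missing:
        print(f"  - {item}")

# Placeholder adapter that lacks two checklist items
evaluate("Example CNA X", set(CHECKLIST) - {"10GBASE-T", "Common drivers for 1 GbE and 10 GbE"})
```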

Summary

If you're a server, storage, or network architect or administrator responsible for host connectivity to a LAN, SAN, or HPC cluster, now is a good time to begin planning your migration to converged 10-GbE networks and C-NICs.

More on Broadcom's 10 GbE

For more information about Broadcom's 10-GbE controllers and adapters, visit: http://www.broadcom.com/products/ethernet-controllers/enterprise-server

Related Articles

iSCSI HBA offload and 10GBASE-T:
http://www.infostor.com/index/blogs_new/frank_berry/blogs/infostor/frank-berry_s_blog/post987_855043080666472366.html
http://www.networkcomputing.com/next-gen-network/broadcom-unveils-worlds-1st-converged-nic-with-iscsi-offload-on-10gbase-t.php

Convergence and Virtualization:
http://www.infostor.com/index/articles/display/4348346471/articles/infostor/san/fibre-channel/2010/april-2010/fcoe-i_o_convergence.html

Broadcom, the pulse logo, Connecting everything, and the Connecting everything logo are among the trademarks of Broadcom Corporation and/or its affiliates in the United States, certain other countries, and/or the EU. Any other trademarks or trade names mentioned are the property of their respective owners.

BROADCOM CORPORATION
5300 California Avenue, Irvine, CA 92617
Phone: 949-926-5000  Fax: 949-926-5203  E-mail: info@broadcom.com  Web: www.broadcom.com
2010 by BROADCOM CORPORATION. All rights reserved.
3G_C-NIC-WP100-R  June 2010