From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller



White Paper, April 2005

The focus of this paper is on the emergence of the converged network interface controller (C-NIC), which provides accelerated client/server, clustering, and storage networking and enables the use of unified TCP/IP Ethernet communications. The breadth and importance of server applications that can benefit from C-NIC capabilities, together with the emergence of server operating system interfaces enabling highly integrated network acceleration capabilities, promise to make C-NICs a standard feature of volume server configurations. C-NIC deployment will provide dramatically improved application performance, scalability, and server cost of ownership. The unified Ethernet network architecture enabled by the C-NIC will be non-disruptive to existing networking and server infrastructure, while providing significantly better performance at reduced cost vis-à-vis alternatives.

It is widely recognized that the single greatest factor impacting data center application performance and scalability is the server network I/O bottleneck. Network bandwidth and traffic loads for client/server, clustering, and storage traffic have outpaced, and will continue to outpace, CPU performance increases, producing a significant and growing mismatch of capabilities that has become one of the data center manager's largest problems. A common solution to this challenge has been to use different networking technologies optimized for specific server traffic types: for example, Ethernet for client/server communications and file-based storage, Fibre Channel for block-based storage, and special-purpose low-latency protocols for server clustering. However, such an approach carries acquisition and operational cost penalties, may be disruptive to existing applications, and inhibits migration to newer server system topologies, such as blade servers and virtualized server systems.

Enter Unified Ethernet Communications

An emerging approach focuses on evolving ubiquitous Gigabit Ethernet TCP/IP networking to address the requirements of client/server, clustering, and storage communications through deployment of a unified Ethernet communications fabric. The vision of such a network architecture is that it is non-disruptive to existing data center infrastructure and provides significantly better performance at a fraction of the cost, all the while preserving the existing investment in server and network infrastructure. Data centers can be upgraded in an evolutionary fashion, on a rack-by-rack basis, using existing management tools and interfaces. Additionally, the approach requires no modifications to existing applications. At the root of the emergence of unified Ethernet data center communications is the coming together of three networking technology trends: the TCP Offload Engine (TOE), remote direct memory access (RDMA) over TCP, and iSCSI.

TOE

TOE refers to offloading the TCP/IP protocol stack to a dedicated controller in order to reduce the TCP/IP processing overhead incurred by servers equipped with standard Gigabit network interface controllers (NICs). While TOE technology has been the focus of significant vendor engineering investment, a number of obstacles remain for broad-based TOE deployment. Broadcom's TOE technology overcomes these and provides a cost-effective vehicle for deployment. A disadvantage of existing TOE products, which typically are an embedded component of TOE network interface cards (TOE NICs), is their expense: one TOE NIC is needed in each server, two are needed for high-availability configurations, and prices are likely to run approximately $700 to $1000 per TOE NIC. Broadcom has announced TOE technology as a cost-effective solution for both NIC and LAN-on-motherboard (LOM) applications. In a NIC configuration, the price is expected to be similar to existing GbE NIC cards; in a LOM configuration, the ASIC cost can be as low as $35 in volume.
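To make the transparency point concrete, the sketch below (not part of the original white paper) shows an ordinary BSD sockets client in C; with a TOE-capable NIC and driver, the same unmodified code runs while TCP segmentation, checksums, and connection-state processing are handled on the adapter rather than by the host CPU. The address and port are placeholders.

/* Minimal TCP client using the standard BSD sockets API.
 * Illustrative sketch only: a TOE adapter offloads the TCP/IP
 * processing below this API boundary, so the application code
 * is unchanged. Address and port are placeholders. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);        /* ordinary TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv = { 0 };
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(9000);                    /* placeholder port    */
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr); /* placeholder address */

    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "hello over TCP";
    /* send()/recv() are unchanged whether or not the NIC offloads TCP. */
    send(fd, msg, sizeof(msg) - 1, 0);

    char buf[256];
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n > 0) {
        buf[n] = '\0';
        printf("received: %s\n", buf);
    }
    close(fd);
    return 0;
}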

RDMA

RDMA is a technology that allows the network interface controller (NIC), under the control of the application, to place data directly into and out of application memory, removing the need for data copying and enabling support for low-latency communications, such as clustering and storage communications. RDMA networking has until now consisted of specialized interconnects that represent an expensive and, for existing data centers, disruptive alternative due to their proprietary and monolithic architectures. In essence, they require a special RDMA-enabled NIC with a proprietary interface to a high-speed, proprietary fabric. For organizations with a large investment in standard servers, Gigabit networking, and various commercial and custom software applications, or for organizations with tight budgets, this is an unattractive option because it requires abandoning existing infrastructure rather than building upon it. A range of networking industry leaders, including Broadcom, Cisco, Dell, EMC, HP, IBM, and Microsoft, has come together to support the development of an RDMA over TCP protocol standard that provides facilities immediately useful to existing and future Gigabit TCP/IP-based clustering, storage, and other application protocols. RDMA APIs, including the Sockets Direct Protocol (SDP) and Winsock Direct (WSD), give the base of existing TCP/IP sockets applications access to the benefits of RDMA over TCP, while RDMA-native APIs, which allow applications to take maximum advantage of RDMA over TCP, have also been developed. These include the Direct Access Programming Library (DAPL), developed by the DAT Collaborative and incorporated into the standardization efforts of the Interconnect Standard Consortium (ISC) of the Open Group, as well as the emerging Windows Server Named Buffers API. A brief sketch of this native RDMA programming style follows the iSCSI discussion below.

iSCSI

iSCSI is designed to enable end-to-end block storage networking over TCP/IP Gigabit networks. It is a transport protocol for SCSI that operates on top of TCP through encapsulation of SCSI commands in a TCP data stream. iSCSI is emerging as an alternative to parallel SCSI or Fibre Channel within the data center as a block I/O transport for a range of applications, including SAN/NAS consolidation, messaging, database, and high-performance computing. Application servers have two options for supporting iSCSI initiators: a software-only initiator, such as the Microsoft iSCSI software driver with a standard Gigabit Ethernet NIC, or an iSCSI host bus adapter (HBA). iSCSI HBAs typically use dedicated controllers to fully offload the iSCSI and TCP/IP protocols from the server host CPU for optimum performance, and they leverage storage services, such as multi-path I/O, provided by the server operating system.
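As an aside not found in the original paper, the C sketch below illustrates the native RDMA programming style referred to in the RDMA section above, using a verbs-style API (libibverbs), which supports RDMA over TCP (iWARP) adapters as well as InfiniBand. It assumes a queue pair that has already been connected and a remote buffer address and access key exchanged out of band; as the paper notes, most applications would instead rely on SDP, WSD, or DAPL.

/* Sketch of posting a one-sided RDMA Write with the verbs API
 * (libibverbs). Illustrative only: connection setup, memory
 * registration, and completion handling are assumed to be done
 * elsewhere. */
#include <stdint.h>
#include <infiniband/verbs.h>

int post_rdma_write(struct ibv_qp *qp,      /* already-connected queue pair     */
                    struct ibv_mr *mr,      /* registration of the local buffer */
                    void *local_buf,        /* data to push to the peer         */
                    size_t len,
                    uint64_t remote_addr,   /* peer buffer, learned out of band */
                    uint32_t rkey)          /* peer's remote access key         */
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,                 /* from ibv_reg_mr()                */
    };

    struct ibv_send_wr wr = { 0 };
    struct ibv_send_wr *bad_wr = NULL;

    wr.wr_id      = 1;
    wr.sg_list    = &sge;
    wr.num_sge    = 1;
    wr.opcode     = IBV_WR_RDMA_WRITE;      /* NIC places data directly into the
                                               peer's registered memory, with no
                                               remote CPU copy                  */
    wr.send_flags = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    /* Hand the work request to the adapter; the completion is reaped
     * later from the completion queue with ibv_poll_cq(). */
    return ibv_post_send(qp, &wr, &bad_wr);
}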

Unleashing Data Center Application Performance Through IP Protocol Suite Offload

The specific performance benefits provided by TCP/IP, RDMA, and iSCSI offload depend upon two broad application-specific factors: the nature of the application's network I/O and the location of its performance bottlenecks. Among network I/O characteristics, average transaction size, throughput, and latency sensitivity play an important role in determining the value TOE and RDMA bring to a specific application. The degree to which an application is storage network-bound is a critical determinant of the benefit that iSCSI block storage networking brings to it. Margalla Communications' analysis of workloads running on scale-out 1P/2P servers shows that IP protocol suite offload benefits span the range of applications running within today's data centers. As summarized in Figure 1, IP protocol suite offload is especially beneficial for front-end web server applications, back-end high-performance computing and decision support applications, and Common Internet File Service (CIFS) file storage services. Taken together, these applications comprise a major portion of the overall server market, both in terms of shipments and revenue, making IP protocol suite offload the industry's first real and complete solution to the server network I/O problem that can be introduced in a seamless and non-disruptive manner.

Figure 1. IP Protocol Suite Offload Value by Server Workload (Source: Volume Server Workload Analysis, Margalla Communications, 2003). The figure rates TCP/IP offload, RDMA over TCP offload, and iSCSI offload value for each workload; its comments are summarized below.

Common Internet File Services (CIFS): Persistent, high-bandwidth client-to-file-server connections; back-end iSCSI storage enables file server consolidation.

Web Services: TOE/RDMA benefit scales with Web server capacity.

High-Performance Computing (HPC): RDMA improves HPC cluster scalability; biosciences and earth sciences applications are network I/O bound.

Decision Support Services: Client-to-server and server-to-storage communications are typically network I/O bound; RDMA impacts server cluster scalability.

The breadth and importance of server applications that can benefit from IP suite offload, together with the emergence of server operating system interfaces enabling highly integrated offload capabilities, promise to make IP protocol suite offload a standard feature of volume server configurations. This promise can only be fulfilled through the emergence of a new converged network interface controller (C-NIC) product category that goes beyond existing monolithic offload approaches to provide unified TOE, RDMA, and iSCSI offload and dramatically improved application performance, scalability, and server cost of ownership.

Converged Network Interface Controller: The Critical Enabler for Unified Ethernet Communications

The main focus of emerging C-NIC products will be on integrating each of the main hardware and software components of IP protocol suite offload into a unified whole. Specifically, an integrated C-NIC device will comprise the following capabilities:

- TCP protocol offload tightly coupled with server operating systems
- RDMA over TCP offload supported via RDMA interfaces such as the Sockets Direct Protocol (SDP) or WinSock Direct (WSD)
- iSCSI offload tightly coupled with the server operating system storage stack
- Embedded server management capability, with support for OS-present and OS-absent states, enabling complete remote systems management for rack, blade, and virtualized server systems

Figure 2. Server Convergence over Ethernet: C-NICs Concurrently Accelerate Ethernet TCP/IP-based Networking, Clustering, and Storage Traffic. The figure shows a C-NIC-equipped server carrying clustering (IPC), networking (LAN), storage, and management traffic over twisted-pair Ethernet to workstations and storage.

The requirement for seamless integration of the multiple facets of C-NIC functionality with server operating system environments is critical; out-of-the-box acceleration of existing and new applications using standard TCP/IP and SCSI interfaces will be a prerequisite for C-NIC deployment. A single integrated C-NIC device with associated software will support accelerated performance for general data networking, clustering, and storage communication. As shown in Figure 2, C-NIC-equipped servers will have the capability to concurrently support the three different traffic types across one or multiple server Gigabit ports, truly enabling unified end-to-end Gigabit Ethernet-based communications.

C-NIC Value Proposition Summary

C-NICs will allow data center administrators to maximize the value of available server resources. They will allow servers to share GbE network ports for all types of traffic, remove network overhead, simplify existing network cabling, and facilitate infusion of server and network technology upgrades. These benefits are provided with no changes to application software and while maintaining the existing user and software management interfaces. The following sections describe the important benefits that C-NICs provide.

Increased Server and Network Performance

Compared to existing Gigabit NICs, C-NICs allow the full overhead of network I/O processing to be removed from the server. In addition, aggregation of networking, storage, and clustering I/O offload into the C-NIC function removes network overhead and significantly increases effective network I/O, while improving cost-effectiveness versus alternative discrete offload approaches.

Simplified Server Additions and Data Center Network Upgrades

Since network I/O functions have been aggregated, newly added C-NIC-equipped servers, such as blade servers, need only a single Gigabit Ethernet connection. Conversely, upgrading to 10 GbE requires only the addition of a module to the server.

Improved Operational Efficiency

By using C-NICs, the interfaces to each server and rack are simplified. There are fewer connection points, fewer cables, fewer adapter cards, and easier upgrades to existing networks. Changes are localized to the C-NIC, and fewer changes translate to improved efficiency. In addition, for some data centers, more productive servers can result in fewer servers, which reduces acquisition costs and ongoing maintenance and management expenses.

BROADCOM CORPORATION
16215 Alton Parkway, P.O. Box 57013, Irvine, California 92619-7013
Phone: 949-450-8700  Fax: 949-450-8710  E-mail: info@broadcom.com  Web: www.broadcom.com

C-NIC-WP102-RDS
Broadcom, the pulse logo, Connecting everything, the Connecting everything logo, BroadSAFE, BroadVoice and BroadVoice32 are trademarks of Broadcom Corporation and/or its subsidiaries in the United States and certain other countries. © 2005 by BROADCOM CORPORATION. All rights reserved.