Windows TCP Chimney: Network Protocol Offload for Optimal Application Scalability and Manageability



White Paper

Windows TCP Chimney: Network Protocol Offload for Optimal Application Scalability and Manageability

The new TCP Chimney Offload architecture from Microsoft enables offload of the TCP protocol stack to a TCP offload engine (TOE)-based network interface card (NIC). The result is full integration of TOE capabilities within the Windows operating environment, while preserving customer investments in applications, manageability, and security. NetXtreme II converged network interface controllers (C-NICs) allow data center administrators to maximize the value of available server resources: servers can share Gigabit Ethernet (GbE) network ports for data networking, clustering, and storage traffic, while removing network processing overhead and delivering optimal application scalability. These benefits come with no changes to application software and no changes to the existing user and software management interfaces.

March 2005

Unified Ethernet Communications

An emerging approach to improving data center return on investment (ROI) focuses on evolving GbE TCP/IP networking to address the requirements of client/server, clustering, and storage communications through the deployment of a unified Ethernet communications fabric. The vision for such a network architecture is that it be non-disruptive to the existing data center infrastructure while providing significantly better performance at a fraction of the cost, all while preserving the existing investment in the server and network infrastructure. At the root of the emergence of unified Ethernet data center communications are the following three networking standards:

- TCP/IP over GbE
- Remote Direct Memory Access (RDMA) over TCP
- iSCSI

TCP/IP Over GbE

TCP/IP over GbE is the predominant protocol suite for enterprise data networking. TCP/IP local area networks (LANs) have primarily been implemented over Ethernet, which has seen steady growth from 10 Mbps (megabits per second) in 1990, through 100 Mbps in the mid-1990s, to 1 Gbps (gigabit per second) today. Specifications for 10-Gbps Ethernet have been ratified, and the technology is being deployed by early adopters.

RDMA Over TCP

RDMA over TCP technology allows the converged network interface controller, under the control of the application, to bypass the networking stack and place data directly into and out of application memory on a remote system. Bypassing the networking stack, combined with direct data placement, removes the need for data copying and enables support for the low-latency communications required by clustering and storage. Given the significance of this technology, a number of networking industry leaders, including Broadcom, Cisco, Dell, EMC, Hewlett-Packard, IBM, Microsoft, Intel, and Network Appliance, have come together to support the development of an RDMA over TCP protocol standard. RDMA over TCP provides standardized and highly affordable clustering, storage, and other application protocols, replacing proprietary and more expensive technologies with existing GbE infrastructure.
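To make the direct data placement idea concrete, the following is a minimal sketch, assuming a Linux host with libibverbs installed and an RDMA-capable adapter present, of how an application describes an RDMA write using the verbs-style interface that RDMA NICs (including later iWARP, i.e., RDMA-over-TCP, adapters) expose. The remote address and rkey are placeholders that a real peer supplies during connection setup, and the queue-pair plumbing is omitted; this illustrates the general programming model, not any specific C-NIC's API.

```c
/* Illustrative sketch (not from this paper): expressing an RDMA write
 * with libibverbs. Registration pins the buffer so the NIC can move
 * data directly, with no kernel-stack copy on either host. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable device found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register the local buffer so the NIC can DMA it directly. */
    char *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096, IBV_ACCESS_LOCAL_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }

    /* Describe an RDMA write: data moves from buf straight into the
     * peer's registered memory, bypassing both hosts' network stacks. */
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = 4096,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .opcode     = IBV_WR_RDMA_WRITE,
        .sg_list    = &sge,
        .num_sge    = 1,
        .send_flags = IBV_SEND_SIGNALED,
    };
    wr.wr.rdma.remote_addr = 0; /* placeholder: learned at connect time */
    wr.wr.rdma.rkey        = 0; /* placeholder: peer's steering tag     */

    /* A real application would now post 'wr' to a connected queue pair
     * with ibv_post_send(); connection establishment is omitted here. */
    printf("RDMA write descriptor prepared (%u bytes)\n", sge.length);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```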

iSCSI

iSCSI is designed to enable end-to-end block storage networking over TCP/IP Gigabit networks. iSCSI is a transport protocol for SCSI that operates on top of TCP by encapsulating SCSI commands in a TCP data stream. iSCSI is emerging as an alternative to parallel SCSI and Fibre Channel within the data center as a block I/O transport for a range of applications, including SAN/NAS consolidation, messaging, database, and high-performance computing. A simplified view of the header that carries each command over TCP is sketched below.
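As a rough illustration of that encapsulation, here is a minimal sketch, modeled on the fixed 48-byte Basic Header Segment (BHS) defined in RFC 3720, of the header iSCSI places in front of each protocol data unit in the TCP byte stream. The layout is simplified and the field values are placeholders; the point is only that block storage commands ride inside an ordinary TCP connection.

```c
/* Illustrative sketch: a simplified iSCSI Basic Header Segment (BHS),
 * the fixed 48-byte header prepended to every iSCSI PDU carried in the
 * TCP stream (per RFC 3720). Multi-byte fields are big-endian on the
 * wire; values below are placeholders. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#pragma pack(push, 1)
struct iscsi_bhs {
    uint8_t  opcode;          /* e.g. 0x01 = SCSI Command PDU          */
    uint8_t  flags;           /* opcode-specific flags/attributes      */
    uint8_t  rsvd[2];
    uint8_t  total_ahs_len;   /* extra header segments, 4-byte words   */
    uint8_t  data_seg_len[3]; /* 24-bit length of the payload to follow */
    uint8_t  lun[8];          /* target logical unit                   */
    uint32_t init_task_tag;   /* matches responses back to the command */
    uint8_t  opcode_specific[28];
};
#pragma pack(pop)

int main(void)
{
    struct iscsi_bhs bhs;
    memset(&bhs, 0, sizeof(bhs));
    bhs.opcode = 0x01;          /* SCSI Command */
    uint32_t payload = 512;     /* one block of WRITE data, say */
    bhs.data_seg_len[0] = (uint8_t)(payload >> 16);
    bhs.data_seg_len[1] = (uint8_t)(payload >> 8);
    bhs.data_seg_len[2] = (uint8_t)(payload & 0xff);

    /* The 48-byte header plus its data segment is simply written into
     * a connected TCP socket; no special network fabric is required. */
    printf("BHS is %zu bytes; data segment %u bytes\n",
           sizeof(bhs), payload);
    return 0;
}
```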

While the processing power of the server CPUs that run TCP/IP protocol software has increased over time, the demands of existing TCP/IP-based applications are growing faster still. Clustering and storage networking, in particular, require low CPU utilization and low latencies. TCP/IP over Ethernet can replace the alternative technologies (i.e., Fibre Channel or proprietary clustering hardware), but an approach significantly better than software-based protocol processing is required.

TCP Offload Engines for Network Protocol Processing Efficiency

Historically, networks have been slow relative to the processing power of the CPU, and performing TCP/IP processing on the host CPU was adequate. Network technology and traffic have since evolved at a much faster rate than CPU performance, to the point that processing TCP/IP at Gigabit speeds can overwhelm even the most modern CPUs. A common rule of thumb is that it takes 1 GHz of server CPU capacity to process 1 Gbps of TCP/IP network traffic in one direction (half duplex); driving a single GbE port at line rate in both directions can therefore consume roughly 2 GHz, the better part of a contemporary server CPU.
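A back-of-the-envelope calculation makes the scaling problem plain. The following minimal sketch applies the 1-GHz-per-Gbps rule of thumb above; the 3.0 GHz host CPU frequency is an assumed, typical figure for servers of this era, not a number from the paper.

```c
/* Back-of-the-envelope cost of software TCP/IP processing, using the
 * rule of thumb of ~1 GHz of CPU per 1 Gbps of traffic per direction.
 * The 3.0 GHz host clock is an assumed, typical 2005-era figure. */
#include <stdio.h>

int main(void)
{
    const double ghz_per_gbps = 1.0;  /* rule of thumb, one direction */
    const double host_cpu_ghz = 3.0;  /* assumed server CPU frequency */
    const double link_gbps[] = { 1.0, 2.0, 10.0 };

    for (int i = 0; i < 3; i++) {
        /* Full duplex: both directions at line rate. */
        double need_ghz = 2.0 * link_gbps[i] * ghz_per_gbps;
        printf("%4.0f Gbps full duplex: ~%4.1f GHz, i.e. %5.1f%% of a "
               "%.1f GHz CPU\n",
               link_gbps[i], need_ghz,
               100.0 * need_ghz / host_cpu_ghz, host_cpu_ghz);
    }
    return 0;
}
```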

To address this shortcoming, companies are building new devices with TCP/IP offload engines. TOE technology offloads the TCP/IP protocol stack to a dedicated controller in order to reduce the TCP/IP processing overhead in servers otherwise equipped with standard Gigabit network interface controllers.

While TOE technology has been the focus of significant vendor engineering investment, a number of obstacles have stood in the way of broad-based TOE deployment. First-generation TOE products have been based on an all-inclusive approach to TCP/IP offload, including offload of connection setup, data-path offload, and support for ancillary protocols such as Dynamic Host Configuration Protocol (DHCP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), and Internet Group Management Protocol (IGMP). Because no standard operating system interfaces existed for integrating TOE capabilities, vendors were attempting to ship the offloaded TCP/IP stack as a parallel protocol stack. Implementing a TOE approach that duplicates all of this functionality creates a number of important challenges with respect to complexity, manageability, and network security.

The first challenge is the administrative complexity of having to manage two separate TCP/IP implementations within the same system, each with its own repository for connection-state and protocol-specific information. In addition, continuity of experience for users, and continued support for networking features such as link aggregation, virtual LANs (VLANs), load balancing and failover, and the forwarding of traffic between network interfaces, may be put at risk.

The second challenge is that offloading connection setup may have security implications. With connection management centralized in operating system software that has been hardened and tuned over many years, there is a single, mature location at which to protect against denial-of-service and other attacks, backed by far more CPU resources with which to perform that protection.

Another challenge is that early TOE implementations used dedicated memory on the NIC for data buffering and, in some cases, context buffering. The result is a large subsystem that is more expensive and power hungry. This approach does not scale to higher speeds, because the size of the required memory grows with the network speed, and the board real estate required for a TOE with external memory makes it harder to offer as a standard LAN connection on the server.

Windows TCP Chimney Offload Architecture Overview

Microsoft's new TCP Chimney offload architecture enables offload of the TCP protocol stack to a TOE-based NIC. This enables the full integration of TOE capabilities within the Windows operating environment, while preserving customer investments in applications, manageability, robustness, and security. Windows applications that are currently bound by network processing overhead enjoy significant performance improvements and scale better when used with TCP Chimney. The result is a reduction in network protocol processing overhead for the server CPU. The TCP Chimney offload architecture is slated to ship this year as part of the recently announced Scalable Networking Pack (SNP) for Microsoft Windows, and in the Windows release code-named Longhorn.

The Microsoft Chimney offload architecture departs significantly from the all-inclusive first-generation TOE approaches and provides seamless integration of TOE capabilities within Windows. TCP Chimney offloads only the performance-sensitive TCP/IP data path (moving data between two peers) to the offload target, the TOE NIC, while keeping the control-path operations, that is, setup and teardown of accelerated TCP/IP connections and support of ancillary protocols such as DHCP, RIP, IGMP, and ARP, within the Windows TCP/IP stack.

[Figure: diagram of the Microsoft TCP Chimney block architecture]
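The division of labor the figure depicts can be summarized in code. The following is a minimal sketch with invented names and a deliberately simplified state record; it is not Microsoft's NDIS chimney interface, only an illustration of the host stack owning setup and teardown while an established connection's data path is handed to, and can be reclaimed from, the offload target.

```c
/* Illustrative sketch of the Chimney division of labor. All names and
 * structures are invented for illustration; they are not Microsoft's
 * actual NDIS offload interfaces. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Simplified per-connection state the host stack would hand to the
 * NIC when a connection's data path is offloaded. */
struct tcp_offload_state {
    uint32_t snd_nxt;   /* next sequence number to send     */
    uint32_t rcv_nxt;   /* next sequence number expected    */
    uint16_t snd_wnd;   /* peer's advertised receive window */
    bool     offloaded; /* data path currently on the NIC?  */
};

/* Control path: connection setup stays in software, where hardened
 * denial-of-service defenses already live. */
static void host_stack_connect(struct tcp_offload_state *c)
{
    printf("host stack: 3-way handshake completed in software\n");
    c->snd_nxt = 1000; c->rcv_nxt = 2000; c->snd_wnd = 65535;
    c->offloaded = false;
}

/* Hand only the established connection's cached state down the
 * "chimney" to the TOE NIC. */
static void offload_to_nic(struct tcp_offload_state *c)
{
    c->offloaded = true;
    printf("chimney: data path offloaded (snd_nxt=%u)\n", c->snd_nxt);
}

/* Data path: segmentation, ACK processing, and retransmission now run
 * on the NIC; the host only posts buffers. */
static void send_data(struct tcp_offload_state *c, uint32_t len)
{
    printf("%s transmits %u bytes\n",
           c->offloaded ? "TOE NIC" : "host stack", len);
    c->snd_nxt += len;
}

/* The host can reclaim ("upload") the connection at any time, e.g.
 * for teardown or any event that needs software handling. */
static void upload_to_host(struct tcp_offload_state *c)
{
    c->offloaded = false;
    printf("chimney: state returned to the Windows TCP/IP stack\n");
}

int main(void)
{
    struct tcp_offload_state conn;
    host_stack_connect(&conn); /* control path: software           */
    offload_to_nic(&conn);     /* chimney handoff                  */
    send_data(&conn, 4096);    /* data path: hardware accelerated  */
    upload_to_host(&conn);     /* teardown handled in software     */
    return 0;
}
```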

Benefits of the Chimney Approach to TCP Offload

The first benefit of the Chimney approach to TCP offload is the use of the Windows TCP/IP stack for connection setup and teardown and for the ancillary protocols, which enables a simplified implementation and a faster deployment model. The benefits of Chimney-based TCP/IP acceleration are transparently available to all Windows network applications that use the standard WinSock interface (Microsoft's implementation of the popular Sockets API), as the sketch below illustrates.

The second benefit of the Chimney approach is that handling connection setup in the Windows protocol stack addresses the security vulnerabilities posed by offload hardware performing connection setup.

Windows TCP Chimney is ideally suited to making TOE a standard function of volume server GbE ports. Broadcom's industry-leading NetXtreme II C-NIC delivers TCP protocol offload designed to match the Chimney TCP offload architecture, providing acceleration for GbE-based data as well as for clustering and storage traffic.
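To underline the transparency point, here is a minimal sketch of an ordinary Winsock client; the address and port are placeholders. Nothing in this code refers to the NIC or to offload: if the connection's data path has been offloaded under the Chimney architecture, this same unmodified code is accelerated.

```c
/* An ordinary Winsock TCP client. Whether the connection's data path
 * is offloaded to a TOE NIC is invisible at this layer. Address and
 * port below are placeholders; point them at a real server to run. */
#include <winsock2.h>
#include <string.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET) {
        WSACleanup();
        return 1;
    }

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family      = AF_INET;
    peer.sin_port        = htons(5001);            /* placeholder port    */
    peer.sin_addr.s_addr = inet_addr("192.0.2.1"); /* placeholder address */

    if (connect(s, (struct sockaddr *)&peer, sizeof(peer)) == 0) {
        const char msg[] = "hello over a possibly offloaded connection";
        /* An ordinary send(); any Chimney offload is transparent here. */
        send(s, msg, (int)sizeof(msg) - 1, 0);
    }

    closesocket(s);
    WSACleanup();
    return 0;
}
```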

NetXtreme II Value Proposition Summary

NetXtreme II C-NICs allow data center administrators to maximize the value of available server resources. NetXtreme II C-NICs also allow servers to share GbE network ports for all types of GbE TCP/IP traffic, removing network overhead, simplifying existing network cabling, and facilitating the infusion of server and network technology upgrades.

Benefits of the NetXtreme II C-NICs

The following sections summarize the benefits provided by NetXtreme II C-NICs.

Increased Server and Network Performance

Compared to existing Gigabit NICs, NetXtreme II C-NICs significantly reduce the overhead of network I/O processing on the server. Aggregating networking, storage, and clustering I/O offload into the C-NIC function improves cost-effectiveness versus single-purpose networks by leveraging the existing Ethernet infrastructure. Internal Broadcom benchmarking (using the popular NTTTCP benchmark) has shown that NetXtreme II C-NICs reduce server CPU utilization by up to five times, dramatically improving server efficiency.

Lower TCO for the Data Center, with Simplified Server Additions and Network Upgrades

Because data networking, clustering, and storage functions are aggregated, newly added NetXtreme II C-NIC-equipped servers (such as blade servers) only need a GbE connection. Conversely, upgrading to a higher speed (such as 10 GbE) requires replacing only one NIC type rather than multiple dedicated NICs and HBAs. Low-latency clustering addresses the needs of databases and other latency-sensitive applications.

Full Networked Storage Solution

NetXtreme II C-NICs provide high performance and low CPU utilization for file (i.e., CIFS and NFS) as well as block (iSCSI) storage, eliminating the need for a dedicated SAN with specialized equipment (such as Fibre Channel).

Improved Data Center Operational Efficiency

With NetXtreme II C-NICs, the interfaces to each server and rack are simplified: there are fewer connection points, cables, and adapter cards, and upgrades to existing networks are easier because changes are localized to the NetXtreme II C-NIC. Fewer changes translate to improved efficiency. In addition, more productive servers can mean fewer servers for some data centers, which reduces acquisition costs and ongoing maintenance and management expenses.

Broadcom, the pulse logo, Connecting everything, the Connecting everything logo, and NetXtreme are among the trademarks of Broadcom Corporation and/or its affiliates in the United States, certain other countries, and/or the EU. Any other trademarks or trade names mentioned are the property of their respective owners.

BROADCOM CORPORATION
16215 Alton Parkway, P.O. Box 57013
Irvine, California 92619-7013
Phone: 949-450-8700  Fax: 949-450-8710
E-mail: info@broadcom.com  Web: www.broadcom.com

© 2005 by BROADCOM CORPORATION. All rights reserved. NetxtremeII-WP100-R 03/01/05