10GBASE-T for Broad 10 Gigabit Adoption in the Data Center



Third-party information brought to you courtesy of Dell.

WHITE PAPER
10GBASE-T Ethernet Network Connectivity

The increasing use of virtualization and unified networking places extreme I/O demands on single-gigabit LANs, causing bottlenecks and growing complexity. Moving to 10 Gigabit Ethernet (10 GbE) addresses these problems. Broad deployment of 10GBASE-T will simplify data center infrastructures, making it easier to manage server connectivity while delivering the bandwidth needed for heavily virtualized servers and I/O-intensive applications.

Carl G. Hansen, Intel Corporation
Carrie Higbie, Siemon
Yinglin (Frank) Yang, CommScope, Inc.

EXECUTIVE SUMMARY

Advancing technologies place increased demands on 1 GbE networks. Trends such as data center server consolidation present a bandwidth challenge. 10 GbE alleviates this bottleneck while providing multiple media options, lower power requirements, and latency well within most users' requirements. Meanwhile, costs are generally dropping.

10 Gigabit Ethernet: Drivers for Adoption

The growing use of virtualization in data centers to reduce IT costs has caused many administrators to take a serious look at 10 Gigabit Ethernet (10 GbE) as a way to reduce the complexity they face with existing 1 Gigabit Ethernet (1 GbE) infrastructures. The server consolidation associated with virtualization has a significant impact on network I/O because it combines the network needs of several physical machines, plus background services such as live migration, onto a single machine. Together with trends such as unified networking, the use of a single Ethernet network for both data and storage traffic, this is increasing I/O demands to the point where a 1 GbE network becomes both a bottleneck and a source of complexity in the data center.

The move to unified networking requires rethinking data center networks. While 1 GbE connections might handle the bandwidth requirements of a single traffic type, they do not have adequate bandwidth for multiple traffic types during peak periods, which creates the need for multiple 1 GbE connections. Moving to 10 GbE addresses these problems by providing more bandwidth, and it simplifies the network infrastructure by consolidating multiple gigabit ports into a single 10 gigabit connection.

Data center administrators have a number of 10 GbE interfaces to choose from, including CX4, SFP+ fiber, SFP+ Direct Attach Copper (DAC), and 10GBASE-T. Today, most are choosing either 10 GbE optical or SFP+ DAC. However, limitations with these interfaces have kept them from being broadly deployed across the data center: fiber connections are not cost-effective for broad deployment; SFP+ DAC is limited by its seven-meter reach and requires a complete infrastructure upgrade; and CX4 is an older technology that does not meet high-density requirements. For 10GBASE-T, the perception to date has been that it requires too much power and costs too much for broad deployment. These concerns are being addressed by the latest manufacturing processes, which are significantly reducing both the power and the cost of 10GBASE-T. Widespread deployment requires a cost-effective solution that is backward compatible and flexible enough to reach the majority of servers in the data center.

This white paper looks at what is driving choices for deploying 10 GbE and how 10GBASE-T will lead to broader deployment, including its integration into server motherboards. It also outlines the advantages of 10GBASE-T in the data center: improved bandwidth, greater flexibility, infrastructure simplification, ease of migration, and cost reduction.

Table of Contents

10 GbE: Drivers for Adoption
The Need for 10 Gigabit Ethernet
Media Options for 10 Gb Ethernet
  10GBASE-CX4
  SFP+
    10GBASE-SR (SFP+ Fiber)
    10GBASE-SFP+ DAC
  10GBASE-T
    Reach
    Backward Compatibility
    Power
    Latency
    Cost
Data Center Network Architecture Options for 10 GbE
The Future of 10GBASE-T
10GBASE-T as LOM
Conclusion

The Need for 10 Gigabit Ethernet

A variety of technological advancements and trends are driving the increasing need for 10 GbE in the data center. The widespread availability of multi-core processors and multi-socket platforms is boosting server performance. That performance allows customers to host more applications on a single server, resulting in multiple applications competing for a finite number of I/O resources. Customers are also using virtualization to consolidate multiple servers onto a single physical server, reducing their equipment and power costs. Servers using the latest Intel Xeon processors can support consolidation ratios of up to fifteen to one. However, server consolidation and virtualization have a significant impact on a server's network bandwidth requirements, as the I/O needs of several servers must now be met by a single physical server's network resources.

To match the increase in network I/O demand, IT has scaled the network by doubling, tripling, or even quadrupling the number of gigabit Ethernet connections per server. This model has led to increased networking complexity, as it requires additional Ethernet adapters, network cables, and switch ports.

The transition to unified networking adds to the demand for high-bandwidth networking. IT departments are moving to unified networking to simplify network infrastructure by converging LAN and SAN traffic, including iSCSI, NAS, and Fibre Channel over Ethernet (FCoE), onto a single Ethernet data center protocol. This convergence simplifies the network but significantly increases network I/O demand by enabling multiple traffic types to share a single Ethernet fabric.

Continuing down the 1 GbE path is not sustainable: the added complexity, power demands, and cost of additional GbE adapters will not let customers scale to meet current and future I/O demands. Moving to 10 GbE addresses the increased bandwidth needs while greatly simplifying the network and lowering power consumption by replacing multiple gigabit connections with a single- or dual-port 10 GbE connection, as the sketch below illustrates.
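As a rough illustration of the scaling problem, the following minimal sketch works through the port math for one virtualized host. The 15:1 ratio is the consolidation figure cited above; the per-VM and overhead bandwidth numbers are hypothetical assumptions chosen only for illustration.

```python
# Back-of-the-envelope port math for a virtualized server.
# The 15:1 ratio comes from the consolidation figure cited above;
# per-VM and overhead bandwidth are hypothetical workload assumptions.
VMS_PER_HOST = 15        # consolidation ratio of up to 15:1
MBPS_PER_VM = 250        # assumed average demand per virtual machine
OVERHEAD_MBPS = 1000     # assumed allowance for live migration, storage, etc.

demand_mbps = VMS_PER_HOST * MBPS_PER_VM + OVERHEAD_MBPS

gbe_ports = -(-demand_mbps // 1000)        # ceiling division: 1 GbE ports
ten_gbe_ports = -(-demand_mbps // 10000)   # 10 GbE ports

print(f"Aggregate demand: {demand_mbps / 1000:.2f} Gb/s")
print(f"1 GbE ports needed:  {gbe_ports}")       # -> 5 adapters, cables, switch ports
print(f"10 GbE ports needed: {ten_gbe_ports}")   # -> 1 port, with headroom to spare
```

Even with conservative assumptions, a single consolidated host outgrows a practical number of gigabit links, while one 10 GbE port absorbs the same load with room to grow.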
Media Options for 10 Gb Ethernet

Despite industry consensus regarding the move to 10 GbE, broad deployment has been limited by a number of factors. Understanding this dynamic requires an examination of the pros and cons of the current 10 GbE media options. The challenge IT managers face is that each of the current options has a downside, whether in cost, power consumption, or reach.

10GBASE-CX4

10GBASE-CX4 was an early favorite for 10 GbE deployments, but its adoption was limited by bulky, expensive cables and a reach of only 15 meters. The size of the CX4 connector prohibited the higher switch densities required for large-scale deployment. The large-diameter cables are purchased in fixed lengths, which makes cable slack difficult to manage, and pathways and spaces may not be sufficient to handle the larger cables.

SFP+

SFP+ support for both fiber optic cables and DAC makes it a more flexible solution than CX4. SFP+ is ramping today, but it has limitations that will prevent this media from moving to every server.

10GBASE-SR (SFP+ Fiber)

Fiber is excellent for latency and distance (up to 300 meters), but it is expensive. Fiber offers low power consumption, but the cost of laying fiber everywhere in the data center is prohibitive, due largely to the cost of the electronics. Fiber electronics can be four to five times more expensive than their copper counterparts, which means ongoing active maintenance, typically priced from the original equipment purchase price, is also more expensive. Where a copper connection is readily available in a server, moving to fiber requires purchasing not only the fiber switch port but also a fiber NIC for the server.

10GBASE-SFP+ DAC

DAC is a lower-cost alternative to fiber, but it can reach only 7 meters, and it is not backward compatible with existing GbE. DAC requires the purchase of an adapter card and a new top-of-rack (ToR) switch topology. The cables are much more expensive than structured copper channels and cannot be field terminated, which makes DAC more expensive than 10GBASE-T. The adoption rate of DAC for LOM will be low, since it lacks the flexibility and reach of 10GBASE-T.

10GBASE-T

10GBASE-T offers the most flexibility and the lowest-cost media, and it is backward compatible with existing 1 GbE networks.

Reach

Like all BASE-T implementations, 10GBASE-T works for lengths up to 100 meters, giving IT managers a far greater level of flexibility in connecting devices in the data center. With that reach, 10GBASE-T can accommodate top-of-rack, middle-of-row, or end-of-row network topologies, giving IT managers the most flexibility in server placement, since it works with existing structured cabling systems. On higher-grade cabling plants (Category 6A and above), 10GBASE-T operates in a low-power mode (also known as data center mode) on channels under 30 meters, yielding a further power savings per port over the longer 100-meter mode. Data centers can create any-to-all patching zones that keep channels under 30 meters to realize these savings.

Backward Compatibility

Because 10GBASE-T is backward compatible with 1000BASE-T, it can be deployed in existing 1 GbE switch infrastructures in data centers cabled with Category 6 and Category 6A (or above) cabling, enabling IT to keep costs down while offering an easy migration path to 10 GbE.

Power

The challenge with 10GBASE-T is that the early physical layer interface chips (PHYs) consumed too much power for widespread adoption. The same was true when Gigabit Ethernet products were released: the original gigabit chips drew roughly 6.5 watts per port, and the chips improved from one generation to the next with process improvements, so GbE ports are now under 1 watt per port. The same has proven true for 10GBASE-T. The good news is that 10GBASE-T PHYs benefit greatly from the latest manufacturing processes; they are Moore's Law-friendly, and newer process technologies will continue to reduce both the power and the cost of the latest 10GBASE-T PHYs. When 10GBASE-T adapters were first introduced in 2008, they required 25 watts of power for a single port. Power has been reduced in successive generations using newer, smaller process technologies, and the latest 10GBASE-T adapters require only 10 watts per port. Further improvements will reduce power even more: in 2011, power will drop below 6 watts per port, making 10GBASE-T suitable for motherboard integration and high-density deployments.

Latency

Depending on packet size, latency for 1000BASE-T ranges from below 1 µs to over 12 µs. 10GBASE-T ranges from just over 2 µs to less than 4 µs, a much tighter latency range. For Ethernet packet sizes of 512 bytes or larger, 10GBASE-T's overall throughput offers an advantage over 1000BASE-T, and with larger packets its latency is more than three times lower. Only the most latency-sensitive applications, such as high-performance computing (HPC) or high-frequency trading systems, would be affected by normal 10 GbE latency; the incremental 2 µs latency of 10GBASE-T is of no consequence to most users. For the large majority of enterprise applications that have been operating for years with 1000BASE-T latency, 10GBASE-T only makes things better. Many LAN products purposely add small amounts of latency to reduce power consumption or CPU overhead. A common LAN feature is interrupt moderation: enabled by default, it typically adds around 100 µs of latency so that interrupts can be coalesced, greatly reducing the CPU burden. For many users this trade-off provides an overall positive benefit, as the sketch below illustrates.
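To put those numbers in perspective, here is a minimal sketch comparing the worst-case PHY latencies quoted above with a typical interrupt-moderation delay. All values are illustrative, taken from the ranges in this section rather than from new measurements.

```python
# Rough latency-budget comparison, in microseconds. PHY values are the
# worst-case figures quoted in this section; interrupt moderation is the
# approximate default delay mentioned above, so treat results as illustrative.
PHY_1000BASE_T_US = 12.0         # worst case quoted for 1000BASE-T
PHY_10GBASE_T_US = 4.0           # worst case quoted for 10GBASE-T
INTERRUPT_MODERATION_US = 100.0  # typical added delay when enabled

for name, phy_us in [("1000BASE-T", PHY_1000BASE_T_US),
                     ("10GBASE-T", PHY_10GBASE_T_US)]:
    total_us = phy_us + INTERRUPT_MODERATION_US
    share = 100.0 * phy_us / total_us
    print(f"{name}: PHY {phy_us:.0f} us of {total_us:.0f} us total "
          f"({share:.0f}% of the budget)")
```

With moderation enabled, the PHY contributes only a few percent of the end-to-end budget in either case; applications that truly need the lowest latency can tune or disable interrupt coalescing per interface (for example, via ethtool's coalescing settings on Linux) at the cost of higher CPU load.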
Cost

As power has dropped dramatically over the last three generations, cost has followed a similar downward curve. First-generation 10GBASE-T adapters cost US$1,000 per port; today's third-generation dual-port 10GBASE-T adapters are less than US$400 per port. In 2011, 10GBASE-T will be designed in as LAN on Motherboard (LOM) and included in the price of the server. By using the resident 10GBASE-T LOM ports, users will see significant savings over the purchase price of more expensive SFP+ DAC and fiber optic adapters and will free up an I/O slot in the server.
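Taken together, the per-port figures quoted in the Power and Cost sections trace a steady downward curve. The short sketch below restates them per gigabit of bandwidth; the generation labels are informal groupings for illustration, and the 6-watt and bundled-LOM entries are the projections cited above, not shipping product specifications.

```python
# Power and price per gigabit of bandwidth for 10GBASE-T adapter
# generations, using the per-port figures quoted above. The 6 W LOM
# entry is the paper's 2011 projection; its price is folded into the server.
generations = [
    # (label, year, watts per port, USD per port)
    ("first-generation adapter", 2008, 25.0, 1000.0),
    ("third-generation adapter", 2011, 10.0, 400.0),
    ("projected LOM",            2011,  6.0, None),
]

for label, year, watts, price in generations:
    w_per_gb = watts / 10.0   # each port carries 10 Gb/s
    cost = f"${price / 10.0:.0f}/Gb" if price is not None else "bundled with server"
    print(f"{label} ({year}): {w_per_gb:.2f} W/Gb, {cost}")

# For comparison, a modern GbE port at roughly 1 W delivers about 1.0 W/Gb,
# so the newest 10GBASE-T silicon is the more power-efficient choice per gigabit.
```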

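Pulling the media subsections together, the minimal lookup sketch below encodes each option's quoted reach and backward compatibility and picks candidates for a required channel length. The helper function and its inputs are illustrative only; real designs must also weigh cost, density, and the installed cabling plant. The table that follows maps the same technologies onto data center network architectures.

```python
# Candidate 10 GbE media for a given channel length, using the reach and
# compatibility figures quoted in the media sections above (illustrative
# helper, not a planning tool).
MEDIA = {
    "10GBASE-CX4":      {"reach_m": 15,  "backward_compatible": False},
    "10GBASE-SFP+ DAC": {"reach_m": 7,   "backward_compatible": False},
    "10GBASE-SR":       {"reach_m": 300, "backward_compatible": False},
    "10GBASE-T":        {"reach_m": 100, "backward_compatible": True},
}

def candidates(channel_m: float, need_1gbe_compat: bool = False) -> list[str]:
    """Return the media whose quoted reach covers the channel length."""
    return [name for name, spec in MEDIA.items()
            if spec["reach_m"] >= channel_m
            and (spec["backward_compatible"] or not need_1gbe_compat)]

# A 25 m end-of-row channel that must interoperate with existing 1 GbE gear:
print(candidates(25, need_1gbe_compat=True))   # -> ['10GBASE-T']
# Note: under 30 m, 10GBASE-T can also run in its low-power data center mode.
```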
10 GbE Technologies and Data Center Network Architectures

Technology: 10GBASE-SR (SFP+ Fiber)
Data center network architecture: Top of Rack (ToR), Middle of Row (MoR), End of Row (EoR), core network
Connectivity: Uplinks from ToR to the aggregation layer; inter-cabinet connectivity from servers to MoR; inter-cabinet connectivity from servers to EoR; backbone

Technology: 10GBASE-SFP+ DAC and 10GBASE-CX4
Data center network architecture: Top of Rack (ToR)
Connectivity: In-rack connections from servers to the ToR switch

Technology: 10GBASE-T
Data center network architecture: Top of Rack (ToR), Middle of Row (MoR), End of Row (EoR)
Connectivity: Inter-cabinet connectivity from servers to MoR; inter-cabinet connectivity from servers to EoR

Data Center Network Architecture Options for 10 GbE

The table above lists the typical data center network architectures applicable to the various 10 GbE technologies. It clearly shows that 10GBASE-T provides greater design flexibility than its two copper counterparts.

The Future of 10GBASE-T

Intel sees broad deployment of 10 GbE arriving in the form of 10GBASE-T. As shown in the forecast below, fiber still made up much of the 10 GbE physical media in data centers in 2010, but its share will continue to drop, to approximately 12% by 2013. Direct-attach connections will grow over the next few years, with large deployments in IP data centers and in high-performance computing. 10GBASE-T will grow from only 4% of physical media in 2010, eventually becoming the predominant media choice.

[Figure: 10 Gb physical media percentage forecast, 2008-2013, for CX-4, direct attach, BASE-T, and fiber. Fiber falls from 74% in 2008 to roughly 12% in 2013, while direct attach and BASE-T grow to dominate the mix.]

10GBASE-T as LOM

Server OEMs will standardize on BASE-T as the media of choice for broadly deploying 10 GbE in rack and tower servers. 10GBASE-T provides the most flexibility in performance and reach: OEMs can create a single motherboard design that supports GbE or 10 GbE at any distance up to 100 meters. 1000BASE-T is the incumbent in the vast majority of data centers today, and 10GBASE-T is the natural next step.

Conclusion

Broad deployment of 10GBASE-T will simplify data center infrastructures, making it easier to manage server connectivity while delivering the bandwidth needed for heavily virtualized servers and I/O-intensive applications. As volumes rise, prices will continue to fall, and new silicon processes have lowered power and thermal values. These advances make 10GBASE-T suitable for integration on server motherboards. This level of integration, known as LAN on Motherboard (LOM), will lead to mainstream adoption of 10 GbE for all server types in the data center.

For more information about 10GBASE-T and other Intel Ethernet products, visit:
www.intelethernet-dell.com
www.intel.com/go/ethernet
www.dell.com/servers

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information. The products described in this document may contain design defects or errors known as errata that may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications before placing your product order. Copies of documents that have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel's Web site at www.intel.com.

Copyright 2011 Intel Corporation. All rights reserved. Intel, Xeon, and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries. *Other names and brands may be claimed as the property of others. 0711/MBR/PMSI/PDF 325862-001US