The Transition to 10 Gigabit Ethernet Unified Networking: Cisco and Intel Lead the Way


What You Will Learn

In recent years, growth in Ethernet-based storage has surpassed that of storage-specific fabrics, propelled in large part by the increase in server virtualization. Unified networking over 10 Gigabit Ethernet in the data center offers compelling benefits for the evolving virtualized data center, including a simplified infrastructure, lower equipment and power costs, and greater flexibility. Ethernet is the foundation of cloud computing, and easy and simple access to the SAN will be critical as virtualization deployments continue to grow and new, on-demand data center models emerge. Intel and Cisco are two companies leading the transition to 10 Gigabit Ethernet unified networking. This document explores the benefits of unified networking, approaches to enabling it, and the ways that Intel and Cisco are helping to bring important consolidation technologies to enterprise data center customers.

Contents

What You Will Learn
Simplifying the Network with 10 Gigabit Ethernet
10 Gigabit Ethernet: The Fabric of Cloud Computing
The Promise of Ethernet Storage
Ethernet Enhancements for Storage
    IEEE DCB for Lossless Ethernet
    FCoE: Enabling End-to-End Unified I/O
Introducing the Open FCoE Architecture
    Open FCoE: Momentum in the Linux* Community
    Native Initiator Success: iSCSI
Intel and Cisco: Leading the Way to a Unified Fabric
    Intel Ethernet Unified Networking
    Broad Support Commitment
    Cost-Effective Efficiency
    Ease of Use
    Cisco Unified Fabric
    Cisco Nexus 5000 Series Switches
    Rich Feature Sets
    Investment Protection
Conclusion

Simplifying the Network with 10 Gigabit Ethernet

As IT departments look for ways to reduce costs and improve server efficiency, they are turning increasingly to server virtualization and consolidation. The benefits of virtualization are widely understood: less server hardware to purchase, reduced power and cooling needs, and centralized management. Today's servers are based on powerful new processors, including the Intel Xeon processor 5600 and 7500 series, that support more virtual machines per physical host than ever before, helping IT achieve greater consolidation ratios. The latest generation of Intel Xeon processors enables IT to consolidate servers at a 15:1 ratio, delivering power savings of up to 90 percent and a five-month return on investment (ROI). New four-socket processors are delivering 20 times the performance of previous-generation processors, and nearly 50 percent of the four-socket servers shipped today are being used for virtualization.1

Unfortunately, the success many organizations have achieved in attaining these benefits has been hindered by past practices for networking virtualized servers. As virtual machine density increases, a physical server's networking needs also increase, adding both cost and complexity. A typical virtualized server contains eight to ten Gigabit Ethernet LAN ports and two dedicated SAN ports (Figure 1).

[Figure 1. Multiple Gigabit Ethernet connections in a typical virtualized server.]

As the use of server virtualization continues to grow, 10 Gigabit Ethernet unified networking is simplifying server connectivity (Figure 2). Consolidating the traffic of multiple Gigabit Ethernet connections onto a single 10 Gigabit Ethernet adapter significantly reduces cable and infrastructure complexity and overall total cost of ownership (TCO). Enhancements to the Ethernet standard also enable 10 Gigabit Ethernet to support both LAN and SAN traffic, allowing IT to achieve further benefits by converging data and storage infrastructures. As a result of its ubiquity, cost effectiveness, flexibility, and ease of use, Ethernet has emerged as the unified data center fabric.

[Figure 2. Simplified server connectivity using 10 Gigabit Ethernet.]

10 Gigabit Ethernet: The Fabric of Cloud Computing

The growth in server virtualization has helped data center networks evolve from discrete, siloed infrastructures to more flexible fabrics with the scalability and agility necessary to address the needs of new usage models and provide an excellent foundation for enterprise cloud computing. More than 2.5 billion users will connect to the Internet in the next 5 years,2 with more than 10 billion devices.3 This usage will require eight times the amount of storage capacity, 16 times the network capacity, and more than 20 times the computing capacity by 2015.4 A new infrastructure must emerge to power this growth and enable the most efficient use of resources.

This infrastructure is cloud computing. The cloud is an evolution of computing that delivers services over the Internet to consumers and enterprises. Services scale as needed, and only when needed, without user intervention. A highly scalable and efficient cloud architecture is needed to provide both the technical attributes and the extreme resource utilization and efficiency that cloud computing promises. With its reduced hardware requirements, fewer points of management, and broad ecosystem support, 10 Gigabit Ethernet delivers the flexible, simplified network infrastructure needed to support cloud computing. The following characteristics make 10 Gigabit Ethernet the ideal fabric for cloud infrastructure:

- Ubiquity. Ethernet connectivity ships standard on nearly every server today, and Ethernet infrastructure is a universal data center component. When 10 Gigabit Ethernet LAN-on-motherboard (LOM) connections are integrated into the next generation of servers, unified LAN and SAN connectivity will be a feature available by default.

- Advanced virtualization support. Advanced server virtualization enables dynamic resource allocation and is required for any cloud computing infrastructure. Technologies from companies such as Intel, Cisco, VMware, and Microsoft are delivering line-rate 10 Gigabit Ethernet throughput and support for platform virtualization enhancements.

- Unified networking. A 10 Gigabit Ethernet unified fabric simplifies the network infrastructure by consolidating LAN and SAN traffic. Recent Ethernet enhancements help ensure quality of service (QoS) for critical traffic.

The Promise of Ethernet Storage

New usage models and the dramatic growth of data in their organizations have forced IT administrators to confront complicated technical and business challenges. Today, most IT departments deploy separate LAN and storage networks, with storage often divided between network-attached storage (NAS) for file-based applications and SAN (Fibre Channel and Internet Small Computer Systems Interface [iSCSI]) for block-based applications. The goal of unified networking is to allow a single network infrastructure, based on 10 Gigabit Ethernet, to carry all these disparate traffic types.

Ethernet has served as a unified data center fabric for years, supporting LAN, NAS (Network File System [NFS] and Common Internet File System [CIFS]), and iSCSI SAN traffic. With recent Ethernet enhancements and the ratification of the Fibre Channel over Ethernet (FCoE) specification, Ethernet adapters based on IEEE Data Center Bridging (DCB) can now help connect servers to Fibre Channel SANs. Extending Ethernet's ubiquity and wide familiarity to Fibre Channel SAN traffic will help accelerate the move to 10 Gigabit Ethernet based I/O consolidation in virtualized data centers, reduce costs, and enable simplification and agility.

Given its flexibility and long history, it is not surprising that Ethernet storage is the fastest-growing segment of the storage systems market. The industry research firm IDC estimates that the worldwide Ethernet-based storage systems (NAS and iSCSI SAN) market had a compound annual growth rate (CAGR) of approximately 23 percent between 2005 and 2009 (Figure 3).5 iSCSI storage shipments experienced the highest growth rate (70 percent) from 2005 to 2009, mainly as a result of broad iSCSI adoption in Microsoft Windows, virtual server, and blade server environments.
Industry analysts project continued gains in Ethernet storage market share due to the increasing deployment of Ethernet-only data centers (which use a unified 10 Gigabit Ethernet infrastructure for all data and storage traffic), the emergence of cloud computing, and the increasing deployment of FCoE solutions.

[Figure 3. Ethernet compared to Fibre Channel: actual unit shipments for Ethernet (iSCSI and NAS) and Fibre Channel storage targets by calendar year, 2002-2009 (IDC, 2009).]

Ethernet Enhancements for Storage

IEEE DCB for Lossless Ethernet

To strengthen 10 Gigabit Ethernet as a unified data center fabric, the IEEE has standardized Ethernet enhancements to support storage traffic, including FCoE and iSCSI. These extensions, known collectively as Data Center Bridging (DCB), enable better traffic prioritization over a single interface and an advanced means of shaping traffic on the network to decrease congestion. In other words, DCB delivers a lossless Ethernet fabric for storage traffic.

FCoE: Enabling End-to-End Unified I/O

FCoE is a logical extension of Ethernet that uses Fibre Channel's network, service, and protocol layers to carry data packets over Ethernet's physical and data link layers. Fibre Channel's unique network-centered management model has proven administrative capacity to scale to thousands of end nodes in a data center. The use of Fibre Channel's higher layers smooths the transition to FCoE because existing SAN-based applications do not need to change to benefit from the performance and cost advantages of FCoE. The provisioning responsibilities now split between server and storage administrators can be maintained in the transition from Fibre Channel SANs to Ethernet-based, FCoE-powered SANs. In addition, Ethernet links carrying FCoE traffic can be used in conjunction with native Fibre Channel links for the same traffic flow. This capability enables existing Fibre Channel SANs to be extended with new servers or storage using converged LAN and FCoE SAN links as needs change.

Many enterprises have extensive Fibre Channel installations, and the availability of software-based FCoE initiators makes the Fibre Channel SAN easily accessible to any server with a 10 Gigabit Ethernet port. By allowing Fibre Channel technology to use DCB-capable Ethernet, FCoE eliminates the need for dedicated Fibre Channel host bus adapters (HBAs), reducing cabling and switch-port requirements while coexisting with existing Fibre Channel infrastructure. The result is a simplified data center infrastructure, lower equipment and power costs, and universal SAN connectivity across the data center over the trusted Ethernet fabric.

Introducing the Open FCoE Architecture

The Open FCoE approach consists of standard 10 Gigabit Ethernet adapters and native OS-based FCoE initiators, which together provide a robust, scalable, and high-performance server connectivity option without expensive, proprietary hardware. As shown in Figure 4, Open FCoE implements the complete Fibre Channel protocol in the OS kernel. It provides libraries for different system-level implementations, allowing vendors to implement data-plane functions of the FCoE stack in hardware to deliver optimum performance.

[Figure 4. Overview of the Open FCoE initiator solution. The diagram shows the stack from application I/O through the SCSI storage interface, Fibre Channel protocol, and FCoE transport protocol in the operating system and device driver, down to the FCoE data path offloads, Ethernet MAC, and Ethernet PHY.]

Open FCoE: Momentum in the Linux Community

The Linux* community accepted the Open-FCoE project in November 2007 with the goal of accelerating development of a native FCoE initiator in the Linux kernel. The industry responded enthusiastically, and today more than 190 active participants in the community are contributing code, providing review comments, and testing the Open FCoE stack. To date, the Open FCoE source web site (www.open-fcoe.org) has received more than 20,000 hits.
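For illustration, on a Linux distribution that ships the Open-FCoE userspace tools (the lldpad and fcoe-utils packages), bringing up a software FCoE instance on a DCB-capable port looks roughly like the following sketch. The interface name eth2 is illustrative, and exact package and service names vary by distribution:

    # Turn on DCB and advertise the FCoE application priority on the port
    dcbtool sc eth2 dcb on
    dcbtool sc eth2 app:fcoe e:1

    # Create an FCoE configuration for the interface from the shipped
    # template, then start the FCoE service
    cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2
    service fcoe start

    # Verify the FCoE instance and list discovered Fibre Channel targets
    fcoeadm -i
    fcoeadm -t

From that point, fabric login and target discovery are handled by the native initiator in the kernel, as described above.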
Open industry standards and open source technology play significant roles in the modern data center, reducing R&D investment and enabling access to a multivendor supply chain designed for heterogeneous interoperability, ultimately resulting in greater choice and lower equipment costs. The Open FCoE approach offers a number of advantages in accessibility and ease of use:

Accessibility. The Open FCoE approach makes Fibre Channel SAN access available to any server with a DCB-capable 10 Gigabit Ethernet adapter installed. IT departments can standardize on these adapters to simplify server connectivity across multiple use cases. With FCoE support integrated into the OS, Fibre Channel SAN access will become readily available across these converged networks, and even more accessible when 10 Gigabit Ethernet LOM technology becomes pervasive.

Ease of use. Because the Open FCoE approach uses standard DCB-capable 10 Gigabit Ethernet adapters that comply with broad, well-known Ethernet standards, IT can use existing knowledge to configure and manage these adapters for FCoE deployments. In fact, IT can standardize on a single product or product family for all LAN and SAN connectivity. FCoE initiator integration into the OS also means common OS-based tool support across a product family, or even the use of adapters from multiple vendors as they become available.

Native Initiator Success: iSCSI

iSCSI provides an excellent example of the success of native storage initiators integrated into the OS. Today, all major operating systems include native iSCSI support, delivering the same benefits already discussed. Native iSCSI initiators have continued to mature and now support advanced adapter features and platform advancements that help deliver excellent performance. In fact, Intel recently demonstrated a standard Intel Ethernet 10 Gigabit Server Adapter supporting 1.25 million I/O operations per second (IOPS) using the native iSCSI initiator in Microsoft Windows Server* 2008 R2.6

IT departments standardizing on Intel Ethernet Server Adapters for iSCSI connectivity can use a single initiator, TCP/IP stack, and set of management tools and IT policies. Easier server provisioning, lower likelihood of human error, and simpler management help enable lower capital and operational expenditures. This standardization also allows IT to gain the benefits of simplified management and integrated tools. Intel expects the same benefits for Open FCoE based solutions.
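As a simple illustration of what a native initiator involves, the sketch below uses the standard administration tool of the Linux open-iscsi initiator to discover and log in to a target; the portal address is illustrative, and Windows offers equivalent steps through its built-in iSCSI initiator:

    # Discover targets exported by an iSCSI portal (address is illustrative)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.20

    # Log in to the discovered target; the SAN volume then appears to the
    # OS as an ordinary block device (for example, /dev/sdb)
    iscsiadm -m node -p 192.168.10.20 --login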
Intel and Cisco: Leading the Way to a Unified Fabric

As established leaders in Ethernet, Intel and Cisco are uniquely positioned to lead the industry transition to a unified 10 Gigabit Ethernet fabric. Together these companies led the DCB standards effort, and they were the first companies to include DCB and FCoE support in their products: Cisco in switches, and Intel in server adapters. Unified fabric support is a critical feature in products from each company, and these solutions enable infrastructure simplification, offer industry-leading virtualization support, and lay the foundation for dynamic, on-demand cloud computing models.

Intel Ethernet Unified Networking

Ethernet is the foundation of unified networking, and Intel has 30 years of experience delivering high-quality Ethernet products. Intel is the volume leader in Ethernet adapter shipments and has led the industry through speed transitions and enhancements to the Ethernet standard.

Broad Support Commitment

Intel Ethernet products have the broadest OS support in the industry, and Intel's long-term product plans align future products with new server platform capabilities and upcoming data center trends, including cloud computing. Intel is expanding the capabilities of its trusted Ethernet product line to incorporate FCoE for true unified networking. Qualification by leading storage vendors means that customers can be confident that servers with the Intel Ethernet Server Adapter X520 installed will interoperate with their storage systems.

Cost-Effective Efficiency

Standardizing on Intel Ethernet 10 Gigabit Server Adapters applies Ethernet economics to cost-effective, broad-based unified networking deployment. A single Intel Ethernet Server Adapter X520 supports FCoE, iSCSI, NAS, and LAN traffic for true unified networking without the need for expensive upgrades.

In terms of performance, the Intel Ethernet Server Adapter X520 combines high throughput, intelligent hardware-based offloads with native OS initiators, and stable hardware, while leaving processor cycles available for application processing. These adapters offload the main data paths to improve FCoE throughput. The net result is excellent FCoE performance for real-world workloads. In real-world implementations, SAN performance is determined by a number of factors, including application threading and storage target processor and disk speeds. At typical IOPS levels, processor utilization is generally quite low, in part because of recent increases in processor performance.

Ease of Use

By using the native storage initiators integrated into the OS, the Intel Ethernet Server Adapter X520 makes it easy to connect any server to the SAN. Standardizing on the Intel Ethernet Server Adapter X520 product line delivers a number of benefits to enterprise IT:

- Trusted OS-based initiators
- The same interface to configure an adapter for LAN or SAN traffic (Figure 5)
- Plug-and-play compatibility with existing management tools
- No proprietary software to install

[Figure 5. IT administrators use the same interface to configure LAN and Fibre Channel SAN traffic.]

Cisco Unified Fabric

Cisco Unified Fabric provides a key building block for both traditional and virtualized data centers, helping unify storage and data networking to deliver seamless convergence and scalability with reduced total cost of ownership and faster return on investment. It provides the flexibility of high-performance, highly available networks to serve diverse data center needs, including the lossless behavior required to carry storage traffic (Fibre Channel, FCoE, iSCSI, and NAS) over Ethernet. Offering the best of both the LAN and SAN worlds, Cisco Unified Fabric enables storage network users to take advantage of the economies of scale, robust vendor community, and aggressive roadmap of Ethernet while providing the high-performance, lossless characteristics of a Fibre Channel storage network.

Cisco Nexus 5000 Series Switches

The Cisco Nexus 5000 Series, including the new Cisco Nexus 5500 platform, was the first open-standards-based access-layer switch family to support I/O consolidation at the rack level through FCoE. More information can be found at http://www.cisco.com/go/nexus5000.

Rich Feature Sets

The rich feature sets of the Cisco Nexus 5000 Series Switches make them well suited for data center access-layer applications. They help protect the investment in data center racks with standards-based 10 Gigabit Ethernet and FCoE features and virtual machine awareness features that allow IT departments to consolidate networks based on their own requirements and timing. The combination of higher port density, lossless Ethernet, wire-speed performance, and very low latency makes the Nexus 5000 Series well suited to meet the growing demand for 10 Gigabit Ethernet that can support a common Ethernet-based fabric in enterprise and service provider data centers, protecting enterprises' investments.

Investment Protection

The Cisco Nexus 5000 Series supports consolidated I/O using FCoE on downlinks to servers. FCoE is Fibre Channel and, as such, offers familiar methods of configuration, operation, and management.
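As a sketch of that familiarity, FCoE is typically enabled on a Nexus 5000 Series switch by mapping a VLAN to a VSAN and binding a virtual Fibre Channel (vFC) interface to the server-facing 10 Gigabit Ethernet port. The VLAN, VSAN, and interface numbers below are illustrative:

    feature fcoe                    ! enable FCoE on the switch

    vlan 101
      fcoe vsan 101                 ! map the FCoE VLAN to VSAN 101

    interface vfc10                 ! virtual Fibre Channel interface
      bind interface Ethernet1/10   ! bound to the server-facing 10GbE port
      no shutdown

    vsan database
      vsan 101 interface vfc10      ! place the vFC interface in the VSAN

The vFC interface then behaves like any Fibre Channel port for zoning and management purposes.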

On the uplinks to the network, customers can choose among FCoE direct attachment to storage systems, native Fibre Channel direct attachment to storage systems, FCoE connection to FCoE-capable switches, and native Fibre Channel connection to existing SANs. Fibre Channel and FCoE can exist simultaneously in the Fibre Channel network, providing maximum flexibility and allowing customers to continue to use their existing administration and management tools.

FCoE is a compelling technology for enabling unified I/O in the data center and will continue to gain traction in the marketplace. Cisco is committed to developing, supporting, and promoting the FCoE technologies required to facilitate this transition in a smooth, evolutionary way. Data center administrators can now begin consolidating infrastructure to achieve economic benefits. IT departments can take advantage of what Cisco FCoE storage networking offers today and be prepared to gain additional benefits from operational efficiencies and performance enhancements.

Conclusion

10 Gigabit Ethernet's simplicity, flexibility, and familiarity make it the ideal basis for a unified data center network fabric and lay the groundwork for new models, including cloud computing, that will deliver more intelligent, responsive data centers and greater business agility. Through their product and industry leadership, Intel and Cisco are helping IT departments transition to 10 Gigabit Ethernet and build next-generation data center infrastructures.

For More Information

www.intel.com/go/unifiednetworking
www.cisco.com/go/fcoe

1 Source: Intel estimates as of January 2010. Performance comparison using SPECjbb*2005 business operations per second (bops). Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance.
2 IDC, Server Workloads Forecast, 2009.
3 IDC, The Internet Reaches Late Adolescence, December 2009.
4 8x network: 800 terabytes per second of IP traffic, estimated from internal Intel analysis (Network Supply/Demand 2010-2020 forecast); 16x storage: 60 exabytes of data stored, from Barclays Capital Storage Bits, September 2009, extrapolated by Intel for 2015; 20x compute: Intel internal long-range planning forecast, extrapolated to 1 billion virtual servers using one virtual machine per core.
5 IDC WW Storage Systems Tracker, December 2009.
6 Test configuration: Iometer v. 2006.7.27; number of managers = 1; number of workers per manager = 30 (30 workers total); number of logical unit numbers (LUNs) = 30; number of outstanding I/O operations = 50; I/O size = 512 bytes; 10 iSCSI targets with 3 LUNs per target. Target: StarWind Enterprise configured with a RAM disk. System under test: Supermicro 6026T-NTR+, Intel Xeon processor W 5680 (12M cache, 3.33 GHz, 6.40 GT/s Intel QuickPath Technology), 24 GB DDR3, Microsoft Windows Server* 2008 R2 x64. Network configuration: Cisco Nexus 5020 Switch; Intel 82599EB 10 Gigabit Ethernet Controller connected at 10 Gbps.

Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit www.intel.com/performance/resources/limits.htm.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word "partner" does not imply a partnership relationship between Cisco and any other company. (1009R)

© 2011 Intel Corporation. All rights reserved. Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. *Other names and brands may be claimed as the property of others.