Ethernet: THE Converged Network Ethernet Alliance Demonstration at SC 09
Authors: Amphenol, Cisco, Dell, Fulcrum Microsystems, Intel, Ixia, JDSU, Mellanox, NetApp, Panduit, QLogic, Spirent, Tyco Electronics, Volex
Table of Contents

I. Executive Summary
II. Technologies in the Demonstration
   - Data Center Bridging: DCBX and ETS, PFC
   - FCoE
   - iSCSI and iSCSI over DCB
   - iWARP
III. Description of Demonstration Setup
IV. Introduction to Products in the Demonstration
V. Testing Methodologies and Testing Results
VI. Conclusions

Figures List

Figure 1: Priority Flow Control
Figure 2: FCoE Mapping Illustration (Source: FC-BB-5 Rev 2.0)
Figure 3: iWARP Data Flow Diagram
Figure 4: Integrated Demonstration Diagram
Figure 5: FCoE Network
Figure 6: iSCSI over DCB Network
Figure 7: TCP LAN Network
Figure 8: iWARP Network
Figure 9: Monitor Maps for Viewing the Test Results
Figure 10: Testing results displayed from monitor F
Figure 11: Testing results displayed from monitor B
Figure 12: Testing results displayed from monitor C

Table List

Table 1: Traffic Class Priority and Bandwidth Summary
1. Executive Summary

Continuous reduction of Total Cost of Ownership (TCO) is the ultimate goal for building next-generation data center networks. Key technology transitions applicable to future data centers are network convergence and virtualization. The Ethernet Alliance multi-vendor, multi-technology showcase demonstrates a proof-of-concept converged network based on 10GbE, showing its ability to provide high-performance networking for various traffic types, including LAN, SAN and IPC traffic. This showcase highlights how network convergence takes advantage of high-speed Ethernet to deliver client messaging, storage, and server application communications over a unified network while maintaining the same level of high performance delivered in separate networks. In addition, data center interconnect technologies such as 10GBASE-T and SFP+ 10GbE Direct Attach Cables will be demonstrated.

2. Technologies in the Demonstration

Data Center Bridging

In order for Ethernet to carry LAN, SAN and IPC traffic together and achieve network convergence, some enhancements are required. These enhancement protocols are summarized as the Data Center Bridging (DCB) protocols, also referred to as Enhanced Ethernet (EE), which are defined by the IEEE Data Center Bridging task group. A converged Ethernet network is built on the following DCB protocols:

- DCBX and ETS

Existing Ethernet standards do not provide adequate capability to control and manage the allocation of network bandwidth to different network traffic sources and/or types (traffic differentiation), or to allow management capabilities to prioritize bandwidth utilization across these sources and traffic types based on business needs. Lacking these capabilities, data center managers must either over-provision network bandwidth for peak loads, accept customer complaints during these periods, or manage traffic prioritization at the source side by limiting the amount of non-priority traffic entering the network. Overcoming these limitations is key to enabling Ethernet as the foundation for true converged data center networks supporting LAN, storage, and interprocessor communications.

The Enhanced Transmission Selection (ETS) protocol addresses the bandwidth allocation issues among various traffic classes in order to maximize bandwidth utilization. This standard (IEEE 802.1Qaz) specifies the protocol to support allocation of bandwidth among priority groups, allowing each node to control bandwidth per priority group. Bandwidth allocation is achieved as part of a negotiation process with link peers called DCBX (DCB Capability Exchange Protocol). When the actual load in a priority group does not use its allocated bandwidth, ETS allows other priority groups to use the available bandwidth. The bandwidth allocation priorities allow sharing of bandwidth between traffic loads while satisfying the strict priority mechanisms already defined in IEEE 802.1Q for traffic requiring minimum latency. ETS is defined by the IEEE 802.1Qaz Task Force.

An additional protocol, the DCB Capability Exchange Protocol (DCBX), is defined in the same specification. It provides a mechanism for Ethernet devices (bridges, end stations) to detect the DCB capabilities of a peer device. It also allows configuration and distribution of ETS parameters from one node to another. This simplifies management of DCB nodes significantly, especially when deployed end to end in a converged data center. The DCBX protocol uses the Link Layer Discovery Protocol (LLDP), defined by IEEE 802.1AB, to exchange and discover DCB capabilities.
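To make the ETS behavior concrete, the short Python sketch below models a 10GbE link divided into priority groups: each group is guaranteed its configured share, and bandwidth left unused by an idle group is redistributed to groups that still have demand, in proportion to their shares. This is a toy model of the concept only, not the 802.1Qaz scheduler itself, and the group names and numbers are illustrative.

```python
def ets_allocate(link_bw, shares, offered):
    """Approximate ETS behavior: each priority group is guaranteed its
    configured share of link_bw; bandwidth a group does not use is
    redistributed to groups that still have demand."""
    granted = {g: min(offered[g], link_bw * shares[g]) for g in shares}
    leftover = link_bw - sum(granted.values())
    while leftover > 1e-9:
        # Groups whose offered load is not yet satisfied share the leftover.
        hungry = {g: s for g, s in shares.items() if offered[g] > granted[g]}
        if not hungry:
            break
        total = sum(hungry.values())
        used = 0.0
        for g, s in hungry.items():
            extra = min(leftover * s / total, offered[g] - granted[g])
            granted[g] += extra
            used += extra
        leftover -= used
    return granted

# Example: the FCoE group idles at 1 Gb/s, so the iSCSI and LAN groups
# absorb its unused share instead of leaving the link underutilized.
print(ets_allocate(10.0,
                   {"fcoe": 0.4, "iscsi": 0.4, "lan": 0.2},
                   {"fcoe": 1.0, "iscsi": 6.0, "lan": 5.0}))
```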
- PFC

One of the fundamental requirements for a high-performance storage network is guaranteed data delivery. This requirement must be satisfied for critical storage data to be transported on a converged Ethernet network with minimum latency impact, so another critical enhancement to conventional Ethernet is lossless operation. IEEE 802.3x PAUSE defines how to pause link traffic at a congestion point to avoid packet drop. IEEE 802.1Qbb defines Priority Flow Control (PFC), which is based on IEEE 802.3x PAUSE and provides granular, per-priority control of traffic flow. PFC eliminates frames lost due to congestion: it pauses less sensitive data classes while leaving traditional LAN protocols, operating through different priority classes, unaffected. Figure 1 shows how PFC works in the converged traffic scenario.

Figure 1: Priority Flow Control
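The sketch below assembles a PFC frame to show how the per-priority mechanism extends 802.3x PAUSE: instead of one timer for the whole link, the frame carries an enable vector and eight timers, one per priority. The field layout follows the standard 802.1Qbb MAC Control encoding; the source MAC address and the paused priority are placeholders.

```python
import struct

def build_pfc_frame(src_mac: bytes, pause_priorities: dict) -> bytes:
    """Build an IEEE 802.1Qbb PFC frame: EtherType 0x8808, opcode 0x0101,
    a priority-enable vector, and eight 16-bit pause timers measured in
    512-bit-time quanta."""
    dst = bytes.fromhex("0180c2000001")      # MAC Control reserved address
    ethertype = struct.pack("!H", 0x8808)    # MAC Control EtherType
    opcode = struct.pack("!H", 0x0101)       # PFC opcode
    enable_vector = 0
    timers = [0] * 8
    for prio, quanta in pause_priorities.items():
        enable_vector |= 1 << prio           # mark this priority as paused
        timers[prio] = quanta
    body = struct.pack("!H8H", enable_vector, *timers)
    frame = dst + src_mac + ethertype + opcode + body
    return frame.ljust(60, b"\x00")          # pad to minimum Ethernet size

# Pause priority 3 (e.g., a storage class) for the maximum quanta while
# the other seven priorities keep flowing.
frame = build_pfc_frame(bytes.fromhex("020000000001"), {3: 0xFFFF})
print(frame.hex())
```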
FCoE

FCoE is an ANSI T11 standard for the encapsulation of a complete FC frame into an Ethernet frame. The resulting Ethernet frame is transported over Enhanced Ethernet networks as shown in Figure 2. Compared to other mapping technologies, FCoE has the least mapping overhead and maintains the same constructs as native Fibre Channel, thus operating with native Fibre Channel management software. FCoE relies on lossless Ethernet in order to preserve the buffer-to-buffer credit management and flow control of Fibre Channel packets.

Figure 2: FCoE Mapping Illustration (Source: FC-BB-5 Rev 2.0)
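As a rough illustration of that low mapping overhead, the sketch below wraps an unmodified FC frame in an Ethernet frame per the FC-BB-5 layout: EtherType 0x8906, a version/reserved header, SOF and EOF code points bracketing the FC frame, and reserved padding. The MAC addresses and the SOF/EOF choices here are illustrative, and a real FCoE endpoint would also run FIP for fabric login and MAC address assignment.

```python
import struct

FCOE_ETHERTYPE = 0x8906
SOF_I3, EOF_T = 0x2E, 0x42   # example SOF/EOF code points from FC-BB-5

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = SOF_I3, eof: int = EOF_T) -> bytes:
    """Wrap a complete FC frame (header, payload, CRC) in an Ethernet frame."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13)                  # 4-bit version (0) + reserved bits
    fcoe_trailer = bytes([eof]) + bytes(3)   # EOF byte + reserved bytes
    return eth_header + fcoe_header + bytes([sof]) + fc_frame + fcoe_trailer

# Illustrative FC frame stub: 24-byte FC header plus a 4-byte CRC placeholder.
fc_frame = bytes(24) + bytes(4)
frame = fcoe_encapsulate(bytes.fromhex("0efc00000001"),   # FPMA-style MACs
                         bytes.fromhex("0efc00000002"), fc_frame)
print(len(frame), "bytes on the wire before the Ethernet FCS")
```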
iSCSI and iSCSI over DCB

iSCSI, an Ethernet standard since 2003, is the encapsulation of SCSI commands transported via Ethernet over a TCP/IP network, and is by nature a lossless storage fabric: recovery from dropped packets and from oversubscribed, heavy network traffic patterns is inherent in iSCSI's design. So why would iSCSI need the assist of Data Center Bridging (DCB)? iSCSI over DCB reduces latency in networks that are oversubscribed and provides predictable, consistent application responsiveness, eliminating the dependence on TCP/IP (or SCTP) for the retransmission of dropped Ethernet frames. iSCSI over DCB adds the reliability that enterprise customers need for their data center storage.

iWARP

iWARP (Internet Wide Area RDMA Protocol) is a low-latency RDMA over Ethernet solution. The specification defines how the RDMA (Remote Direct Memory Access) protocol runs over TCP/IP. iWARP data flow (see Figure 3) delivers improved performance by:

- Eliminating intermediate buffer copies: data is placed directly in application buffers rather than being copied multiple times to driver and network stack buffers, freeing up memory bandwidth and CPU compute cycles for the application.
- Delivering a kernel-bypass solution: placing data directly in user space avoids kernel-to-user context switches, which add latency and consume CPU cycles that could otherwise be used for application processing.
- Accelerating TCP/IP (transport) processing: TCP/IP processing is done in silicon/hardware rather than in operating system network stack software, freeing up valuable CPU cycles for application compute processing.

Figure 3: iWARP Data Flow Diagram

By bringing RDMA to Ethernet, iWARP lends itself to environments that require low-latency performance in an Ethernet ecosystem, including HPC (High Performance Computing) clusters, financial services, enterprise data centers, and clouds, all of which value Ethernet as an existing, reliable, and proven IT environment that uses heterogeneous equipment and widely deployed management tools.
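The verbs-style flow below sketches the usage pattern those three bullets describe. It uses a hypothetical `rdma` wrapper module (not a real Python package) purely for illustration; real deployments program the verbs API, for example libibverbs, often through vendor SDKs.

```python
import rdma  # hypothetical wrapper module, for illustration only

def rdma_receive(buffer_size: int) -> bytes:
    nic = rdma.open_device("iwarp0")     # hypothetical iWARP-capable device
    buf = bytearray(buffer_size)
    # Register the application buffer with the NIC: the adapter gets a
    # DMA-able handle and can place payloads directly into user memory,
    # eliminating the intermediate kernel/driver copies.
    mr = nic.register_memory(buf)
    qp = nic.create_queue_pair()
    qp.post_receive(mr)                  # pre-post before data arrives
    # The NIC's hardware TCP/IP engine handles the wire protocol and writes
    # data straight into buf; completion polling bypasses the kernel, so no
    # context switch occurs on the data path.
    completion = qp.poll_completion()
    return bytes(buf[:completion.byte_count])
```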
3. Description of Demonstration Setup

Figure 4 illustrates the entire integrated demonstration network. The converged Ethernet demonstration contains:

- Three DCB-capable 10GbE switches serving as the foundation of the converged Ethernet network
- One 10GBASE-T switch connected to the converged network
- Four 10GbE servers installed with a Converged Network Adapter (CNA), a single I/O adapter that supports both FCoE and TCP/IP traffic; one of these servers is running virtual applications on top of a VMware hypervisor
- Two load-balancing clusters supporting the high-performance computing traffic in the converged network
- Two 10GBASE-T servers providing high-performance TCP LAN traffic
- A unified storage system running Ethernet (NFS), Fibre Channel and Fibre Channel over Ethernet protocols simultaneously within a single array, to further demonstrate storage system convergence
- A high-performance iSCSI storage system supporting iSCSI over DCB storage traffic
- Multiple cabling technologies that support 10GbE links: SFP+ Direct Attach Copper, CAT6A and multimode optical cables

In addition to the network components, the demonstration also includes advanced testing tools for measuring and verifying DCB and other advanced technologies in the converged Ethernet network. These tools provide the following capabilities:

- Load generation, measurement statistics, packet capture, host/target SCSI emulation and performance measurement, and virtualization application modules
- Trace capture and analysis and I/O performance test modules
- Virtualization application test modules
Figure 4: Integrated Demonstration Diagram

In this network, three types of traffic (FCoE, iSCSI over DCB, and TCP) are simultaneously transported across the network to establish a converged traffic scenario at the ISL#1 and ISL#2 links, sharing the 10GbE bandwidth. The network structures for each individual traffic type are illustrated in Figures 5 to 8. Two key DCB features are demonstrated here:

- Utilizing PFC to enable lossless data over Ethernet for critical FCoE and iSCSI storage data
- Utilizing ETS to maximize utilization of the 10GbE bandwidth and achieve high I/O and low latency for each traffic class

Table 1 shows the class-of-service and priority group assignments for the different traffic types. To differentiate the FCoE traffic generated from server storage data communication from that generated by the load generation tools, the two are assigned to different traffic classes.
Traffic Type                                     Class of Service Priority   Priority Group   Bandwidth   PFC Enabled
FCoE (server/storage)                            -                           -                -%          Yes
iSCSI over DCB                                   -                           -                -%          Yes
FCoE Load Generation                             -                           -                -%          Yes
TCP applications (iSCSI, virtual applications)   0                           N/A              10%         No

Table 1: Traffic Class Priority and Bandwidth Summary

Figure 5: FCoE Network
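Plans like Table 1's are advertised between link peers in the ETS Configuration TLV that DCBX carries inside LLDP frames. The sketch below packs such a TLV; the field layout (OUI 00-80-C2, subtype 9, a willing/CBS/max-TCs byte, a 4-bit-per-priority assignment table, eight per-TC bandwidth percentages, and eight per-TC algorithm selectors) follows IEEE 802.1Qaz, but the priority mapping and percentages shown are hypothetical stand-ins, not the demonstration's actual settings.

```python
import struct

def ets_config_tlv(prio_to_tc, tc_bandwidth_pct, tc_algorithm):
    """Pack an IEEE 802.1Qaz ETS Configuration TLV as carried by DCBX/LLDP."""
    assert len(prio_to_tc) == 8 and len(tc_bandwidth_pct) == 8
    assert sum(tc_bandwidth_pct) == 100
    oui_and_subtype = bytes.fromhex("0080c2") + bytes([0x09])
    flags = bytes([0x00])   # willing=0, CBS=0, max-TCs=0 (i.e., 8 classes)
    # Priority assignment table: two 4-bit priority-to-TC entries per byte.
    prio_table = bytes(
        (prio_to_tc[2 * i] << 4) | prio_to_tc[2 * i + 1] for i in range(4)
    )
    body = (oui_and_subtype + flags + prio_table
            + bytes(tc_bandwidth_pct) + bytes(tc_algorithm))
    tlv_header = struct.pack("!H", (127 << 9) | len(body))  # org-specific TLV
    return tlv_header + body

# Hypothetical plan in the spirit of Table 1: storage classes get ETS shares
# (TSA value 2 = ETS), best-effort traffic rides on priority 0.
tlv = ets_config_tlv(
    prio_to_tc=[0, 1, 1, 2, 3, 0, 0, 0],       # e.g., FCoE on priority 3 -> TC 3
    tc_bandwidth_pct=[10, 20, 30, 40, 0, 0, 0, 0],
    tc_algorithm=[2, 2, 2, 2, 2, 2, 2, 2],
)
print(tlv.hex())
```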
Figure 6: iSCSI over DCB Network
Figure 7: TCP LAN Network

In addition to the traffic types sharing the 10GbE trunk link bandwidth, we also demonstrate iWARP traffic with dedicated 10GbE bandwidth in order to provide the lowest latency and highest performance for these time-critical CPU communications. The iWARP cluster computing data runs through switches #1 and #2. Figure 8 shows the network configuration for the iWARP demonstration.
Figure 8: iWARP Network

The following 10GbE physical interfaces are used to create the converged Ethernet demonstration:

- 10GbE SFP+ Direct Attach Copper cables with passive and active connector interfaces provide the top-of-rack interconnect from servers and storage to the switching fabric
- 10GBASE-SR SFP+ optical transceivers connected over laser-optimized OM3 multimode fiber provide inter-switch links
- 10GBASE-T over a CAT6A structured cabling system connects servers to switch ports in the 10GBASE-T sub-network
4. Introduction to Products in the Demonstration

Amphenol: Amphenol is demonstrating a full line of high-performance 10GbE and 8Gb Fibre Channel SFP+ interconnect products. The SFP+ product line includes Direct Attach Copper cables, both passive and active, 10GbE SR transceivers, and OM3 optical cables.

Cisco: Cisco is providing high-performance 10GbE switch solutions for the Ethernet Alliance converged Ethernet fabric. The Cisco Nexus 5000 Unified Fabric solution is running iSCSI, FCoE and LAN traffic over a single 10GbE wire. Advanced Data Center Bridging features such as Priority Flow Control and bandwidth management will be featured. The Cisco Catalyst 4900M series features 10GBASE-T and SFP+ interfaces running iSCSI and LAN traffic to the larger fabric.

Dell: Dell is supporting the Ethernet Alliance booth at SC09 with a Data Center Bridging iSCSI solution. This includes a Dell EqualLogic PS Series iSCSI storage array featuring 10GbE, SFP+, Data Center Bridging, Priority Flow Control, the DCBX protocol, and Enhanced Transmission Selection.

Fulcrum Microsystems: Fulcrum Microsystems is demonstrating a Monaco reference platform that contains the FM4224, a FocalPoint 24-port 10GbE switch-router chip. The FM4224 contains all the features required for converged data center fabrics, including support for FCoE. The Monaco platform will be used to demonstrate some of these features, including Priority Flow Control and Enhanced Transmission Selection.

Intel: Intel is supporting converged Ethernet with 10GbE cards. The 10GbE SFP+ NIC will demonstrate FCoE and iSCSI with industry-standard Data Center Bridging enabling lossless delivery. The 10GBASE-T connection will show native iSCSI acceleration to storage devices, and Intel's 4-node cluster is using NetEffect Ethernet Server Cluster Adapters showing RDMA over Ethernet traffic using its low-latency iWARP technology.

Ixia: In the Ethernet Alliance demonstration, Ixia has a complete set of high-density 10GbE load modules and network test software solutions that offer a complete end-to-end data center network test system on a unified L2-7 platform. The platform supports critical infrastructure protocols such as PFC, DCBX, FCoE, CEE and FIP that deliver lossless Ethernet; new test applications have added high-scale virtualization,
full software-based traffic generation supporting end-user performance measurements, and real SCSI initiator/target emulation test capabilities in a live data center.

JDSU: JDSU is demonstrating the multi-protocol Xgig platform, which provides complete test solutions for FCoE and DCB networks. In addition, JDSU is demonstrating its application-based speed benchmarking test tools that drive high I/O load and allow product quality to be validated in real converged network environments.

Mellanox Technologies: Mellanox will demonstrate a variety of ConnectX-2 single-chip 10GbE solutions. A Low Latency Ethernet cluster will demonstrate application latency as low as 3µs. FCoE with hardware offloads running on a Data Center Bridging network demonstrates high-performance I/O consolidation. These advanced capabilities are delivered over 10GBASE-T, 10GBASE-SR, and SFP+ direct attach copper cables.

NetApp: In the Ethernet Alliance demonstration at SC09, NetApp brings convergence-ready storage for an end-to-end 10GbE infrastructure based on the FCoE standard. Moving to a unified 10GbE infrastructure in the data center enables an organization to efficiently migrate all storage traffic to Ethernet to achieve capital and operational efficiencies. The NetApp unified storage demo also includes FC and NFS traffic running on the same storage system.

Panduit: Panduit is showing High Speed Data Transport capabilities within the Ethernet Alliance Converged Network demonstration. This demonstration showcases a multi-vendor, multi-technology 10GbE network within an ecosystem of fully operating complementary active equipment. Panduit is exhibiting its High Speed Data Transport media: 10Gig™ OM3 fiber, 10Gig™ SFP+ Direct Attach Copper cable assemblies, and TX6A™ 10Gig™ UTP copper cabling systems.

QLogic: QLogic is demonstrating its single-chip 8100 Series Converged Network Adapter, QLogic's 2nd-generation CNA. The 10GbE CNA will demonstrate FCoE storage networking with full hardware offload for superior SAN performance. The initiator will be shown operating in virtual and non-virtual environments running over a converged Data Center Bridging Ethernet network.

Spirent: At the Ethernet Alliance booth at SC09, Spirent is demonstrating support for Data Center Bridging, FCoE, FIP, SCSI and RDMA traffic patterns and data center benchmarking. Spirent will also highlight Layer 2-7 virtualization/cloud computing testing that measures end-user QoE/QoS.
Tyco Electronics: Tyco Electronics is demonstrating passive and active SFP+ copper cable assemblies in support of a converged Ethernet network infrastructure. These high-speed SFP+ direct attach copper cable assemblies are compliant with SFF industry standards and fully support multiple protocols, including 10GbE, 8G Fibre Channel and FCoE. Tyco Electronics will also be exhibiting other Ethernet connections such as RJ45, MRJ21, and QSFP.

Volex: Volex is demonstrating SFP+ passive and active copper interconnect solutions for the Ethernet converged network infrastructure.

5. Testing Methodologies and Testing Results

Converged Ethernet is enabled through DCB enhancements that provide high-performance networking for storage and cluster computing data over 10GbE networks. In this demonstration, we focus on showing the following two features:

- PFC
- ETS

Activating priority flow control and bandwidth QoS requires high-bandwidth applications to saturate the converged links ISL#1 and ISL#2. As shown in Figures 4 and 5, each server runs high I/O traffic driven by the I/O generation tools. The goal is to demonstrate the high I/O performance capability of a converged network. Load generation tools send oversubscribed FCoE, iSCSI and IP traffic to guarantee congestion scenarios at the ISL links. The measurement tools verify and demonstrate the PFC and ETS features by displaying throughput information on the exhibit monitors.
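The sketch below shows the load-generation idea in miniature: blast a TCP stream at a sink and report the achieved throughput. The commercial tools in the demonstration (Ixia, Spirent, JDSU Xgig) do this per traffic class with PFC and DCBX awareness; the host address, port, and duration here are illustrative placeholders.

```python
import socket
import time

def measure_throughput(host="192.0.2.10", port=5001, seconds=10):
    """Send as much TCP data as possible for `seconds` and report goodput."""
    payload = b"\x00" * 65536
    sent = 0
    with socket.create_connection((host, port)) as sock:
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            sent += sock.send(payload)   # count bytes actually accepted
    gbps = sent * 8 / seconds / 1e9
    print(f"sent {sent} bytes, ~{gbps:.2f} Gb/s")
```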
The following diagram illustrates the location of the monitors in the converged network demonstration area.

Figure 9: Monitor Maps for Viewing the Test Results

Figure 10: Testing results displayed from monitor F
(a) MLTT displays the I/O performance
(b) Xgig TraceView shows the PFC event in the trace
(c) Xgig Load Tester demonstrates throughput variation per ETS management
(d) Xgig Expert evaluates ETS results against the DCBX setup
From monitor F, visitors can observe the following testing results:

- I/O performance of each server
- PFC frames in the captured trace
- The bandwidth of each traffic class (FCoE, iSCSI and TCP/IP) read from Xgig Expert, demonstrating the ETS concept
- Throughput variations and PFC statistics read from the Load Tester, demonstrating ETS bandwidth management over time
- The lossless property of the converged Ethernet network, shown by the Load Tester

Figure 11: Testing results displayed from monitor B

From monitor B, visitors can observe the following testing results:

- Priority flow control over time due to ETS bandwidth management
- Throughput and latency performance of the switches
- A virtualization test demonstration
Figure 12: Testing results displayed from monitor C

From monitor C, visitors can observe the following testing results:

- 10GBASE-T performance
- Converged FCoE and LAN traffic performance
- Impact of PFC on high-priority storage transactions vs. best-effort traffic
- SCSI initiator and target emulation: stateful I/O with real servers and drives
- Server virtualization: measuring performance between internal and external VMs

6. Conclusions

The Ethernet Alliance, with 14 industry-leading Ethernet solution providers, is demonstrating a highly consolidated network that carries SAN, LAN and IPC traffic in a single 10GbE network. Key enabling technologies demonstrated here include Data Center Bridging (DCB), Fibre Channel over Ethernet (FCoE), iSCSI over DCB, iWARP, and low-cost 10GbE physical interfaces: SFP+ Direct Attach Copper cables and 10GBASE-T.
This industry-first large-scale network integration gives the IT manager a preview of the next-generation data center infrastructure: consolidated and virtualized, with low power consumption and low Total Cost of Ownership (TCO).