Fibre Channel over Ethernet: Enabling Server I/O Consolidation




WHITE PAPER

Fibre Channel over Ethernet: Enabling Server I/O Consolidation

Brocade is delivering industry-leading FCoE solutions for the data center with CNAs, top-of-rack switches, and end-of-row FCoE blades that make server I/O consolidation a reality.

Table of Contents

Overview of CEE and FCoE
New Adapters and Switches
Benefits of Server I/O Consolidation
FCoE Usage Models
Consolidating Servers and I/O Links
Top-of-Rack Deployment
End-of-Row Deployment
Summary

The development of Fibre Channel over Ethernet (FCoE) opens the door to the real possibility of transporting more than one I/O traffic flow over a Converged Enhanced Ethernet (CEE), also called a Data Center Bridging (DCB), link. Moving Storage Area Network (SAN) and Local Area Network (LAN) traffic onto a single transport benefits data centers in many ways. It reduces the initial capital outlays for equipment (CapEx) and lowers the running costs of maintaining and cooling data centers (OpEx). CEE and FCoE combine the traditional Ethernet and storage ports into a single 10 GbE port, thereby reducing the number of ports in servers. That in turn reduces cabling and simplifies data center environments. Another value is that FCoE achieves this without disrupting existing Fibre Channel (FC) installations, as it preserves FC constructs and management tools. FCoE and CEE deployment will enhance the evolution of data centers by promoting simpler cabling, more integrated management, and greener environments. This paper provides an overview of FCoE and CEE and explains their benefits in the area of server I/O consolidation.

Overview of CEE and FCoE

FCoE is an encapsulation protocol that wraps FC frames into Ethernet frames and prepares them for transport over a Converged Enhanced Ethernet link. The FCoE protocol stack (see Figure 1) is constructed by taking the upper FC services (FC Layers 2, 3, and 4) and placing them on top of the Ethernet Physical and Data Link layers. Sandwiched between the FC and Ethernet layers is the FCoE layer, which serves as a translator between the FC and Ethernet protocols. The FCoE layer encapsulates FC traffic for transport over Converged Enhanced Ethernet and performs the reverse function on traffic flowing from the Ethernet side back to FC.

Figure 1: FCoE stack (FC-4, FC-3, FC-2, and Fibre Channel services layered over FCoE and Ethernet)

As defined by industry standards bodies, FCoE simply wraps the entire FC frame as is, without any modifications. The fact that the FC payload remains intact throughout its FCoE journey allows FCoE to preserve FC constructs and services and to utilize existing FC management applications. As a result, FCoE solutions are expected to integrate seamlessly into existing FC environments without introducing disruptions or incompatibilities. Note, however, that FCoE will not alleviate incompatibilities that already exist in products or environments.
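To make the encapsulation idea concrete, the following minimal Python sketch (not part of the original paper, and not a byte-accurate rendering of the FCoE frame format) wraps an FC frame in an Ethernet frame carrying the FCoE Ethertype, 0x8906, and unwraps it again. The function names and the simplified layout are illustrative assumptions.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE traffic


def encapsulate_fc_frame(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap an unmodified FC frame in an Ethernet frame (simplified layout).

    The FC frame is carried as is, which is why FC constructs and
    management tools continue to work end to end.
    """
    ethernet_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # A real FCoE frame also carries an FCoE header, padding, and a trailer,
    # omitted here for clarity.
    return ethernet_header + fc_frame


def decapsulate(ethernet_frame: bytes) -> bytes:
    """Reverse direction: strip the Ethernet header and return the FC frame."""
    ethertype = struct.unpack("!H", ethernet_frame[12:14])[0]
    if ethertype != FCOE_ETHERTYPE:
        raise ValueError("not an FCoE frame")
    return ethernet_frame[14:]
```

Because the FC frame travels unmodified, the same FC constructs and management applications apply on both sides of the encapsulation.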

Figure 2: Ethernet frame with FC frame insertion

As an encapsulation protocol, FCoE builds on the success of Fibre Channel in the data center and serves to extend its presence. Contrary to some beliefs, FCoE does not compete with FC; it supplements, rather than competes with, existing storage interface and networking protocols. It is hard to talk about FCoE without covering CEE, as each complements the other and they need each other to deliver their promised benefits. When distilled to its most basic form, CEE can be viewed as a set of enhancements to traditional Ethernet. Industry standards bodies have created three enhancements that provide the lossless transport of FCoE data over a CEE link (a sketch of the bandwidth-grouping idea follows this list):

Priority Flow Control (PFC). Allows the flow of higher-priority traffic to continue while lower-priority traffic may be temporarily paused when congestion occurs.

Enhanced Transmission Selection (ETS). This feature is easiest to understand in terms of bandwidth management. ETS groups each type of I/O flow on the wire into separate groups, such as storage, networking, and so on. Each group is then assigned a percentage of the available bandwidth; for example, the storage group may be assigned 60 percent of the bandwidth, while LAN traffic is given 30 percent. The purpose of ETS is to provide Quality of Service (QoS) capabilities to applications that may need them and to offer bandwidth flexibility.

Data Center Bridging Exchange (DCBX). A discovery and configuration protocol that allows the discovery of nodes connected to the fabric, configuration of the nodes, and assignment of a virtual MAC (Media Access Control) address to each node. DCBX negotiates the capabilities of the nodes during the discovery process.

Industry standards bodies will continue to add new enhancements to CEE in the near future. Two such features in process are congestion management, which adds QoS to the protocol, and Transparent Interconnection of Lots of Links (TRILL), which adds Layer 2 (L2) multipathing capabilities.
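As noted above, the clearest way to picture ETS is as a bandwidth-grouping exercise. The short Python sketch below illustrates the 60/30 split from the example; the group names, percentages, and the note that idle bandwidth may be borrowed by other groups are illustrative assumptions, not a vendor default or an actual switch configuration.

```python
# Illustrative sketch of ETS-style bandwidth grouping on a 10 GbE CEE link.
# Group names and percentages mirror the example in the text; they are assumptions.

LINK_SPEED_GBPS = 10

ets_groups = {
    "storage (FCoE)": 60,  # percent of link bandwidth guaranteed to storage
    "LAN":            30,  # percent guaranteed to general networking traffic
    "other":          10,  # remaining traffic classes
}

assert sum(ets_groups.values()) == 100, "ETS allocations should total 100%"

for group, share in ets_groups.items():
    minimum_gbps = LINK_SPEED_GBPS * share / 100
    # ETS guarantees a minimum; bandwidth left idle by one group may be used by others.
    print(f"{group:15s} guaranteed {share:3d}% (~{minimum_gbps:.1f} Gbps minimum)")
```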

New Adapters and Switches

Translating new technologies into reality always generates new products, and FCoE and CEE are no exception. The traditional Host Bus Adapters (HBAs) used in FC storage networks and the Network Interface Cards (NICs) used in Ethernet networks are being consolidated into a single new device, called a Converged Network Adapter (CNA).

Figure 3: CNA architecture

By avoiding changes to FC, FCoE CNAs are designed with HBA and NIC entities that perform the same functions found in today's HBAs and NICs. In addition, CNAs contain an FCoE entity that is internal to the adapter and not visible to the server Operating System (OS). Unlike the HBA and NIC entities, which are exposed to the server, the FCoE entity is neither exposed to the server nor seen by the server OS. In other words, when a CNA is installed in a server, the server OS does not see any FCoE devices. It does, however, see the NIC and HBA entities and expects device drivers for FC and Ethernet, so that the CNA can function as a storage and LAN interface adapter. It is important to note that, as far as the server OS is concerned, nothing FCoE-specific is needed. In other words, FCoE has no impact on the server environment when it comes to device drivers or software architecture, and it does not require anything new from the user. It is also significant that FC and NIC drivers remain unchanged from today's environment. Because the existing server device drivers are preserved, servers with CNAs can be added to today's environments without the need for any changes. IT professionals can continue to use the same drivers they are familiar with and, more importantly, continue to use the same management applications they are accustomed to working with, such as Brocade Data Center Fabric Manager (DCFM).

Figure 4: Server drivers, from HBAs and NICs to CNAs

In short, CNAs, such as Brocade CNAs when they become available, are designed with a traditional HBA entity and a NIC entity, both of which are exposed to the server OS. CNAs also include an FCoE entity that is internal and does not alter the view of the server OS. The main role of the FCoE entity is to encapsulate FC traffic and send it over the CEE links, and to do the reverse when traffic flows in the other direction. All traffic flows in and out of CNAs through the 10 GbE ports (see Figure 6).
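The division of labor just described can be summarized in a small conceptual model. This is not driver code, and the class and attribute names are invented for illustration; it only captures the point that the OS enumerates an HBA and a NIC, each bound to its existing driver, while the FCoE engine stays hidden inside the adapter.

```python
# Conceptual model only: how a CNA appears to the server OS.
# Names below are illustrative, not an actual driver or vendor API.

class ConvergedNetworkAdapter:
    def __init__(self):
        self.hba = "FC HBA entity -> uses the existing FC driver"
        self.nic = "NIC entity    -> uses the existing Ethernet driver"
        # Internal FCoE encapsulation engine; never enumerated by the OS.
        self._fcoe_engine = "FCoE entity (hidden inside the adapter)"

    def devices_visible_to_os(self):
        """The OS sees only storage and LAN devices, nothing FCoE-specific."""
        return [self.hba, self.nic]


if __name__ == "__main__":
    cna = ConvergedNetworkAdapter()
    for device in cna.devices_visible_to_os():
        print(device)
```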

There are three main functional blocks in an FCoE switch such as the Brocade 8000 Switch, as shown in Figure 5. The FC module in the Brocade 8000 is a fully functional FC switch supporting the features and services that are typically supported by Brocade FC switches. Connected to the FC switch is a bank of FC ports that are exposed externally for connecting to FC SANs. The Ethernet block of the FCoE switch is made up of a switch supporting traditional 10 GbE Ethernet plus the CEE enhancements. A bank of 10 GbE ports is exposed externally for connecting to corporate LANs. These CEE ports can be connected to other CEE ports or to traditional 10 GbE ports; in the latter case, the CEE services on the ports are downgraded as they communicate with traditional 10 GbE links.

Figure 5: FCoE switch architecture (FC switch, FCoE entity, and Enhanced Ethernet switch)

FCoE switches contain an FCoE module or entity. As in the CNAs, the FCoE entity on the switch is internal, is not visible to the outside, and has no external interfaces. The role of the FCoE entity is to extract the FC payload from FCoE traffic and forward the FC frames to the FC switch on their way to storage targets. It also encapsulates FC traffic heading to the CEE switch so that it can be transported back over CEE links. It is important to note that incoming traffic from the servers (CNAs) arrives at the switch over CEE links, while outgoing switch traffic leaves over FC ports to storage SANs and over CEE ports to LANs. Upon receiving incoming traffic, the FCoE switch inspects the Ethertype of the incoming packets and uses it to determine the destination. If the Ethertype of the packet is FCoE, the switch recognizes that the frame contains an FC payload and forwards it to the FCoE entity, where the FC frame is extracted and transmitted out over FC ports. If the Ethertype is not FCoE, the switch handles the traffic as Ethernet traffic and forwards it over CEE ports.
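The Ethertype-based split can be sketched in a few lines of Python. This is a simplified model of the decision described above, not switch firmware; a real switch performs it in hardware and also consults FC addressing and MAC tables, which are omitted here.

```python
FCOE_ETHERTYPE = 0x8906  # Ethertype carried by FCoE frames


def forward(ethernet_frame: bytes) -> str:
    """Simplified model of the FCoE switch forwarding decision."""
    # Bytes 12-13 of an untagged Ethernet frame hold the Ethertype.
    ethertype = int.from_bytes(ethernet_frame[12:14], "big")
    if ethertype == FCOE_ETHERTYPE:
        fc_frame = ethernet_frame[14:]  # FC payload handed to the FCoE entity
        return f"FC frame ({len(fc_frame)} bytes) -> FC ports toward the SAN"
    return "Ethernet frame -> CEE ports toward the LAN"
```

For example, a frame built with the encapsulation sketch shown earlier would be directed to the FC ports, while ordinary IP traffic (Ethertype 0x0800) would be forwarded out the CEE ports to the LAN.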

Figure 6: FCoE traffic flows from the server (CNA initiator) to the switch and then out to the SAN and LAN

In short, FCoE switches receive incoming traffic over CEE links but send it out to its intended destinations over FC ports to SANs and CEE ports to LANs. Since the FCoE entity in the FCoE switches is internal, FCoE switches can be seamlessly integrated into existing FC and Ethernet environments without the need for further modifications. Although nothing needs to be done to accommodate FCoE on the FC side, proper cables (CAT-6A) are needed on the CEE side.

Benefits of Server I/O Consolidation

The benefits of FCoE and CEE will be most visible to data center professionals in consolidating server ports and the related cabling and switch ports. The promised savings will be realized when the deployment of the new technologies reaches the point where economies of scale drive the cost of 10 GbE ports and related optics down to a price point that accelerates adoption. Unless the cost of 10 GbE ports and optics becomes very attractive relative to the technologies deployed today, the promised savings will remain a promise rather than a reality.

Figure 7: Server I/O consolidation
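To put rough numbers on the consolidation shown in Figure 7, here is a back-of-envelope Python sketch. The per-server counts (two NICs and two HBAs replaced by two redundant CNAs) and the rack size are assumptions chosen for illustration; actual designs vary.

```python
# Back-of-envelope illustration of server I/O consolidation per rack.
# The counts below are assumptions for illustration, not a reference design.

servers_per_rack = 20

before = {"adapters": 4, "cables": 4, "switch ports": 4}  # 2 NICs + 2 HBAs per server
after  = {"adapters": 2, "cables": 2, "switch ports": 2}  # 2 redundant CNAs per server

for item in before:
    old_total = before[item] * servers_per_rack
    new_total = after[item] * servers_per_rack
    print(f"{item:12s}: {old_total:3d} -> {new_total:3d} "
          f"({old_total - new_total} fewer per rack)")
```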

The positive impact of FCoE and CEE deployment will be reflected in many aspects of data center operations and management:

Cost of acquisition. Upfront capital outlays are reduced with FCoE and CEE, since they require fewer server adapters (CNAs), fewer cables, and fewer switch ports (and switches) compared to the technologies used today. The financial benefits will be magnified in the future as 10 GbE costs go down. However, it is important to note that initially FCoE and CEE will not be low-cost technologies.

Operating costs. FCoE and CEE offer relief in the area of power consumption and cooling, one of the highest expenditures in data centers, as fewer hardware components in each server result in lower power consumption and cooling needs. The same applies on the switch side. Other operating costs that will start to decrease with FCoE and CEE are maintenance, SKU management costs, and other asset management expenses.

Decreased power. A single adapter delivering the performance and capabilities of a separate HBA and NIC reduces adapter sprawl in servers and, with it, power draw.

Other cost savings. When FCoE is deployed, data centers can continue to use existing management tools and avoid education or retraining costs. Less often cited are the time savings realized from dealing with simpler configurations and a much less cluttered environment, a result of reduced cabling and cable management needs. Troubleshooting and diagnostics can be performed more easily in environments in which technicians can identify and correct problems in shorter times. The reality is that simpler cabling helps reduce the potential for human error.

FCoE Usage Models

The benefits of FCoE and CEE deployment are best noted in three areas: consolidation using server virtualization and link consolidation, top-of-rack switch deployment, and end-of-row deployment using FCoE blades for backbone environments.

Consolidating Servers and I/O Links

Server virtualization is one of the drivers of higher I/O needs in servers. CNAs with 10 GbE links provide virtualized applications with the IP links they expect and allow data centers to deploy multiple applications on individual servers. In addition to consolidating server I/O links over CEE ports, virtualization gives data centers the opportunity to consolidate applications onto a smaller number of servers while benefiting from added application mobility. IT managers will also take advantage of the 10 GbE links to consolidate slower Ethernet and FC connections onto the much faster connections offered by the new generations of CNAs. The combination of reduced server and I/O link counts results in noticeable savings and greater efficiencies in data centers.

Top-of-Rack Deployment

This deployment model offers the clearest example of how data centers can take advantage of the benefits of FCoE and CEE built into the Brocade 8000 Switch. The access layer is the most suitable area for this deployment model, either in a greenfield data center or as an addition to existing configurations. Today, most server racks are connected to company LANs using a pair of Ethernet switches for redundancy. The server racks are also connected to corporate SANs via redundant pairs of FC switches.

Using the Brocade 8000, data centers can replace the Ethernet and FC switches with a single pair of FCoE switches that simultaneously connect the rack to corporate LANs and SANs. Savings in switch ports and rack space are evident, since rack servers are currently deployed with separate LAN and SAN connections and adapters. The Brocade 8000 connections eliminate the need for separate ports and cables, as both storage and LAN traffic travel over the same cables, utilizing a single server port and a single switch port for both types of traffic. The result is a reduced number of server adapters and much simplified cabling inside the server cabinet.

Figure 8: Top-of-rack deployment with two Brocade 8000 Switches connecting servers to the LAN and SAN

End-of-Row Deployment

Driving FCoE and CEE capabilities to the backbone completes the reach of the new technologies from the servers all the way to the core of the network. Brocade FCoE blades for the Brocade DCX and DCX-4S Backbones make that reach a reality. Brocade FCoE blades deliver the features and capabilities found in traditional FCoE top-of-rack switches, but rely on other blades to provide the FC ports used for connecting to SANs. With the FCoE blades, Brocade offers customers a complete solution that enables them to use the Brocade DCFM framework to manage the organization's resources from server initiators all the way to storage targets.

Summary

FCoE and CEE are two evolutionary technologies that help turn the data center into a more efficient and simpler-to-manage environment. They also bring the industry closer to its goal of building greener data centers. The ability to transport SAN and LAN traffic over a single wire opens the door to true server I/O consolidation and reduced cabling clutter. FCoE deployments in the near and mid term will preserve existing purpose-built SANs and LANs, as FCoE switches continue to direct I/O traffic to its appropriate, but separate, destinations. Because FCoE is designed to encapsulate FC without altering its frames, FCoE solutions can be added to existing environments without any disruption, preserving investments and extending their value. FCoE also allows customers to enhance the value of their management applications by extending their reach to FCoE solutions. Brocade offers full FCoE and CEE solutions that extend from server CNAs to top-of-rack switches and end-of-row FCoE/CEE blades, with a unified management tool that brings all of the elements together. With the new FCoE and CEE offerings, Brocade extends its data center industry leadership and continues its drive to transform the data center of the future.

Corporate Headquarters San Jose, CA USA T: +1-408-333-8000 info@brocade.com
European Headquarters Geneva, Switzerland T: +41-22-799-56-40 emea-info@brocade.com
Asia Pacific Headquarters Singapore T: +65-6538-4700 apac-info@brocade.com

© 2015 Brocade Communications Systems, Inc. All Rights Reserved. 05/15 GA-WP-1299-01 ADX, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, HyperEdge, ICX, MLX, MyBrocade, OpenScript, The Effortless Network, VCS, VDX, Vplane, and Vyatta are registered trademarks, and Fabric Vision and vADX are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned may be trademarks of others.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment features, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.