Data Center Convergence
Ahmad Zamer, Brocade

SNIA Legal Notice
The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations and literature under the following conditions:
- Any slide or slides used must be reproduced in their entirety without modification.
- The SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations.
This presentation is a project of the SNIA Education Committee. Neither the author nor the presenter is an attorney and nothing in this presentation is intended to be, or should be construed as, legal advice or an opinion of counsel. If you need legal advice or a legal opinion please contact your attorney. The information presented herein represents the author's personal opinion and current understanding of the relevant issues involved. The author, the presenter, and the SNIA do not assume any responsibility or liability for damages arising out of any reliance on or use of this information. NO WARRANTIES, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK.

Abstract
This session will appeal to data center managers and IT professionals seeking a fundamental understanding of data center convergence trends. The session defines convergence as it applies to data centers and reviews upcoming changes and their impact on data centers. The audience will get acquainted with the emerging trend toward flatter data center networks and learn how to deploy them in existing environments or in new deployments.

What if I Don't Converge?
- Your data center will not become obsolete
- Continue to manage and operate separate purpose-built networks
- Take advantage of new technologies
  - 16G FC for FC SANs
  - 10GbE, 40GbE, and 100GbE for enterprise networks and iSCSI SANs
- Deploy virtualization
- Upgrade management tools
[Diagram: dedicated servers running VMs attached to the LAN, alongside VMs drawing on a shared resource pool attached to the SAN]

Convergence of Storage & Data Networks
Today, data and storage networks are separate.
[Diagram: a storage network and a data network, side by side]

Convergence of Storage & Data Networks
The goal is to consolidate data and storage networks into one converged network.
[Diagram: a single converged network]

Data Center Trends
- Virtualization is driving convergence
- Virtual data centers require highly portable VMs that move between compute nodes and between distant data centers
- Data center resources need to be available on demand
- A simpler, scalable infrastructure is needed to support virtualization
- Higher VM density drives increased pressure on I/O and architectures
[Diagram: compute, network, and storage evolving from apps and early VMs (present) to portable VMs (future)]

Challenges of Today's Network
- Layer 2 performance, scalability, and reliability
  - Limitations of Spanning Tree Protocol (STP)
  - Scaling virtual server environments
  - Virtual machine mobility
- Infrastructure complexity
  - Lots of switches to manage
  - Layer 3 protocols to the edge
- Management overhead
  - High OpEx

Challenges of Scaling VM Environments
- Layer 2: a single active path; STP disables all other paths (see the sketch after this list)
- Not optimized for virtualization
- Adding virtual machines
  - Move to 10 GbE for simplicity and performance
  - Uplinks are stressed
- Increasing utilization with MSTP (a spanning tree per VLAN group)
  - Increases complexity
  - Creates multiple single-path networks
- Link failure means slow reconvergence
  - STP reconfiguration may be too slow
  - Broadcast storms stress the network
- Layer 3 as an alternative
  - Even more complexity; higher cost
  - VM mobility limited
Elements of network layers are simplified for the purpose of this presentation.
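To make the single-active-path problem concrete, here is a minimal pure-Python sketch; the two-spine/four-leaf topology, link counts, and names are hypothetical, not from the slides. It computes the tree a root bridge would leave forwarding and counts the redundant uplinks classic STP would block:

```python
from collections import deque

# Each leaf connects to both spines: 8 physical links in total.
links = [(leaf, spine) for leaf in ("leaf1", "leaf2", "leaf3", "leaf4")
         for spine in ("spine1", "spine2")]

adjacency = {}
for a, b in links:
    adjacency.setdefault(a, []).append(b)
    adjacency.setdefault(b, []).append(a)

def spanning_tree(root):
    """BFS from the root bridge: the tree edges are the only forwarding
    links; every other link is placed in the STP blocking state."""
    tree, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for peer in adjacency[node]:
            if peer not in seen:
                seen.add(peer)
                tree.add(frozenset((node, peer)))
                queue.append(peer)
    return tree

forwarding = spanning_tree("spine1")   # spine1 elected as root bridge
blocked = [l for l in links if frozenset(l) not in forwarding]
print(f"{len(links)} links installed, {len(blocked)} blocked by STP:")
for a, b in blocked:
    print(f"  {a} -- {b} (idle bandwidth)")
```

On this toy fabric, three of the eight installed links sit idle; that is the stranded bandwidth the first bullet describes.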

What Do We Need?
- L2 networks with no STP
  - All paths in the network are utilized, with traffic automatically balanced (a hashing sketch follows this slide)
  - Link failures do not result in temporary outages, and shortest paths are always used
- Networks with lossless transmission and low latency
- Networks built for the convergence of data and storage traffic: NAS, iSCSI, FCoE
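One common way to realize "all paths utilized with traffic automatically balanced" is hash-based path selection: each flow's 5-tuple is hashed onto one of the equal-cost links, so frames within a flow stay in order while the aggregate load spreads out. A small illustrative sketch, not any vendor's implementation (link names and addresses are invented; real switches do this in hardware):

```python
import zlib

# Equal-cost active links out of an edge switch (hypothetical names).
links = ["uplink-1", "uplink-2", "uplink-3", "uplink-4"]

def pick_link(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the flow's 5-tuple so every frame of a flow takes the same
    link (preserving order) while distinct flows spread across all links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return links[zlib.crc32(key) % len(links)]

# Five iSCSI sessions (TCP port 3260) from one host fan out over the fabric.
for flow in range(5):
    print(f"flow {flow} ->", pick_link("10.0.0.1", "10.0.1.9",
                                       40000 + flow, 3260))
```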

Challenges of VM Mobility
- Limited sphere of mobility
  - STP limits flexibility to a minimized, defined tree of switches
  - L3 to the aggregation layer limits mobility to a single rack
- VM migration may break network access
  - Manual adjustment of destination hosts and switches is required (see the sketch after this list)
  - Mapping services (VLANs, QoS, security) to all physical ports eases mobility, but breaks network and security best practices
- Distributed virtual switch
  - Service configuration consistency, but high overhead
- Limited insight into where VMs are running
  - Automation results in VMs existing anywhere in the HA cluster
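As a toy illustration of the manual pre-migration adjustment mentioned above, the sketch below (inventory data and names are invented) checks whether a destination host's switch port already carries the VM's VLAN; on a conventional network this bookkeeping has to happen before every move:

```python
# Hypothetical inventory: which VLANs are mapped on each host's switch port.
vm = {"name": "vm42", "vlan": 100}

host_ports = {
    "host-a": {"vlans": {100, 200}},
    "host-b": {"vlans": {200}},   # VLAN 100 was never mapped here
}

# Only hosts whose ports already carry the VM's VLAN are safe targets;
# migrating anywhere else would silently break the VM's network access.
targets = [h for h, port in host_ports.items() if vm["vlan"] in port["vlans"]]
print(f"{vm['name']} may migrate to: {targets}")   # -> ['host-a']
```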

What Do We Need?
- No physical barriers to VM migration
- Networks that are aware of where VMs are running at all times
- Data centers with the automated ability to balance utilization and increase application availability
- A fully distributed control plane
- Arbitrary topologies with automatic or zero configuration
- Support for converged NAS, iSCSI, and FCoE traffic

Challenges of Network Management
- Too many network layers, utilizing many L2/L3 protocols
  - Core: Layer 3 (BGP, EIGRP, OSPF, PIM)
  - Aggregation/distribution: Layer 2/3 (IS-IS, OSPF, PIM, RIP)
  - Access (fixed and bladed): Layer 2/3 (STP, OSPF, PLD, UDLD)
- Lots of small-form-factor switches at the edge
  - Each switch has to be managed
  - Because of their number, they need to be aggregated
- Configuration time when deploying new switches
  - Each switch has to be set up and templates have to be loaded (a sketch of this per-switch work follows)
- Separate management tools for the LAN, SAN, blade switches, NICs, and HBAs
  - Management silos do not fit in a virtualized data center
  - Drives up OpEx
Elements of network layers are simplified for the purpose of this presentation.
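To illustrate the per-switch setup cost, here is a minimal sketch (hostnames, VLANs, and the template text are all hypothetical) of the template-driven provisioning each new edge switch requires; a fabric managed as one logical switch removes most of this repetitive work:

```python
from string import Template

# A hypothetical configuration template for one edge switch.
EDGE_TEMPLATE = Template(
    "hostname $hostname\n"
    "vlan $vlan\n"
    "interface uplink1\n"
    "  description to-$aggregation\n"
)

# Three new top-of-rack switches, each needing its own rendered config.
switches = [
    {"hostname": f"edge-{n:02d}", "vlan": 100, "aggregation": "agg-01"}
    for n in range(1, 4)
]

for params in switches:
    print(EDGE_TEMPLATE.substitute(params))
    # ...then one login and one configuration session per switch.
```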

What Do We Need?
- Networks with fewer logical layers to deal with
- Switches that can be grouped together and managed as a single switch or unit
- Centralized or distributed management
- Universal or common tools to manage all converged network resources

Today's Common Architecture
- Core: Layer 3 only
- Aggregation/distribution: Layer 2/3
- Access: L2 edge switches running STP
- Devices: servers with 10 Gb CNAs (10 Gb DCB); mixed 1 Gb and 10 Gb Ethernet servers; FCoE storage; iSCSI and NAS storage (10 Gb Ethernet)
Elements of network layers are simplified for the purpose of this presentation.

Next Architecture
- Core: Layer 3 only
- Edge: VCS L2 switches, with Layer 3 to the core layer
  - Managed as one, auto-configuring, no STP
- Devices: servers with 10 Gb CNAs (10 Gb DCB); mixed 1 Gb and 10 Gb Ethernet servers; FCoE storage; iSCSI and NAS storage (10 Gb Ethernet)
Elements of network layers are simplified for the purpose of this presentation.

What Do I Get With Convergence?
- Lower costs
  - Consolidate and optimize resources; simplify configurations
- Increased performance and reliability
  - Faster, more consistent access; more with less
  - Minimize disruption and recover quickly with more resilient L2 infrastructures
- Agility and scalability
  - Deploy and re-deploy resources quickly
  - Scale based on business needs
- Improved virtualization
  - Application deployment and mobility
LESS is BETTER

Convergence Technologies
- Storage
  - FCoE: encapsulation of FC over Ethernet (see tutorial; a simplified sketch follows this slide)
  - iSCSI: encapsulation of SCSI over TCP/IP (see tutorial)
- Networking
  - DCB: lossless Ethernet (see tutorial)
  - TRILL: Layer 2 multi-path and multi-hop capabilities
  - New L2 architectures: flatter networks
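As a rough picture of what "encapsulation of FC over Ethernet" means, the sketch below wraps a stand-in Fibre Channel frame in an Ethernet frame carrying the FCoE EtherType (0x8906). The real FC-BB-5 encapsulation header (version field, reserved bits, SOF/EOF ordered sets, padding) is compressed here into two illustrative marker bytes, so this is a simplification, not the wire format:

```python
import struct

FCOE_ETHERTYPE = 0x8906   # EtherType assigned to FCoE

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a Fibre Channel frame in an Ethernet frame (simplified)."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    sof, eof = b"\x2e", b"\x41"   # illustrative start/end-of-frame markers
    return header + sof + fc_frame + eof

fc_frame = b"\x00" * 36           # stand-in for a minimal FC frame
wire = encapsulate(bytes.fromhex("0efc0000 0001".replace(" ", "")),
                   bytes.fromhex("020000000001"),
                   fc_frame)
print(f"{len(wire)} bytes on the wire, EtherType 0x{FCOE_ETHERTYPE:04x}")
```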

802.1Qbb PFC: Priority-based Flow Control
- During periods of heavy congestion:
  - Ensures delivery of critical data
  - Latency-sensitive traffic continues normal operation
[Diagram: transmit queues and receive buffers connected by eight virtual lanes, 0 through 7]
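The toy model below (buffer size and the priority-to-traffic mapping are invented for illustration) shows the essential PFC behavior: when one priority's receive buffer fills, a pause is issued for that priority alone, and traffic on the other lanes keeps flowing:

```python
# Toy PFC receiver: pausing is per priority (0-7), never the whole link.
BUFFER_LIMIT = 4   # frames per priority queue (hypothetical)

class PfcReceiver:
    def __init__(self):
        self.buffers = {prio: [] for prio in range(8)}
        self.paused = set()

    def accept(self, prio, frame):
        self.buffers[prio].append(frame)
        if len(self.buffers[prio]) >= BUFFER_LIMIT:
            self.paused.add(prio)      # pause frame for this priority only

    def drain(self, prio):
        if self.buffers[prio]:
            self.buffers[prio].pop(0)  # frame consumed by the upper layer
        if len(self.buffers[prio]) < BUFFER_LIMIT:
            self.paused.discard(prio)  # resume once the buffer drains

rx = PfcReceiver()
for i in range(6):
    if 3 not in rx.paused:             # say priority 3 carries storage traffic
        rx.accept(3, f"storage-{i}")
    rx.accept(5, f"lan-{i}")           # LAN traffic on priority 5...
    rx.drain(5)                        # ...whose consumer keeps up

print("paused priorities:", sorted(rx.paused))   # -> [3]; lane 5 never stalls
```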

802.1Qaz ETS: Enhanced Transmission Selection
- ETS enables the capability to apply differentiated treatment to the different traffic types sharing the same converged link (a scheduling sketch follows this slide)
[Diagram: eight virtual lanes mapped to priority groups, e.g. IPC/VoIP in group 7 with 10%, FCoE in group 6 with 60%, and LAN in group 0 with 30% of link bandwidth]
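A minimal sketch of the bandwidth-sharing idea, using the percentages from the slide's example (10% for IPC/VoIP in group 7, 60% for FCoE in group 6, 30% for LAN in group 0) and a weighted deficit round-robin scheduler; the scheduler here is an illustration, not the 802.1Qaz specification:

```python
# Priority groups and their ETS bandwidth shares (from the slide's example).
weights = {"group7_ipc_voip": 10, "group6_fcoe": 60, "group0_lan": 30}
FRAME_BYTES = 1000   # uniform frame size, a simplification

deficit = {g: 0 for g in weights}
sent = {g: 0 for g in weights}

for _ in range(100):                     # 100 scheduling rounds
    for group, weight in weights.items():
        deficit[group] += weight * 100   # credit proportional to the share
        while deficit[group] >= FRAME_BYTES:
            deficit[group] -= FRAME_BYTES
            sent[group] += 1             # transmit one frame for this group

total = sum(sent.values())
for group, frames in sent.items():
    print(f"{group}: {frames} frames ({100 * frames / total:.0f}% of link)")
```

With every group backlogged, the output converges on the configured 10/60/30 split; ETS is work-conserving, so an idle group's share is consumed by the others.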

TRILL: Transparent Interconnection of Lots of Links
- A proposed data center L2 protocol being developed by an Internet Engineering Task Force (IETF) working group
- Mission: "The TRILL WG will design a solution for shortest-path frame routing in multi-hop IEEE 802.1-compliant Ethernet networks with arbitrary topologies, using an existing link-state routing protocol technology." (source: IETF)
- Scope: "TRILL solutions are intended to address the problems of ... inability to multipath ... within a single Ethernet link subnet." (source: IETF)

TRILL: No STP
- Multi-path Layer 2 switching
  - Multiple active paths
  - Reclaim network bandwidth and improve utilization
- Establishes shortest paths through Layer 2 networks (sketched below)
- Fast response to failures
- Backward-compatible; connects into existing infrastructures
- Delivers multiple hops for all traffic types (including FCoE)
[Diagram: multiple active Layer 2 paths]
Elements of network layers are simplified for the purpose of this presentation.
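The sketch below (a hypothetical four-RBridge topology with unit link costs) shows the kind of computation TRILL's link-state control plane enables: shortest paths between bridges, with equal-cost alternatives kept active rather than blocked:

```python
import heapq

# Hypothetical RBridge topology; rb2 and rb3 are equal-cost transits
# between rb1 and rb4.
topology = {
    "rb1": {"rb2": 1, "rb3": 1},
    "rb2": {"rb1": 1, "rb4": 1},
    "rb3": {"rb1": 1, "rb4": 1},
    "rb4": {"rb2": 1, "rb3": 1},
}

def shortest_paths(src, dst):
    """Dijkstra variant that keeps every equal-cost path, not just one."""
    best = {src: 0}
    paths = {src: [[src]]}
    heap = [(0, src)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > best.get(node, float("inf")):
            continue                      # stale heap entry
        for peer, link_cost in topology[node].items():
            new_cost = cost + link_cost
            if new_cost < best.get(peer, float("inf")):
                best[peer] = new_cost
                paths[peer] = [p + [peer] for p in paths[node]]
                heapq.heappush(heap, (new_cost, peer))
            elif new_cost == best[peer]:
                paths[peer] += [p + [peer] for p in paths[node]]
    return paths.get(dst, [])

for path in shortest_paths("rb1", "rb4"):
    print(" -> ".join(path))   # both equal-cost paths remain usable
```

Under STP one of the two transits would be blocked; here both rb1 -> rb2 -> rb4 and rb1 -> rb3 -> rb4 stay active and can carry traffic.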

Deployment: Server Edge / Top of Rack
- Add FCoE and DCB at the edge or top-of-rack switches
- Replace top-of-rack switches, but preserve the rest of the LAN and SAN configurations
- A non-disruptive addition to existing environments
Source: FCIA and InfoStor

Deployment: End-to-End FCoE
- End-to-end FCoE, from edge to storage; utilize converged switches throughout the network
- Add native FCoE storage, which connects to converged switches
- DCB and FCoE are added to the existing infrastructure; in this environment, FC, iSCSI, NAS, and FCoE storage devices may coexist
Source: FCIA and InfoStor

Converged Data Center
- Expanded L2 deployments will be added to existing infrastructure
- New L2 deployments may not accommodate STP configurations
- New bridging devices will eventually enable integration of disparate data center networking devices
[Diagram: a LAN carrying iSCSI and NAS, new flatter L2 configurations carrying iSCSI, NAS, and FCoE, and an FC SAN]

Q&A / Feedback
Please send any questions or comments on this presentation to SNIA: tracknetworking@snia.org
Many thanks to the following individuals for their contributions to this tutorial: TBD
- SNIA Education Committee