Non-blocking Switching in the Cloud Computing Era

Contents

1 Foreword
2 Networks Must Go With the Flow in the Cloud Computing Era
3 Fat-tree Architecture Achieves a Non-blocking Data Center Network
3.1 No Bandwidth Convergence in the Fat-tree Architecture
3.2 Fat-tree Needs to Be Tailored to Meet Data Center Service Requirements
4 Huawei Switches Help Build a Non-blocking Network
5 Summary

1 Foreword

Data centers are at the core of the cloud computing era that is now beginning. How data centers can better support the fast-growing demand for cloud computing services is of great concern to data center owners. As they move towards cloud computing, customers will construct larger data centers, purchase more and higher-performance servers, and develop more applications. If data center networks cannot adapt to these changes, they will become bottlenecks to data center growth.

2 Networks Must Go With the Flow in the Cloud Computing Era

The data center is the core of a cloud platform. Growing numbers of services and applications are deployed in data centers, and demand for more of them is driving data center construction. The service and resource planning needs of cloud-computing data centers differ significantly from those of traditional data centers, and data center networks must change to meet those needs. Changes to the traffic model have been especially profound and have brought new challenges to data center networks. It is estimated that about 70% of traffic in a cloud-computing data center is east-west traffic, while about 80% of traffic in a traditional data center is north-south traffic.

Figure 1 Traffic model evolution in data centers. North-south traffic (about 80% in a traditional data center) is data forwarded between external users and internal servers, also called vertical traffic. East-west traffic (about 70% in a cloud-computing data center) is data forwarded between internal servers of the data center, also called horizontal traffic.

What is causing these changes to the traffic model? Traditional data centers mainly provide access for external users, so most traffic moves in a north-south direction. Based on how services are delivered and overall limits on egress bandwidth, bandwidth is allocated to each layer with a convergence ratio between layers. Bandwidth on the access layer is usually several times that on the aggregation layer or the core layer; common convergence ratios range from 1:3 to 1:20.
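To make the convergence ratio concrete, the short Python sketch below computes the oversubscription of each layer in a hypothetical three-tier design; the port counts and link speeds are illustrative assumptions, not figures from this document.

# Illustrative oversubscription calculation for a traditional three-tier design.
# All port counts and link speeds are hypothetical assumptions for this example.

def oversubscription(down_ports: int, down_speed_gbps: float,
                     up_ports: int, up_speed_gbps: float) -> float:
    """Return the convergence (oversubscription) ratio of a switch layer:
    total downstream bandwidth divided by total upstream bandwidth."""
    return (down_ports * down_speed_gbps) / (up_ports * up_speed_gbps)

# Hypothetical access switch: 48 x 1GE server ports, 4 x 10GE uplinks
print(f"Access layer: {oversubscription(48, 1, 4, 10):.1f}:1")

# Hypothetical aggregation switch: 24 x 10GE downlinks, 2 x 40GE uplinks
print(f"Aggregation layer: {oversubscription(24, 10, 2, 40):.1f}:1")

# Ratios multiply along the path through the tree, so the end-to-end
# convergence quickly reaches the 1:3 to 1:20 range cited above once
# several layers are stacked.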

Cloud computing increases demand for both the variety of services available and the volume of service traffic, and this increased demand has a great impact on the data center traffic model. Applications such as search and parallel computing require multiple servers to collaborate, which increases traffic between servers. Complex and ever-changing service demands make it difficult to predict traffic on servers or to plan network bandwidth. Dynamic virtual machine migration adds further complexity and drives east-west traffic volumes even higher.

Traditional data center networks are unable to meet the demands of this new traffic model. A cloud-computing data center needs a non-blocking network that provides line-speed traffic transmission between servers within the data center.

3 Fat-tree Architecture Achieves a Non-blocking Data Center Network

The fat-tree architecture, developed by Charles E. Leiserson in the 1980s, is a widely accepted non-blocking network architecture. It uses a large number of low-performance switches to construct a large-scale non-blocking network.

3.1 No Bandwidth Convergence in the Fat-tree Architecture

In a traditional tree topology, bandwidth is allocated on each layer and there is a certain convergence ratio between layers: the bandwidth at the root is much less than the total bandwidth of all the leaves. The fat-tree architecture looks more like a real tree, in which the root has more bandwidth than the leaves. This non-converging bandwidth plan is the basis for a non-blocking network.

Figure 2 Comparison between a traditional network topology and a fat-tree network topology

In a fat-tree network, the uplink bandwidth and the downlink bandwidth are the same on every node except the root, and each node on the tree forwards at line speed.
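The following minimal sketch illustrates this non-convergence property for the 2-element, 4-layer example discussed below, counting bandwidth in abstract units (one unit per terminal-facing link); the fan-out and layer count follow that example, but the code itself is only an illustration.

# Minimal sketch: per-layer bandwidth of one logical node in a 2-element,
# 4-layer fat-tree. Bandwidth is counted in abstract units (one unit per
# terminal-facing link).

def fat_tree_layers(fanout: int = 2, layers: int = 4):
    """Yield (layer, downlink, uplink) bandwidth units for one logical node
    on each layer, counting from the leaf layer upward."""
    downlink = fanout                    # a leaf switch serves 'fanout' terminals
    for layer in range(1, layers + 1):
        uplink = downlink                # no convergence: uplink equals downlink
        yield layer, downlink, uplink
        downlink = fanout * downlink     # the next layer aggregates 'fanout' nodes

for layer, down, up in fat_tree_layers():
    print(f"layer {layer}: downlink = {down} units, uplink = {up} units "
          f"(convergence ratio 1:1)")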

Figure 3 shows a 2-element, 4-layer fat-tree architecture, in which each leaf switch connects to two terminals and the switches are deployed in four layers. All switches in this network have the same hardware performance specifications.

Figure 3 A 2-element, 4-layer fat-tree network (root node, internal nodes, and leaf nodes; each fat-tree logical node is made up of one or more switches)

In Figure 3, each leaf node is a switch connecting to two terminals. Two switches constitute a logical node on Layer 2, and four switches constitute a logical node on Layer 3; the logical nodes on Layer 2 and Layer 3 are also called internal nodes. The root node consists of eight switches. The uplink bandwidth and the downlink bandwidth on each logical node are the same, so network bandwidth does not converge. Only half of the bandwidth on the root node is used for downlink access; the uplink bandwidth is reserved for future network scaling, so the network can be easily expanded.

3.2 Fat-tree Needs to Be Tailored to Meet Data Center Service Requirements

The root node in the fat-tree architecture allows for network scalability by reserving uplink bandwidth equal to the amount of downlink bandwidth in use. The scale of the whole network, however, is usually planned in advance, and there are limits (for example, the size of the equipment room) to how much a network can be expanded. The root node therefore does not need to reserve so much uplink bandwidth.

Figure 4 Simplified fat-tree architecture (a 4-layer fat-tree reduced to a 3-layer fat-tree)

Figure 4 illustrates a four-layer fat-tree architecture that has been simplified to a three-layer fat-tree. In the simplified tree, the root is responsible only for non-blocking switching in the network, and bandwidth that was reserved on the four-layer fat-tree can also be used for downlink access. The three-layer fat-tree requires fewer switches to achieve the same non-blocking switching capability.
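As a rough illustration of why the simplified tree needs fewer switches, the sketch below counts switches in both variants under the assumption that every physical switch has four ports (two down, two up, matching the 2-element example of Figure 3) and that both trees serve the same 16 terminals; these counts are inferred for illustration and do not appear in the original document.

# Rough switch count for both variants, assuming 4-port switches and 16
# line-speed terminals. The modeling is an inference for illustration only.

TERMINALS = 16
PORTS_PER_SWITCH = 4

# Four-layer fat-tree: every layer (leaf, two internal, root) must carry all
# 16 bandwidth units with switches that each pass 2 units through, and the
# root keeps half of its ports reserved as future uplinks.
four_layer = 4 * (TERMINALS // 2)                                    # 32 switches

# Simplified three-layer fat-tree: the root reserves nothing, so each root
# switch faces all four of its ports downward and passes 4 units through.
three_layer = 2 * (TERMINALS // 2) + TERMINALS // PORTS_PER_SWITCH   # 20 switches

print(f"4-layer fat-tree: {four_layer} switches for {TERMINALS} terminals")
print(f"3-layer fat-tree: {three_layer} switches for {TERMINALS} terminals")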

In theory, all switches deployed in a fat-tree network have the same hardware performance specifications. In real data center networks, however, access switches serve only a few servers each and require less forwarding capability than aggregation or core switches. Box-shaped switches are therefore usually used on the access layer, and chassis switches on the aggregation and core layers.

Figure 5 High-performance switch used at the root (compared with using the same switches at the root and the leaves)

Figure 5 shows a network with a high-performance chassis switch deployed at the root node. The high-performance switch provides larger bandwidth for the servers under the internal nodes, and the network requires fewer switches, which simplifies deployment and maintenance while improving performance.

Loops exist in a fat-tree network, but fat-tree networks cannot use the Spanning Tree Protocol (STP) to implement non-blocking switching, because link utilization is low when STP blocks redundant links. A technology that makes full use of link resources is required. Routing protocols that run on Layer 3 IP networks and the Transparent Interconnection of Lots of Links (TRILL) protocol that runs on Layer 2 Ethernet networks are the commonly used options.
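To show how multipath forwarding uses the links that STP would block, here is a minimal sketch of hash-based equal-cost multipath (ECMP) selection, the load-sharing mechanism that both Layer 3 routing and TRILL rely on; the 5-tuple hash and the four uplinks are generic assumptions, not a description of any particular switch.

# Minimal ECMP sketch: a flow's 5-tuple is hashed onto one of several
# equal-cost uplinks, so every redundant link carries traffic instead of
# being blocked. The hash scheme and uplink count are illustrative only.

import hashlib
import random
from collections import Counter

UPLINKS = [f"uplink-{i}" for i in range(4)]   # four equal-cost next hops (assumed)

def pick_uplink(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Map a flow onto one uplink: packets of one flow stay in order on a
    single link, while different flows spread across all links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha1(key).hexdigest(), 16)
    return UPLINKS[digest % len(UPLINKS)]

# Simulate 10,000 random flows: the load splits roughly evenly, so no uplink
# sits idle the way an STP-blocked link would.
random.seed(1)
load = Counter(
    pick_uplink(f"10.0.{random.randrange(256)}.{random.randrange(256)}",
                f"10.1.{random.randrange(256)}.{random.randrange(256)}",
                random.randrange(1024, 65535), 80)
    for _ in range(10_000)
)
print(dict(load))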

4 Huawei Switches Help Build a Non-blocking Network

Huawei provides next-generation data center switches, including the CE12800 chassis switch and the CE6800 and CE5800 box-shaped switches. These switches support the standard TRILL protocol and other widely used routing protocols, and they provide GE, 10GE, 40GE, and 100GE access capabilities to support non-blocking switching in data centers.

Figure 6 Huawei next-generation data center switches

(1) Each slot of the CE12800 supports 24 x 40GE or 96 x 10GE line-speed cards, providing the large forwarding capacity needed by the fat-tree architecture.
(2) The CE6800 and CE5800 provide 40GE uplink interfaces to simplify cabling, and GE or 10GE downlink interfaces to meet a variety of access needs.
(3) All CE series switches support the standard TRILL protocol. The CE12800 provides Equal-Cost Multipath (ECMP) routing with a maximum of 32 equal-cost routes, enabling a Layer 2 Ethernet fat-tree network to provide 720 Tbit/s of bidirectional forwarding performance. Networks using CE series switches are highly scalable.

Figure 7 High-performance non-blocking fat-tree network constructed with CE12800 switches

Figure 7 shows a high-performance non-blocking fat-tree network constructed with CE12800 switches. With the TRILL protocol, a large-scale Layer 2 Ethernet network can be constructed to:
(1) Provide 4608 x 40GE line-speed interfaces or 18432 x 10GE line-speed interfaces for external users.
(2) Provide 40GE interfaces for internal communication.
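As a consistency check on these interface counts, the sketch below works out the access capacity of a non-blocking two-tier fat-tree built from such chassis switches; the 12-slot chassis and the 16-switch root layer are assumptions made for illustration, and only the 24 x 40GE per slot figure comes from the text above.

# Rough capacity check for a two-tier non-blocking fat-tree of chassis
# switches. Slot count and root-layer size are assumptions for illustration.

SLOTS = 12                      # assumed number of line-card slots per chassis
PORTS_PER_SLOT = 24             # 24 x 40GE line-speed ports per slot (from the text)
ports = SLOTS * PORTS_PER_SLOT  # 288 x 40GE ports per chassis

root_switches = 16              # assumed size of the root (core) layer
leaf_uplinks = ports // 2       # non-blocking leaf: half the ports face up
leaf_downlinks = ports - leaf_uplinks

# Every leaf uplink must terminate on a root port, so the root port budget
# caps the number of leaf chassis in the fabric.
leaves = (root_switches * ports) // leaf_uplinks     # 32 leaf chassis

access_40ge = leaves * leaf_downlinks                # 4608 x 40GE access ports
access_10ge = access_40ge * 4                        # 18432 x 10GE (4 x 10GE per 40GE)
print(f"{leaves} leaf chassis -> {access_40ge} x 40GE or {access_10ge} x 10GE access ports")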

Flattened network deployments that use only core switches and top-of-rack (TOR) switches are the preferred way to reduce network delay in data centers. A fat-tree network can be deployed between the core switches and the TOR switches to form a non-blocking Layer 2 Ethernet network.

Figure 8 Flattened fat-tree network constructed with CE series switches

Figure 8 shows a flattened non-blocking fat-tree network constructed with CE12800 and CE6800 (or CE12800 and CE5800) switches. The flattened network uses the TRILL protocol to provide high-density access for 10GE/GE servers. Because the CE12800 supports 32 equal-cost routes and the TOR switches support stacking, it is also possible to build an even larger flattened fat-tree network that meets the requirements of large or super-large data center networks.

5 Summary

Cloud computing presents new challenges to data center networks, and non-blocking technology has become the solution of choice for them. Huawei next-generation data center switches offer a rich mix of service functions and the superior forwarding performance needed to build a non-blocking network for cloud-computing data centers.