TRILL Large Layer 2 Network Solution

Contents

1 Network Architecture Requirements of Data Centers in the Cloud Computing Era
2 TRILL Characteristics
3 Huawei TRILL-based Large Layer 2 Network Solution
4 Solution Characteristics and Advantages
5 Summary

2013-03-18 Huawei confidential. No spreading without permission.

In the cloud computing era, a distributed architecture is used to handle operations on mass data, such as storing, mining, querying, and searching data in a data center. This distributed architecture leads to a huge amount of collaboration between servers, and therefore a high volume of east-west traffic. In addition, virtualization technologies are widely used in data centers; virtualization greatly increases the computing workload on each physical server and multiplies its throughput several times over. To maximize service reliability, reduce IT and O&M costs, and increase service deployment flexibility in a data center, virtual machines (VMs) must be able to migrate dynamically within the data center. These data center service requirements drive the evolution of the data center network architecture and give birth to the large Layer 2 network architecture. To build large Layer 2 networks, Transparent Interconnection of Lots of Links (TRILL) is introduced. The following sections analyze the network architecture requirements of data centers in the cloud computing era and present the Huawei TRILL-based solution, which helps build data center networks that meet cloud computing service requirements.

1 Network Architecture Requirements of Data Centers in the Cloud Computing Era

[Figure: traditional data center architecture (Layer 2 within each POD, Layer 3 between PODs) versus the new-generation architecture (a single large Layer 2 domain)]

(1) Smooth VM migration

Server virtualization is a core cloud computing technology and is extensively used. To maximize service reliability, reduce IT and O&M costs, and increase service deployment flexibility in a data center, VMs must be able to migrate dynamically within the data center instead of being restricted to one aggregation or access switch. A traditional data center often uses the Layer 2+Layer 3 network architecture, in which Layer 2 networking is used within a point of delivery (POD) and Layer 3 networking is used between PODs. In such an architecture, VMs can only be migrated within a POD. If VMs need to be migrated across a Layer 2 boundary, their IP addresses must be changed; if the load balancer configuration then remains unchanged, applications will be interrupted. Virtualization technologies are used in a data center to improve resource efficiency and utilize large numbers of idle servers. Data center operators therefore require VMs to be migratable anywhere within a data center to make full use of existing resources. This can be implemented by using TRILL to build a large Layer 2 network.

(2) Non-blocking, low-latency data forwarding

The data center traffic model in the cloud computing era differs from the traditional carrier traffic model. Most traffic in a data center is east-west traffic between servers, and the data center network acts as the bus between servers. To keep services running properly, non-blocking, low-latency forwarding is required, and flat fat-tree topologies must be supported to make optimal use of the multiple forwarding paths between switches. In traditional Layer 2 networking, xSTP is required to eliminate loops and prevent Layer 2 broadcast storms, allowing only one of N links to be used to forward data. This method reduces bandwidth efficiency and fails to meet data center network requirements in the cloud computing era.
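The bandwidth cost of the "one of N links" restriction above can be sketched with simple arithmetic. The figures below (four 10 Gbit/s uplinks per access switch) are illustrative assumptions, not values from this document:

```python
def usable_bandwidth_gbps(uplinks: int, link_gbps: float, ecmp: bool) -> float:
    """Aggregate usable uplink bandwidth for one access switch.

    With xSTP, all but one of the equal uplinks are blocked to break loops;
    with TRILL-style ECMP, every uplink can carry traffic.
    """
    active = uplinks if ecmp else 1  # xSTP leaves a single forwarding link
    return active * link_gbps

stp_bw = usable_bandwidth_gbps(uplinks=4, link_gbps=10, ecmp=False)
trill_bw = usable_bandwidth_gbps(uplinks=4, link_gbps=10, ecmp=True)
print(stp_bw, trill_bw)  # 10 vs 40
```

Under these assumed numbers, three quarters of the installed uplink capacity sits idle with xSTP, which is the inefficiency the fat-tree plus multipath design is meant to remove.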
(3) Multi-tenancy

In the cloud computing era, a physical data center is shared by multiple tenants instead of being used exclusively by one. Each tenant has a virtual data center instance, so that every tenant appears to have exclusive servers, storage resources, and network resources. To ensure security, data traffic between tenants must be isolated. Currently, in traditional Layer 2 networking, the number of supported tenants is limited by the number of VLANs, which is at most 4K. As cloud computing develops, the number of tenants supported by future data center network architectures must exceed 4K.

(4) Large-scale network

In the cloud computing era, a large data center must potentially support millions of servers. To implement non-blocking data forwarding, several hundred or even thousands of switches are required for a large data center. In such large-scale networking, the networking protocols must prevent loops, and when a fault occurs on a network node or link, the network must converge quickly so that services recover rapidly.
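The tenant-ID figures quoted in this document, 4K VLANs today and the 16M tenants later attributed to 24-bit labels, follow directly from the field widths (the VLAN ID is a 12-bit field):

```python
# Tenant-ID address space as a function of field width.
VLAN_ID_BITS = 12    # 802.1Q VLAN ID
WIDE_LABEL_BITS = 24  # the 24-bit tenant label discussed later in this paper

vlan_tenants = 2 ** VLAN_ID_BITS        # 4096, i.e. the "4K" ceiling
wide_label_tenants = 2 ** WIDE_LABEL_BITS  # 16777216, i.e. "16M" tenants
print(vlan_tenants, wide_label_tenants)
```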

2 TRILL Characteristics

TRILL is defined by the IETF to implement Layer 2 routing by extending IS-IS. TRILL has the following characteristics:

(1) High-efficiency forwarding

On a TRILL network, each device regards itself as the source node and calculates the shortest path to all other nodes using the shortest path tree (SPT) algorithm. If multiple equal-cost links are available, load balancing can be implemented when unicast routing entries are generated. In data center fat-tree networking, forwarding data along multiple paths maximizes network bandwidth efficiency. A traditional Layer 2 network using xSTP to prevent loops is like a single-lane highway, whereas a TRILL network is more like a multi-lane highway. On a TRILL network, data packets are forwarded along the shortest path and, where multiple equal-cost paths exist, load-balanced using equal-cost multipath (ECMP). Therefore, TRILL networking greatly improves the data forwarding efficiency of data centers and increases network throughput.

(2) Loop prevention

TRILL automatically selects the root of a distribution tree, and each routing bridge (RB) uses that root as the source node to calculate the shortest paths to all other RBs. In this way, the multicast distribution tree shared by the entire network is built automatically. This distribution tree connects all nodes on the network and prevents loops when Layer 2 unknown-unicast, multicast, or broadcast packets are transmitted. When the network topology changes, the route convergence time may differ between nodes; reverse path forwarding (RPF) checks discard packets received on an incorrect interface, preventing loops. In addition, the TRILL header carries a Hop-Count field, which reduces the impact of temporary loops and effectively prevents broadcast storms.
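The SPT-with-ECMP computation described in (1) can be sketched as a Dijkstra run that keeps every equal-cost first hop instead of just one. The two-spine topology below is an illustrative assumption, not a configuration from this document:

```python
import heapq
from collections import defaultdict

def spf_with_ecmp(graph, source):
    """Dijkstra that records ALL equal-cost next hops from `source` to each node,
    in the spirit of the IS-IS SPF run each RB performs."""
    dist = {source: 0}
    nexthops = defaultdict(set)  # destination -> set of first hops from source
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in graph[u].items():
            nd = d + cost
            # First hop toward v: v itself if u is the source, else inherit u's.
            hops = {v} if u == source else nexthops[u]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                nexthops[v] = set(hops)
                heapq.heappush(heap, (nd, v))
            elif nd == dist[v]:
                nexthops[v] |= hops  # equal-cost path: keep both next hops
    return dist, nexthops

# Two leaf RBs connected through two spine RBs, all link costs 1 (assumed).
topo = {
    "leaf1": {"spine1": 1, "spine2": 1},
    "leaf2": {"spine1": 1, "spine2": 1},
    "spine1": {"leaf1": 1, "leaf2": 1},
    "spine2": {"leaf1": 1, "leaf2": 1},
}
dist, nexthops = spf_with_ecmp(topo, "leaf1")
print(dist["leaf2"], sorted(nexthops["leaf2"]))  # -> 2 ['spine1', 'spine2']
```

Because leaf1 ends up with two equal-cost next hops toward leaf2, traffic between the leaves can be load-balanced across both spines, which is the multi-lane-highway behavior described above.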
(3) Fast convergence

On a traditional Layer 2 network, the Ethernet header does not carry a TTL field, and the xSTP convergence mechanism is inadequate. As a result, when the network topology changes, network convergence is slow and may take tens of seconds, failing to provide high service reliability for data centers. TRILL uses routing protocols to generate forwarding entries, and the TRILL header carries a Hop-Count field that limits the impact of temporary loops. These advantages allow fast network convergence when a fault occurs on a network node or link.

(4) Convenient deployment

The TRILL protocol is easy to configure because many configuration parameters, such as the nickname and system ID, can be generated automatically, and many protocol parameters can retain their default settings. On a TRILL network, the TRILL protocol manages unicast and multicast services in a unified manner, unlike a Layer 3 network, where separate sets of protocols such as IGP and PIM manage unicast and multicast services respectively. Moreover, a TRILL network is still a Layer 2 network, with the same characteristics as a traditional Layer 2 network: plug-and-play and ease of use.

(5) Easy support of multi-tenancy

Currently, TRILL uses the VLAN ID as the tenant ID and isolates tenant traffic using VLANs. In the initial phase of the cloud computing industry and large Layer 2 network operation, the maximum of 4K VLAN IDs is not a bottleneck. As the industry develops, however, TRILL must break through this 4K tenant-ID limitation. To address this issue, TRILL will use fine-grained labels to identify tenants. A fine-grained label is 24 bits long and can support a maximum of 16M tenants, which meets tenant scalability requirements.

(6) Smooth migration

Networks using MSTP, the traditional Layer 2 Ethernet technology, can seamlessly connect to a TRILL network. Servers connected to MSTP networks can communicate at Layer 2 with servers connected to a TRILL network, and VMs can be migrated within the TRILL network.

The preceding TRILL characteristics enable a TRILL-based network architecture to better meet data center service requirements in the cloud computing era.

3 Huawei TRILL-based Large Layer 2 Network Solution

[Figure: Huawei TRILL-based large Layer 2 network solution]

The preceding figure shows the Huawei TRILL-based large Layer 2 solution. TRILL allows VMs to migrate dynamically across a wide area of the data center. The TRILL network functions as the bus of the data center: to keep services running properly, it must efficiently forward traffic between servers, and between servers and Internet users.

4 Solution Characteristics and Advantages

TRILL is the highway of a data center network and has the following characteristics:

(1) Flexible networking, supporting TOR networking and EOR networking

CloudEngine series switches all support the TRILL protocol, TOR networking, and EOR networking, with all boards supporting TRILL forwarding. This ensures networking flexibility. To increase reliability, access switches support stacking. The TRILL protocol can be deployed from the core switches all the way to the edge access switches of a data center network. The TRILL network functions as a bridging fabric in the data center, providing a stable core network architecture. As cloud computing technology develops, physical servers, IPv4/IPv6 gateways, firewalls, and load balancing devices can easily be added to the network.

(2) Flexible gateway deployment

TRILL supports two gateway deployment modes:

Deploying Layer 3 gateways independently: Layer 3 gateways and core RBs are directly connected. In a large data center, multiple gateways can be deployed to implement VLAN-based load balancing.

Deploying Layer 3 gateways on core RBs: The virtual system (VS) technology virtualizes a core RB into two VSs, one running the Layer 3 gateway function and the other running the TRILL protocol.

Gateways forward traffic between Internet users and virtual servers in a data center, as well as inter-segment traffic within the data center. The gateway deployment mode therefore directly affects the forwarding performance of north-south traffic and of inter-segment east-west traffic. Deploying Layer 3 gateways on core RBs suits small- and medium-scale data centers, while deploying them independently suits large-scale data centers. Select the gateway deployment mode according to service requirements.

(3) Various operation, maintenance, and management methods

CloudEngine series switches support a variety of operation, maintenance, and management methods. Administrators can log in through Telnet, SNMP, and NETCONF to VLANIF interfaces of RBs across the TRILL network.
The management network can share the same physical network with the TRILL service network. To check link connectivity within a TRILL network, TRILL ping can be used for fault location.

(4) More multicast distribution trees

Link load balancing is performed on TRILL unicast traffic using the ECMP algorithm. The ingress RB of multicast traffic can select different multicast distribution trees for different VLANs to implement link load balancing for multicast traffic. A large Layer 2 network built from CloudEngine series switches supports a maximum of four multicast distribution trees, allowing multicast traffic to be spread across different tree roots and enabling fine-grained load balancing of multicast traffic across the entire TRILL network.

(5) Large network scale and optimal performance

CloudEngine series switches can build a super-large Layer 2 network that supports more than 500 nodes, converges in less than 500 ms after a node or link fault, and load-balances traffic among a maximum of 16 links. These capabilities meet the network scale and performance requirements of large data centers.

(6) Two data center evolution modes

TRILL supports two data center evolution modes:

Mode 1: If a data center uses MSTP, the traditional Layer 2 Ethernet technology, and the devices newly added for network expansion support TRILL, CloudEngine series switches can seamlessly connect to the existing MSTP network, forming a large Layer 2 network. This mode maximizes the return on investment.

Mode 2: The type of network that each user VLAN connects to can be specified, enabling smooth data center evolution. In new data centers where devices support TRILL, TRILL can initially be deployed only for specific services in some VLANs while MSTP carries the remaining services. This provides a way to trial TRILL and gain operational experience before migrating all services to the TRILL network.

5 Summary

A TRILL network functions as the high-speed bus of a data center and offers many advantages, including smooth VM migration, non-blocking and low-latency data forwarding, multi-tenancy, and large network scale. A TRILL network can therefore meet data center service requirements in the cloud computing era. Huawei works together with its partners to promote the application, development, and evolution of TRILL technology in data centers.