Understanding Fundamental Issues with TRILL




WHITE PAPER
TRILL in the Data Center: Look Before You Leap
Understanding Fundamental Issues with TRILL
Copyright 2011, Juniper Networks, Inc.

Table of Contents

Executive Summary
Introduction
What is TRILL?
What Do Modern Data Centers Need?
Fundamental Issues with TRILL
Early Adopters of TRILL Face Vendor Lock-In
Conclusion
About Juniper Networks

Table of Figures

Figure 1: RBridges interconnected through an Ethernet cloud

Executive Summary

Juniper Networks has a long history of supporting industry standards that are mature and enable customers to effectively solve their networking problems. Therefore, before jumping on the Transparent Interconnection of Lots of Links (TRILL) bandwagon, Juniper took a close look at how the protocol proposes to solve the fundamental issues plaguing the data center, specifically as they relate to connecting infrastructure designed for cloud computing. This white paper outlines the results of those reviews and presents Juniper's conclusions about TRILL.

Introduction

TRILL was originally intended to fix specific scaling deficiencies inherent to Ethernet bridging while retaining its plug-and-play nature. While its primary application was for large distributed campus networks or large Layer 2 networks, TRILL is now being heralded by some experts as a data center solution. However, the requirements of the modern data center (exceedingly high levels of performance, any-to-any connectivity, virtual machine (VM) mobility, and more) are dramatically different from the needs of the campus.

The current hype around TRILL is primarily generated by legacy vendors who have a significant presence in the data center, an installed base they are desperate to protect. Vendors promoting TRILL in the data center will eventually discover that there are a number of problems it doesn't solve, and they will attempt to fix those problems through patches, resulting in the same level of complexity that disqualified legacy Ethernet as a data center solution in the first place. Customers who implement TRILL in their data centers will pay dearly for this incremental approach, since they are required to replace their existing infrastructure, which merely increases their costs. While TRILL is a step in the right direction, it increases complexity and prevents fundamental advances that could truly improve the data center network.
Customers considering TRILL must ask themselves: Why should we invest in a technology that only supports Layer 2, one that introduces additional cost and complexity to the network, when the majority of our data center application traffic is Layer 3?

What is TRILL?

TRILL is a network protocol invented by Radia Perlman to remove certain deficiencies of bridged Ethernet networks in large Layer 2 campuses. The context of TRILL's creation is important, because it speaks directly to the problems it can and cannot solve. The following quote, from a 2006 interview with Perlman in Network World, is revealing:

"A couple of years back there was this Boston Globe article about a hospital network melting down, and in the middle of it was mentioned the spanning tree algorithm... eventually we tracked down the company providing the switches and indeed it was a giant bridged network... One of the things I'm trying to do now, given that we're stuck with IP, is come up with something that gives you the advantages of bridging, so it can be all zero-configuration within a campus and all look like one big prefix and not be confined to just transmitting data along the spanning tree."

TRILL has since been taken up by the IETF in an attempt to create a new standard. In the meantime, various implementations by various Ethernet switching vendors include a number of differences, even as they operate under the cloak of a single open standard called TRILL. Ironically, contrary to what a standard is supposed to do, these different implementations don't even interoperate with one another.

TRILL introduces a new encapsulation layer, a set of associated control plane protocols, and a new network device type, called an RBridge, that sits midway between a router and a bridge. The encapsulation layer adds a hop count, a VLAN tag, and ingress and egress identifiers, along with a few additional control bits. Figure 1 shows a typical TRILL-based network deployment in the data center.
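The encapsulation fields mentioned above can be made concrete in a few lines of code. The sketch below packs the 6-byte TRILL header as laid out in RFC 6325 (2-bit version, multi-destination bit, 5-bit options length, 6-bit hop count, and 16-bit egress and ingress RBridge nicknames); the function name and defaults are illustrative assumptions, not taken from any vendor implementation, and the VLAN tag and outer Ethernet header are carried separately from this header.

```python
import struct

def build_trill_header(hop_count, egress_nickname, ingress_nickname,
                       multi_dest=False, op_length=0, version=0):
    """Pack the 6-byte TRILL header (per RFC 6325, Section 3).

    The first 16 bits hold V (2) | Reserved (2) | M (1) | Op-Length (5)
    | Hop Count (6); the egress and ingress RBridge nicknames follow,
    16 bits each, in network byte order.
    """
    first16 = ((version & 0x3) << 14          # V
               | (int(multi_dest) << 11)      # M (multi-destination) bit
               | ((op_length & 0x1F) << 6)    # Op-Length
               | (hop_count & 0x3F))          # Hop Count
    return struct.pack("!HHH", first16, egress_nickname, ingress_nickname)

# A unicast frame with 20 hops remaining, egress nickname 0x0001,
# ingress nickname 0x0002:
hdr = build_trill_header(20, 0x0001, 0x0002)
print(hdr.hex())  # → 001400010002
```

Note that this is only the TRILL shim itself; in a deployment, these 6 bytes sit between the outer Ethernet header and the original customer frame, which is the per-packet overhead the paper refers to.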

Figure 1: Typical TRILL-based network in the data center (edge RBridges over a Layer 2-only core, connecting compute and storage servers)

In a typical TRILL application, an IP packet encapsulated in an Ethernet frame would have two additional encapsulation layers added, a TRILL header and an outer Ethernet header, assuming that an Ethernet cloud were used to connect the RBridges.

What Do Modern Data Centers Need?

Today's data centers are complex, inefficient, unreliable, power hungry, expensive, and difficult to operate. To a large extent, these problems arise from limitations in the networking technologies used to interconnect computing, storage, and services inside the data center. A careful examination of these problems shows that any networking technology designed for the data center today must provide the ability to treat infrastructure resources as fully fungible pools that can be dynamically and rapidly partitioned without the infrastructure or the applications knowing details about each other.¹ This capability is the key to simplicity, efficiency, and security. The networking technology in the data center must also be able to connect resources to each other at very high speeds with no apparent limitations in the network. This capability is the key to high performance as well as efficiency improvements. Finally, the technology must be able to scale to provide these capabilities across a wide range of data centers of all sizes without requiring a redesign.

These high-level capabilities can be broken down into seven specific characteristics that a data center network technology must provide:

1. Any-to-any connectivity with fairness and full non-blocking: A set of interfaces must be able to send and receive packets to and from any other set with no restrictions or preplanning. Specifically, this includes the ability to absorb and adapt to rapid changes in the rate of transmission, the number of active senders, and the number of active receivers.
In all cases, the full bandwidth of a target interface (or interfaces) is shared equally by all contending interfaces, instantaneously and continuously, including the special case of one interface sending to just one other interface; this equal sharing is referred to as fairness. Finally, the only apparent congestion is due to the limited bandwidth of ingress and egress interfaces. Any congestion of egress interfaces does not affect ingress interfaces sending to uncongested interfaces; this noninterference is referred to as non-blocking.

2. Low latency and jitter: The technology must provide low interface-to-interface latency, on the order of a handful of microseconds. Latency should also grow slowly with offered traffic load. Finally, the technology should provide low jitter, which is the instantaneous variation of latency.

¹ In a multi-tenant data center, it is completely impractical to assume that the infrastructure knows about the details of any application, or that applications know about details of the infrastructure.
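The fairness property described above can be illustrated with a small calculation. The sketch below is a generic max-min fair allocation, not drawn from TRILL or any Juniper product; the sender names and bandwidth figures are purely illustrative.

```python
def max_min_fair_share(capacity, demands):
    """Split an egress link's capacity among contending senders.

    Senders asking for less than an equal share keep their demand; the
    leftover bandwidth is re-split among the rest (max-min fairness).
    """
    remaining = float(capacity)
    shares = {}
    pending = dict(demands)
    while pending:
        equal = remaining / len(pending)
        # Senders whose demand fits under the current equal share.
        small = {s: d for s, d in pending.items() if d <= equal}
        if not small:
            # Everyone left is bottlenecked: split the remainder equally.
            for s in pending:
                shares[s] = equal
            return shares
        for s, d in small.items():
            shares[s] = d
            remaining -= d
            del pending[s]
    return shares

# Three senders contend for a 10 Gb/s egress port:
print(max_min_fair_share(10, {"A": 2, "B": 6, "C": 6}))
# → {'A': 2, 'B': 4.0, 'C': 4.0}
```

The point of the example: sender A's modest demand is fully met, and the two heavy senders split the remainder equally, which is exactly the "equal sharing by all contending interfaces" the requirement asks the fabric to enforce continuously in hardware.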

3. No packet drops under congestion: When the instantaneous rate of incoming packets exceeds the instantaneous rate of outgoing packets, the technology must signal the source (e.g., servers or VMs) causing the congestion to slow down sufficiently so that the arrival rate matches the departure rate. This throttling should occur rapidly and continuously to ensure that input and output rates are matched in a smooth manner.

4. Linear cost and power scaling: The cost and power consumption of the network infrastructure must increase linearly with the number of server ports N it needs to support. This is in sharp contrast to traditional approaches, where cost and power grow nonlinearly as they scale up.

5. Support of virtual networks and services: The technology must implement virtual Layer 2 and Layer 3 networks to support multiple tenants, each running multitier applications. Complex security and services requirements should be supported by the insertion of Layer 4-7 processing at any point in an application's workflow. Full mobility of VMs from any interface to any other interface must also be supported. Virtual network support should not compromise any of the other properties.

6. Modular distributed implementation that is highly reliable and scalable: The technology should be built using modular hardware and software components that are distributed and federated to provide high levels of redundancy. The modular implementation should be designed to permit increasing or decreasing the number of interfaces while the system is running, a property called dynamic scalability.

7. Single logical device: Despite its distributed implementation, the technology should act as a single, logical packet switching device. The complexity of its distributed implementation should be transparent without removing any of the desirable properties such as high reliability or dynamic scalability.
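The throttling described in item 3 is, in essence, a feedback loop that matches a source's sending rate to the egress departure rate. The toy sketch below shows one step of such a loop; it is purely illustrative, implies no specific standard (such as IEEE 802.1Qau congestion notification), and the gain and ramp-up values are arbitrary assumptions.

```python
def throttle(source_rate, arrival_rate, departure_rate, gain=0.5):
    """One step of a congestion feedback loop.

    If packets arrive faster than they can depart, signal the source to
    slow down in proportion to the excess; otherwise let it ramp up.
    """
    excess = arrival_rate - departure_rate
    if excess > 0:
        return max(0.0, source_rate - gain * excess)  # back off
    return source_rate * 1.05                         # gentle ramp-up

# A sender at 10 Gb/s hitting an 8 Gb/s egress port is told to slow down:
rate = throttle(source_rate=10.0, arrival_rate=10.0, departure_rate=8.0)
print(rate)  # → 9.0
```

For this loop to prevent drops, the feedback must reach the sender while the congestion is still occurring, which is precisely the time-constant concern raised later in this paper.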
Fundamental Issues with TRILL

The capabilities provided by any networking technology span four dimensions:

- The data plane is responsible for forwarding bits.
- The control plane is responsible for producing the data structures that the data plane uses to forward packets.
- The services plane specifies how Layer 4-7 services are provided.
- The management plane supports the way the networking technology is operated by the facilities manager.

While TRILL does address some of the issues for the data and control planes, it does nothing for the services and management planes. Specifically, when considered as a solution for data center networking problems, TRILL exhibits the following deficiencies:

Layer 2 only: TRILL completely ignores IP packet forwarding. This is a problem, since most packets in the data center originate as IP. One-armed routers attached to the TRILL core perform IP forwarding (unicast and multicast), where they quickly become a bottleneck. The additional cost of these IP routers is never mentioned by vendors pushing TRILL.

Multitier architecture with poor economics: Any TRILL-based solution has at least two tiers, edge and core, where packet processing occurs. Packet processing in the RBridges is more complex than Ethernet forwarding because it involves additional processing steps on both ingress and egress. In real-world deployments, packet processing in the core is just as complex as with traditional core Ethernet switches as requirements that go beyond Layer 2 unicast are added. In large networks, the core itself requires multiple levels of forwarding. This additional complexity means that TRILL can never compete economically with a clean-sheet design that is not burdened with this extra processing.

Multicast scaling issues: While TRILL does reduce the Layer 2 unicast forwarding state in the network core, it does not reduce the multicast forwarding state.
Additionally, the control plane complexity associated with setting up multicast trees is bound to present severe operational and troubleshooting challenges in real deployments.

Large broadcast domains: Since crossing VLANs is expensive in a TRILL solution, TRILL inherently pushes a network designer to artificially increase the size of VLANs. This has two consequences: first, large VLANs create flooding issues that are hard to handle; second, this artificial inflation diminishes the effectiveness of VLANs as a mechanism for separating resources belonging to different applications, organizations, or tenants. Security is a grave concern in TRILL-based architectures.

Congestion management remains unsolved: TRILL says nothing about how congestion is handled in the network. In fact, it makes the congestion problem much harder to solve by creating a network with multiple layers of forwarding. The TRILL core elements are in a different address plane than the TRILL edge elements and endpoints, so the core has no idea how to forward congestion messages to endpoints. Additionally, TRILL networks will encounter the same fundamental problem that early Backward Congestion Notification (BCN) schemes faced: the time constant of the congestion control loop is greater than the time constant of the congestion events themselves.

Multi-tenancy out of scope: To date, TRILL has not tackled the problem of overlapping IP and Ethernet addresses between organizations, nor has it addressed how to provide for a large number of VLANs. Both of these are prerequisites for data centers that host multiple tenants.

Operational simplicity of a fabric: TRILL edges and cores don't present the abstraction of a single logical device. Instead, these TRILL network elements must be configured and managed as individual devices, posing a significant operational burden to data center operators. Orchestration is also a challenge in a TRILL architecture, as the complexity of the network is fully exposed to management applications.

Host of other inefficiencies: Dual-homing servers, flooding, VM mobility within and across data centers, multi-pathing for Layer 3 and Fibre Channel over Ethernet (FCoE): these are just some of the areas where TRILL is insufficient.

Early Adopters of TRILL Face Vendor Lock-In

Vendors frequently hijack the standards-setting process in order to give the impression of openness for a new technology. In fact, legacy vendors have no interest in creating open standards, because this would create a level playing field for their competitors.
Customers, on the other hand, like open standards, so the challenge for vendors is to operate under the guise of an open standard while making incompatible or proprietary changes to the technology. Such is the case with TRILL. Incumbent vendors are encouraging customers to rip and replace their existing network gear with proprietary implementations of TRILL, promising that they will be compatible with the eventual TRILL standard in the future. In doing so, incumbent vendors lock customers into their proprietary version of the protocol, effectively forcing them to buy their gear in the future, since the proprietary technology won't cleanly interoperate with the final standard.

Conclusion

The problem that TRILL sets out to solve was relevant in the days of "switch when you can, route when you must," a time when routing was so much more expensive than switching that Ethernet networks had to be made as large as possible. Today, forwarding technology has advanced to the point where IP packets can be forwarded as easily and efficiently as Ethernet packets, so there is no need to maximize the scale of Layer 2 networks.

In the final analysis, the problems that TRILL was designed to solve represent only a fraction of those faced by modern data centers. While TRILL is a step in the right direction, it favors an incremental approach that increases complexity while leaving many issues unresolved. In its present form, TRILL is an immature technology that unnecessarily creates new issues without substantially solving any of the problems in the data center. Juniper will reevaluate this technology when its scope has been broadened to address the networking challenges found in cloud data centers.

About Juniper Networks

Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers, Juniper Networks delivers the software, silicon and systems that transform the experience and economics of networking. The company serves customers and partners worldwide. Additional information can be found at www.juniper.net.

Corporate and Sales Headquarters: Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA. Phone: 888.JUNIPER (888.586.4737) or 408.745.2000. Fax: 408.745.2100. www.juniper.net

APAC Headquarters: Juniper Networks (Hong Kong), 26/F, Cityplaza One, 1111 King's Road, Taikoo Shing, Hong Kong. Phone: 852.2332.3636. Fax: 852.2574.7803

EMEA Headquarters: Juniper Networks Ireland, Airside Business Park, Swords, County Dublin, Ireland. Phone: 35.31.8903.600. EMEA Sales: 00800.4586.4737. Fax: 35.31.8903.601

To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or an authorized reseller.

Copyright 2011 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice. 2000408-002-EN May 2011