Scaling IP Multicast on Datacenter Topologies. Xiaozhou Li, Mike Freedman





IP Multicast Applications: publish-subscribe services; clustered application servers; distributed caching infrastructures.

IP Multicast Applications: network virtualization overlays. Emerging standards (VXLAN, NVGRE) encapsulate L2 MAC frames in a UDP header and virtualize broadcast within each virtual network using IP multicast in the physical network.

Problems with IP Multicast. Reliability: NAK / gossip / error correction. Stability: rate limiting / multicast congestion control. Scalability: number of supported groups.

Why is IP multicast hard to scale? Control plane and data plane.

Challenges with scaling the control plane (IGMP + PIM): switches maintain information about all groups and periodically send queries about group membership, incurring communication and memory complexity.

Challenges with scaling the data plane: switches must maintain per-group forwarding rules, and multicast addresses cannot be aggregated by IP prefix, yet multicast forwarding tables are limited to O(100s 1000s) entries on commodity switches. Prior work scaled up the number of groups per switch: compression (FRM [SIGCOMM 06], LIPSIN [SIGCOMM 09], ESM [ToN 10]) and translation of multicast to unicast (Dr. Multicast [EuroSys 10]). This is insufficient for large-scale datacenter networks.

Our approach: leverage the unique topology of datacenter networks to scale out.

Datacenter multi-rooted tree topologies. (Figure: core, aggregation, and edge layers; edge and aggregation switches grouped into pods.)

The topology simplifies multicast tree construction.
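Concretely, because every pod can reach every core switch in a multi-rooted tree, a multicast tree is fully determined by choosing one core switch, one aggregation switch per member pod, and the edge switches hosting receivers. A minimal sketch of this construction (the switch names and tuple encoding are illustrative, not from the paper):

```python
def build_multicast_tree(core, members):
    """Build a multicast tree on a fat-tree topology.

    members: dict mapping pod id -> set of edge switches with receivers.
    Returns the tree as {switch: set of downstream switches}.
    """
    tree = {core: set()}
    for pod, edges in members.items():
        agg = ("agg", pod, 0)       # any aggregation switch in the pod works
        tree[core].add(agg)         # core forwards down into this pod
        tree[agg] = set(edges)      # aggregation forwards to member edges
    return tree

# Group with receivers under one edge switch in pod 1 and two in pod 3.
tree = build_multicast_tree(("core", 0),
                            {1: {("edge", 1, 0)},
                             3: {("edge", 3, 1), ("edge", 3, 2)}})
```

No spanning-tree computation over the full graph is needed; the layered structure makes each choice local.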

Our contributions for scalable DC multicast: 1. Partition and distribute the multicast address space, increasing the number of groups at the core and aggregation layers. 2. Enable local multicast address aggregation, increasing the number of groups in each pod. 3. Handle network failures with fast local rerouting, for quick response to topology changes.

Partitioning the multicast address space. (Figure: core switches partitioned by address prefixes 00/, 01/, 10/, 11/, with matching labels on the aggregation switches below.)


Partitioning the multicast address space: there are many switches across which to distribute multicast addresses at the core layer, but fewer switches in each pod.
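One way to realize this partitioning is a sketch along the following lines, assuming fixed-length group addresses; the 2-bit prefixes mirror the 00/ 01/ 10/ 11/ labels above, and all constants are illustrative:

```python
PREFIX_BITS = 2  # four partitions: 00, 01, 10, 11

def core_partition(group_addr, addr_bits=16):
    """Map a multicast group address to the core-switch partition that
    owns it, using the address's top PREFIX_BITS bits."""
    return group_addr >> (addr_bits - PREFIX_BITS)

# Addresses spread evenly: each partition holds ~1/4 of the groups,
# so each core switch stores only its own slice of forwarding state.
per_part = {}
for g in range(0, 2**16, 257):
    part = core_partition(g)
    per_part[part] = per_part.get(part, 0) + 1
```

Each core (or aggregation) switch then needs forwarding entries only for the groups in its own partition, multiplying the number of groups the layer can support by the number of partitions.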

A pod's address capacity is the bottleneck.

Reducing the number of entries in the bottleneck switch. (Figure: groups w, x, y, z with addresses 000-011; upper-layer rules send each address down to aggregation switch A2 and up to a core switch (C6, C10); the bottleneck switch A2 holds one rule per group: 000 to E0, E2; 001 to E1, E2, E3; 010 to E1, E2, E3; 011 to E0, E2.)

Local address translation and aggregation. (Figure: upper-layer switches translate addresses on the way down: 000 -> 100, 001 -> 110, 010 -> 111, 011 -> 101. The bottleneck switch A2 then needs only two aggregated rules: 10* to E0, E2 for groups w, z; 11* to E1, E2, E3 for groups x, y. Edge switches translate back, e.g. 100 -> 000, 101 -> 011.)

Is group aggregation easy to compute? The optimization problem (given a fixed number of aggregated groups, minimize network overhead) is the NP-hard channelization problem. Our approach: local aggregation at the bottlenecks. This yields independent local sub-problems whose computations can be distributed, reducing network and computational overhead.
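The local sub-problem at a single bottleneck switch can be sketched as follows: group addresses are renamed so that groups sharing the same output-port set fall under a common prefix, letting one wildcard rule replace many exact-match entries. This is a simplified illustration; the renaming scheme and bit widths are illustrative, not the paper's algorithm:

```python
from collections import defaultdict

def aggregate(rules, offset_bits=4):
    """rules: {group_addr: frozenset(output_ports)} at a bottleneck switch.

    Returns (translation, wildcard_rules): groups with identical port sets
    are renamed under a shared prefix, and one wildcard rule per distinct
    port set replaces the per-group exact-match rules.
    """
    by_ports = defaultdict(list)
    for group in sorted(rules):
        by_ports[rules[group]].append(group)
    translation, wildcard_rules = {}, []
    for prefix, (ports, groups) in enumerate(by_ports.items()):
        for offset, group in enumerate(groups):
            # New address = shared prefix + per-group offset.
            translation[group] = (prefix << offset_bits) | offset
        wildcard_rules.append((prefix, sorted(ports)))
    return translation, wildcard_rules

# The slide's example: four groups, but only two distinct port sets.
rules = {0b000: frozenset({"E0", "E2"}),
         0b001: frozenset({"E1", "E2", "E3"}),
         0b010: frozenset({"E1", "E2", "E3"}),
         0b011: frozenset({"E0", "E2"})}
translation, wildcard_rules = aggregate(rules)
```

Four exact-match entries collapse to two wildcard rules; because the grouping is computed per bottleneck switch, each instance is a small independent problem.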

Putting it all together...

Fault tolerance. Fast path: reroute traffic through other paths. Slow path: reconstruct the multicast tree. (See paper.)
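The fast path can be as simple as a local next-hop choice: a switch that loses the uplink on a group's tree forwards through any surviving uplink in the same pod, which still reaches the tree via some core switch. A toy sketch under that assumption (names illustrative):

```python
def pick_uplink(uplinks, failed, preferred):
    """Return the tree's preferred uplink if alive, else any survivor.

    uplinks: list of this switch's upstream neighbors.
    failed: set of uplinks currently known to be down.
    """
    if preferred not in failed:
        return preferred
    alive = [u for u in uplinks if u not in failed]
    if not alive:
        raise RuntimeError("pod disconnected from upper layers")
    return alive[0]  # local decision; no controller round-trip needed
```

Because the decision uses only locally known link state, traffic keeps flowing while the slow path rebuilds the tree in the background.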

Managing multicast using SDN. The Multicast Network Operating System senses VMs' group subscriptions and the network topology, computes local aggregation and generates the multicast tree, and controls the network by proactively installing multicast forwarding rules into switches.

Evaluation. How well do our techniques help a datacenter: support a greater number of multicast groups? Handle common multicast group dynamics? Survive moderate network failures? (See paper.)

Simulation environment: a 3-tiered fat tree with 48-port switches (27,648 end hosts). Multi-tenant environment: the number of VMs per tenant follows an exponential distribution. VM placement: either on hosts near one another, or distributed uniformly at random across the network. Multicast communication environment: most groups are small, while a few groups contain most servers. Group sizes are generated from a trace of IBM WebSphere Virtual Enterprise: mean = 51, min = 5, median = 11, max = 5000; also evaluated with group sizes uniform at random (see paper).

Local aggregation reduces bottleneck limits. (Figure: number of multicast addresses on a switch, 0-3500, for core (C), aggregation (A), edge (E), and edge with local aggregation (E la), with 100,000 groups; group size mean = 51, max = 5000; each tenant's VMs placed on nearby hosts.) Low traffic overhead (see paper).

Local aggregation increases the maximum number of groups in the datacenter network. (Figure: number of groups, 0-120,000, with no aggregation vs. local aggregation, switch capacity = 1000, under nearby and random VM placement; group size mean = 51, max = 5000.) Most entries have one outport (60% at aggregation and 80% at edge).

SDN performance is sufficient for this dynamism. Performance of commodity network controllers and switches: controllers handle 1.6 million requests per second [HotICE 2012]; switches handle 600 1000 updates per second [NEC Jan 2012]. Average updates per switch per second with group dynamics, assuming each pod has 1000 join/leave events per second: nearby VM placement: edge 42, aggregation 31, core 0.83; random VM placement: edge 42, aggregation 78, core 133.

Conclusion. Goal: support a large number of IP multicast groups in datacenters, reducing the barrier to adopting IP multicast. Contributions: leveraged the multi-rooted topology to scale out by dividing the multicast address space across multiple switches; introduced local aggregation algorithms to overcome bottlenecks in pods; proposed mechanisms for fast failover and multicast tree management, practical with today's SDN.