B4: Experience with a Globally-Deployed Software Defined WAN TO APPEAR IN SIGCOMM 13




Google's Software Defined WAN

Traditional WAN Routing: treat all bits the same; 30%~40% average utilization; high cost of bandwidth and high-end routing gear.

Traffic Priority: (1) user data copies to remote data centers for availability/durability (lowest volume, most latency sensitive, highest priority); (2) remote storage access for computation over distributed data sources; (3) large-scale data push synchronizing state across multiple data centers (highest volume, least latency sensitive, lowest priority). Centralized Traffic Engineering (TE): drive links to near 100% utilization; fast, global convergence on failures.

SDN Architecture: Switch hardware forwards traffic; it does NOT run complex control software. OpenFlow controllers (OFC) maintain network state based on network control application directives and switch events, and instruct switches to set forwarding entries. Central applications (logically centralized) exert central control over the entire network.

Switch Design

Routing

Applications are aggregated into Flow Groups: {source site, dest site, QoS}. Bandwidth function: linear in fair share with slope given by the application weight, becoming flat once the required bandwidth is reached (discussed later).
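The shape described above can be sketched as code: each application's bandwidth function is linear in the fair share with slope equal to its weight and flat at its demand, and an FG's function is the sum over its applications. The weights and demands below are hypothetical, chosen only for illustration; they are not the paper's values.

```python
def app_bw_function(weight, demand):
    """Per-application bandwidth function: linear in the fair share
    (slope = weight), flat once the required bandwidth is reached."""
    def bw(fair_share):
        return min(weight * fair_share, demand)
    return bw

def fg_bw_function(apps):
    """A Flow Group's bandwidth function is the composition (sum)
    of the functions of the applications aggregated into it."""
    funcs = [app_bw_function(w, d) for w, d in apps]
    def bw(fair_share):
        return sum(f(fair_share) for f in funcs)
    return bw

# Hypothetical (weight, demand) pairs, not taken from the paper:
fg1 = fg_bw_function([(10, 15), (5, 5)])   # e.g. FG1 = App1 + App2
print(fg1(0.5))   # → 7.5
print(fg1(100))   # → 20 (both applications saturated)
```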

TE Optimization Algorithm Target: Achieve max-min fairness. Tunnel Selection selects the tunnels to be considered for each FG. Tunnel Group Generation allocates bandwidth to FGs using bandwidth functions to prioritize at bottleneck links. Tunnel Group Quantization changes split ratios in each FG to match the granularity supported by switch hardware tables.

Tunnel Selection: find the k shortest tunnels in the topology graph. Example (assume k = 3): FG[1]: A→B, with T[1][1] = A→B, T[1][2] = A→C→B, T[1][3] = A→D→C→B. FG[2]: A→C, with T[2][1] = A→C, T[2][2] = A→B→C, T[2][3] = A→D→C.
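A minimal sketch of this selection step: enumerate simple paths by depth-first search and keep the k shortest by hop count. This brute-force enumeration is an assumption for illustration (a production system would use something like Yen's algorithm); the topology matches the four-site example above.

```python
def k_shortest_tunnels(graph, src, dst, k=3):
    """Return the k shortest simple paths (by hop count) from src to
    dst. graph: dict mapping node -> set of neighbor nodes."""
    paths = []
    def dfs(node, path):
        if node == dst:
            paths.append(path)
            return
        for nbr in graph[node]:
            if nbr not in path:          # keep paths simple (no loops)
                dfs(nbr, path + [nbr])
    dfs(src, [src])
    return sorted(paths, key=len)[:k]

# Topology of the slide's example: sites A, B, C, D.
topo = {'A': {'B', 'C', 'D'}, 'B': {'A', 'C'},
        'C': {'A', 'B', 'D'}, 'D': {'A', 'C'}}
print(k_shortest_tunnels(topo, 'A', 'B'))
# → [['A', 'B'], ['A', 'C', 'B'], ['A', 'D', 'C', 'B']]
```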

[Figure: composition of FG-level bandwidth functions; FG1 = App1 + App2, FG2 = App3.]

Tunnel Group Generation: allocate bandwidth to FGs based on demand and priority. 1. Initialize each FG with its most preferred tunnel. 2. Allocate bandwidth by giving an equal fair share to each preferred tunnel. 3. Freeze tunnels containing any bottlenecked link. 4. If every tunnel is frozen, or every FG is fully satisfied, end. 5. Select the most preferred non-frozen tunnel for each unsatisfied FG; go to 2.
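The loop above can be sketched as a waterfilling allocation. This is a deliberately simplified model: it pours demand onto each FG's most preferred non-frozen tunnel in small equal increments and freezes any tunnel crossing a saturated link, whereas B4 advances the fair share along piecewise-linear bandwidth functions so that application weights matter. The fixed `step` and the data shapes are assumptions for illustration.

```python
def tunnel_group_generation(fgs, tunnels, capacity, step=0.01):
    """Simplified sketch of the allocation loop.
    fgs: dict fg -> demand; tunnels: dict fg -> list of tunnels in
    preference order (each tunnel a tuple of nodes); capacity: dict
    (u, v) -> link capacity. Returns dict fg -> {tunnel: bandwidth}."""
    def links(t):
        return list(zip(t, t[1:]))

    alloc = {fg: {t: 0.0 for t in tunnels[fg]} for fg in fgs}
    used = {link: 0.0 for link in capacity}
    frozen = set()

    def preferred(fg):
        for t in tunnels[fg]:            # listed in preference order
            if t not in frozen:
                return t
        return None

    demand = dict(fgs)                   # remaining demand per FG
    while True:
        active = [fg for fg in fgs if demand[fg] > 0 and preferred(fg)]
        if not active:                   # all satisfied or fully frozen
            break
        for fg in active:
            t = preferred(fg)
            inc = min(step, demand[fg])
            alloc[fg][t] += inc
            demand[fg] -= inc
            for l in links(t):
                used[l] += inc
                if used[l] >= capacity[l] - 1e-9:   # link bottlenecked
                    frozen.update(tt for f2 in tunnels
                                  for tt in tunnels[f2] if l in links(tt))
    return alloc
```

For example, with ample capacity each FG is simply satisfied on its most preferred tunnel; freezing only kicks in once a shared link fills up.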

Tunnel Group Generation example. Topology: A-B 10G, B-C 10G, A-C 10G, A-D 5G, D-C 5G.

Round 1: FG1 prefers A→B, FG2 prefers A→C; fair share 0.9; FG1 get/need 10 / 20; FG2 get/need 0.45 / inf; bottleneck link: A→B; frozen tunnels: A→B, A→B→C.
Round 2: FG1 prefers A→C→B, FG2 prefers A→C; fair share 3.33; FG1 get/need 8.33 / 10; FG2 get/need 1.21 / inf; bottleneck link: A→C; frozen tunnels: A→C→B, A→C.
Round 3: FG1 prefers A→D→C→B, FG2 prefers A→D→C; fair share 1.67; FG1 get/need 1.67 / 1.67; FG2 get/need 3.34 / inf; bottleneck links: all; frozen tunnels: all.

Result: FG1 (20/20): A→B: 10, A→C→B: 8.33, A→D→C→B: 1.67. FG2 (5/inf): A→C: 1.67, A→B→C: 0, A→D→C: 3.34. (FG1 = App1 + App2, FG2 = App3.)

Tunnel Group Quantization: determining the optimal split is an integer programming problem. Greedy approach: 1. Down-quantize (round down) each split. 2. Add a remaining quantum to a non-frozen tunnel that keeps the solution max-min fair (i.e., the one with minimum fair share). 3. If quanta remain, go to 2.

Tunnel Group Quantization example (assume the quantum is 0.5). FG2 (A→C) over tunnels A→C, A→B→C, A→D→C: split 0.3 : 0.0 : 0.7; down-quantize: 0.0 : 0.0 : 0.5; add remaining: 0.0 : 0.0 : 1.0. FG1 (A→B) over tunnels A→B, A→C→B, A→D→C→B: split 0.5 : 0.4 : 0.1; down-quantize: 0.5 : 0.0 : 0.0; add remaining: 0.5 : 0.5 : 0.0.
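The rounding mechanics can be sketched as follows. Note the simplification: leftover quanta are handed out by largest fractional remainder, whereas B4 places each quantum on the non-frozen tunnel that keeps the overall allocation max-min fair given link capacities. The remainder rule happens to reproduce FG1's result above (0.5 : 0.5 : 0.0) but not FG2's, where capacity constraints push the whole split onto A→D→C.

```python
def quantize_splits(split, quantum=0.5):
    """Round each split ratio down to a multiple of the quantum, then
    distribute the leftover quanta by largest fractional remainder.
    A stand-in for B4's capacity-aware, max-min-fair placement."""
    down = [int(s / quantum + 1e-9) * quantum for s in split]
    remaining = round((sum(split) - sum(down)) / quantum)
    order = sorted(range(len(split)),
                   key=lambda i: split[i] - down[i], reverse=True)
    for i in order[:remaining]:
        down[i] += quantum
    return down

print(quantize_splits([0.5, 0.4, 0.1]))  # → [0.5, 0.5, 0.0]
print(quantize_splits([0.3, 0.0, 0.7]))  # → [0.5, 0.0, 0.5]
```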

Impact of Quantization

TE as Overlay: rather than one integrated, centralized service combining routing and traffic engineering, B4 layers centralized Traffic Engineering on top of standard per-switch routing (IS-IS), using prioritized switch forwarding table entries. A "big red button" can disable the TE service and fall back to shortest-path forwarding at any time.
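The prioritized-entry idea can be sketched in a few lines: TE entries sit at higher priority than the standard routing entries, and disabling TE falls straight through to shortest-path forwarding. Table shapes and names here are illustrative assumptions, not B4's actual data structures.

```python
def lookup(dst, te_table, routing_table, te_enabled=True):
    """Prioritized forwarding lookup: prefer the TE entry for dst
    when TE is enabled; otherwise fall back to the standard routing
    (IS-IS shortest-path) entry. The 'big red button' corresponds to
    te_enabled=False."""
    if te_enabled and dst in te_table:
        return te_table[dst]
    return routing_table[dst]

te = {'siteB': 'tunnel_via_C'}       # hypothetical TE entry
rt = {'siteB': 'direct_port'}        # hypothetical shortest-path entry
print(lookup('siteB', te, rt))                     # → tunnel_via_C
print(lookup('siteB', te, rt, te_enabled=False))   # → direct_port
```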

Ops Dependency: to avoid packet drops, not all ops can be issued simultaneously. Rules: configure a tunnel at all affected sites before sending TG and FG ops; a tunnel cannot be deleted until all referencing entries are removed. Enforcing dependencies despite network delays/reordering: the OFC maintains the highest session sequence number and rejects ops with a smaller sequence number; the TE server retries any rejected op after a timeout.
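The sequence-number rule above can be sketched as a tiny state machine: remember the highest sequence number seen and reject anything older, so delayed or reordered ops cannot clobber newer state. Class and method names are illustrative, not from the paper.

```python
class OFCSession:
    """Sketch of the OFC's session-sequence-number check."""
    def __init__(self):
        self.highest_seq = -1

    def apply_op(self, seq, op):
        """Reject ops older than the newest one already seen; the TE
        server is expected to retry rejected ops after a timeout."""
        if seq < self.highest_seq:
            return 'rejected'
        self.highest_seq = seq
        return 'applied'

ofc = OFCSession()
print(ofc.apply_op(1, 'configure tunnel'))  # → applied
print(ofc.apply_op(3, 'install TG'))        # → applied
print(ofc.apply_op(2, 'delayed op'))        # → rejected (stale)
```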

Deployment Statistics: 13 topology changes per minute; after trimming maintenance updates, 0.2 changes per minute; edge add/delete events, 7 changes per day (the TE algorithm runs on an aggregated topology view). Takeaways: topology aggregation significantly reduces path churn and system load; even with topology aggregation, edge removals happen multiple times a day; WAN links are susceptible to frequent port flaps and benefit from dynamic centralized management.

Impact of Failures: centralized TE is not a cure-all.

Link Utilization

Packet Drops

Future Work: Overheads in hardware programming: each multipath table operation is typically slow (~100 ms), forming the principal reliability bottleneck. Scalability and latency of the packet I/O path between OFC and OFA. OpenFlow might support two communication channels to separate high-priority operations from throughput-oriented operations.

Thanks Q&A