Leveraging Advanced Load Sharing for Scaling Capacity to 100 Gbps and Beyond




Leveraging Advanced Load Sharing for Scaling Capacity to 100 Gbps and Beyond
Ananda Rajagopal, Product Line Manager, Service Provider Solutions, Foundry Networks
arajagopal@foundrynet.com

Agenda

Why Load Sharing?
- Need more bandwidth
  - Utilize the investment in existing infrastructure
  - Ability to add bandwidth in small increments
  - Cost-effectively add bandwidth
- Increased protection
  - End-to-end protection with diverse paths
  - 1+N link protection
  - Avoid idling backup paths
- Want to upgrade to the next bandwidth level
  - Transport carrier not offering higher-bandwidth links
  - 100GbE not available yet

Must Scale Beyond 10-Gigabit Ethernet, Now
[Figure: example backbone topology with routers R1-R7]
- Ever-increasing demand for bandwidth in backbones, transit links, Internet peering points, and data centers
- 100 Gigabit Ethernet is still ~18-24 months away
- OC-768 POS is an unaffordable alternative for many providers
- Equal-Cost Multi-Path (ECMP) with n x 10GE Link Aggregation Groups (LAG) is a far more affordable way of scaling capacity

Factors Affecting Load Sharing
- Protocols: determine multiple paths for ECMP
  - Routing protocols: IGP, BGP
  - Provide path diversity
- Link aggregation: offers multiple links for load sharing
  - Link aggregation/bundling/trunks
  - Provides link diversity
- Data forwarding: decides how packets are load-shared
  - Load-balancing algorithm
  - Provides efficient utilization
  - Fields in the packet used for load balancing
  - Ability to tune to various traffic types

Factors Affecting Load Sharing: Additional Factors for MPLS
- Protocols: determine multiple paths for ECMP
  - MPLS protocols: LDP, RSVP-TE
  - Provide LSP path diversity
- MPLS payload mapping: assigns MPLS services to LSPs
  - BGP/MPLS-VPN and IPv4/v6 mapping to LSPs
  - PW mapping to LSPs
  - Provides per-service/per-customer load balancing

Methods to boost capacity

Routing Protocols: ECMP
- Routing protocols determine multiple equal-cost paths to a destination
- IGP (IS-IS/OSPF) ECMP:
  - Affects paths taken by IP traffic
  - Affects paths taken by MPLS LSPs
    - LDP paths follow the IGP topology
    - RSVP-TE LSPs follow the IGP and IGP-TE topologies
- BGP ECMP:
  - Affects paths taken by IP traffic
  - Affects paths taken by IP and IP-VPN traffic in MPLS networks
    - Multiple equal-cost BGP next hops reachable via diverse LSPs
    - Multiple LSP paths to a BGP next hop

Routing Protocols: ECMP Considerations
- Number of ECMP paths per prefix supported by a router
  - More paths give better path diversity
- Support for ECMP combined with link aggregation
  - It is very common for each path to contain LAG groups
  - LAG bandwidth changes should optionally be reflected automatically in Layer 3 interface metrics, allowing routing protocols to choose better paths
- Does the router support even distribution over any number of paths?
  - For better utilization of network resources, it must support even distribution for any number of paths (2, 3, 4, 5, 6, ...), as in the sketch below
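To make the "even distribution over any number of paths" point concrete, here is a minimal Python sketch (not vendor code; the flow keys and path counts are made up): hash a flow key, take it modulo the path count, and the resulting shares stay roughly equal whether there are 2, 3, 5, or 6 paths.

```python
# Minimal sketch (not vendor code) of even flow distribution over an arbitrary
# number of ECMP paths: hash the flow key, then take it modulo the path count.
import hashlib

def path_index(flow_key: tuple, num_paths: int) -> int:
    """Map a flow to one of num_paths next hops; works for 2, 3, 5, ... paths."""
    digest = hashlib.sha256(repr(flow_key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Synthetic flows (source IP, destination IP) just to show the spread.
flows = [(f"10.{i // 256 % 256}.{i % 256}.1", "192.0.2.1") for i in range(20000)]
for n in (2, 3, 5, 6):
    counts = [0] * n
    for f in flows:
        counts[path_index(f, n)] += 1
    print(n, counts)  # counts come out roughly equal for every path count
```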

MPLS Signaling Protocols: ECMP
- MPLS signaling allows multiple LSPs to the same destination
- RSVP-TE: selects a path for an LSP from multiple equal-cost paths that satisfy the LSP constraints, as determined through CSPF
  - Typical tie-breaking criteria (illustrated in the sketch below):
    - Hops: pick the path with the fewest hops (lower probability of failure)
    - Least-fill: pick the path with the highest available bandwidth (even spread of traffic)
    - Most-fill: pick the path with the lowest available bandwidth (leaves room for higher-bandwidth LSPs)
- LDP: allows a prefix to be reachable through multiple equal-cost label paths
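The following is a small, hypothetical Python sketch of the three tie-breaking policies named above (hops, least-fill, most-fill); the CandidatePath structure and its fields are assumptions for illustration, not an RSVP-TE implementation.

```python
# Minimal sketch of CSPF tie-breaking among equal-cost candidate paths.
from dataclasses import dataclass

@dataclass
class CandidatePath:
    name: str
    hops: int
    avail_bw_gbps: float  # minimum available bandwidth along the path

def pick_path(paths, policy):
    if policy == "hops":
        return min(paths, key=lambda p: p.hops)           # fewest hops
    if policy == "least-fill":
        return max(paths, key=lambda p: p.avail_bw_gbps)  # most headroom
    if policy == "most-fill":
        return min(paths, key=lambda p: p.avail_bw_gbps)  # pack LSPs tightly
    raise ValueError(f"unknown policy: {policy}")

candidates = [CandidatePath("A", 3, 40.0),
              CandidatePath("B", 4, 70.0),
              CandidatePath("C", 3, 10.0)]
for policy in ("hops", "least-fill", "most-fill"):
    print(policy, "->", pick_path(candidates, policy).name)  # A, B, C
```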

IP Mapping to LSPs: For IPv4/v6 Routing and BGP/MPLS-VPNs
[Figure: Router X picks from equal-cost LSPs A, B, and C toward Routers Y and Z, both of which advertise Network N via BGP]
- Typical mapping criteria used (the first is sketched below):
  - Assign a prefix to a single LSP: better predictability
  - Map all prefixes within a VRF to a single LSP: better operator control
  - Load-share on a per-flow basis: better traffic distribution
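As a rough illustration of "assign a prefix to a single LSP", the sketch below hashes each prefix onto one of a set of equal-cost LSPs so the mapping is deterministic and predictable; the LSP names are hypothetical and this is not any particular router's behavior.

```python
# Minimal sketch: a prefix is always bound to the same LSP (predictability).
import hashlib
import ipaddress

lsps = ["LSP-A", "LSP-B", "LSP-C"]   # equal-cost LSPs toward the BGP next hop

def lsp_for_prefix(prefix: str) -> str:
    key = str(ipaddress.ip_network(prefix)).encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return lsps[h % len(lsps)]

for p in ("203.0.113.0/24", "198.51.100.0/24", "2001:db8::/32"):
    print(p, "->", lsp_for_prefix(p))  # same prefix, same LSP, every time
```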

PW Mapping to LSPs: For VPWS and VPLS
[Figure: PE1 and PE2 connected by a pseudowire between local circuits, picking from equal-cost LSPs A and B]
- Typical mapping criteria used (the first is sketched below):
  - Bind the PW to the least-used LSP (the LSP carrying the fewest PWs): good distribution of traffic
  - Bind the PW to the LSP with the most available bandwidth or the same class of service: useful for services with dedicated-bandwidth requirements
  - Explicitly bind the PW to an LSP: better operator control
  - Split PW traffic across multiple LSPs: better distribution of traffic based on flows
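A minimal sketch of the "bind PW to least-used LSP" criterion, assuming a simple per-LSP pseudowire counter; the LSP and PW names are hypothetical and this is not any vendor's provisioning logic.

```python
# Minimal sketch: bind each new pseudowire to the LSP carrying the fewest PWs.
from collections import Counter

pw_count = Counter({"LSP-A": 3, "LSP-B": 1})   # current PWs per equal-cost LSP

def bind_pw(pw_name: str) -> str:
    lsp = min(pw_count, key=pw_count.get)      # least-used LSP wins
    pw_count[lsp] += 1
    print(f"{pw_name} -> {lsp}")
    return lsp

for pw in ("pw-cust-1", "pw-cust-2", "pw-cust-3"):
    bind_pw(pw)   # new PWs land on LSP-B until the counts even out
```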

Link Aggregation/Bundling
- A very popular method of bundling multiple physical links between two devices
- Used on both the local (CE-PE) side and the network (PE-PE) side
- Higher-layer protocols are typically unaware of the link bundling
- Unlike ECMP, it does not increase routing overhead

Link Aggregation Considerations
- IEEE 802.3ad LAG (LACP) support
  - Dynamic configuration
  - Increased availability
- Static Link Aggregation Group (LAG) support
  - No need for a control protocol
  - Works in multi-vendor scenarios
- LAG capacity
  - Number of links in a LAG: the more links, the higher the bandwidth of the LAG
  - Number of LAG groups

Link Aggregation Considerations: The Path to 100 Gbps, Today
- LAG must support bundling of multiple 10GE links to reach 100 Gbps
  - A large number of links in a LAG is the way to scale network capacity until 100GE arrives
  - Bundling 10GE links is compatible with the current optical infrastructure
- LAG should scale beyond 100 Gbps to 200 and 300 Gbps
  - 100 Gbps is already not sufficient in certain networks
  - 100GE is still ~24 months away
  - Bundling 10GE links is the way to scale beyond 100 Gbps

Methods for efficient utilization

Load Sharing in the Forwarding Plane
- Two common schemes for load-sharing traffic over equal-cost paths/links:
  - Packet-based
  - Flow-based

Packet-Based Forwarding
- Each packet in turn is sent on the next link
[Figure: packets 1-4 sprayed alternately across two links]
- Perfect load balancing
- Potential packet-reordering issues
- Possible increase in latency and jitter for some flows

Flow-Based Forwarding
- Identifies packets as flows, based on packet content such as the IP header
- Keeps each flow on the same path, maintaining packet ordering
[Figure: packets of Flow A and Flow B interleaved on ingress, with each flow kept on its own link on egress]
- Hashing is one of the most popular load-sharing schemes for flow-based forwarding

Load Sharing for L3 Flows (1)
- IPv4 & IPv6 flows based on source and destination IP addresses
  - Works in most scenarios
  - Issue: all traffic between two hosts is relegated to one path, which can lead to over-utilization of that path
- Packet fields inspected: source IP address, destination IP address
[Figure: IM, Telnet, HTTP, and FTP packets between Host A and Host B all take the same path]

Load Sharing for L3 Flows (2)
- IPv4 & IPv6 flows based on L2, L3, and L4 information
  - Better traffic distribution for applications between two hosts (contrasted in the sketch below)
- Packet fields inspected: source MAC address, destination MAC address, VLAN ID, source IP address, destination IP address, IP protocol / IPv6 next header, source TCP/UDP port, destination TCP/UDP port
[Figure: IM, Telnet, HTTP, and FTP packets between Host A and Host B now utilize different paths]
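The contrast between the two hashing schemes can be shown with a short sketch: hashing on source/destination IP alone pins every application between Host A and Host B to one link, while including the L4 ports lets them spread. All addresses, ports, and link counts here are made up for illustration.

```python
# Minimal sketch: 2-field (SIP/DIP) hashing vs. L3+L4 hashing for one host pair.
import hashlib

def pick_link(fields: tuple, num_links: int = 4) -> int:
    h = hashlib.sha256(repr(fields).encode()).digest()
    return int.from_bytes(h[:4], "big") % num_links

host_a, host_b = "10.1.1.10", "10.2.2.20"
apps = {"HTTP": 80, "FTP": 21, "Telnet": 23, "IM": 5222}

for app, dport in apps.items():
    two_field  = pick_link((host_a, host_b))
    five_tuple = pick_link((host_a, host_b, 6, 40000 + dport, dport))
    print(f"{app:6s} SIP/DIP-only -> link {two_field}, L3+L4 -> link {five_tuple}")
# SIP/DIP-only pins every application to the same link;
# adding the L4 ports lets the applications spread across links.
```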

Load Sharing for L3 Flows (3): Layer 4 Usage Considerations
- Fragmentation
  - Issue: if the payload is fragmented into multiple IP packets, only the first fragment carries the L4 information, which may lead to packet-ordering issues
  - Solutions (sketched below):
    - Set the TCP segment size lower than the IP MTU to avoid fragmentation
    - Load-balance fragmented packets using L3 information only
    - Load-balance non-fragmented packets using L3 & L4 information
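A minimal sketch of the fragmentation-aware field selection described above, assuming a simplified packet representation (not a real parser): fragments are hashed on L3 fields only, so every fragment of a datagram follows the first one onto the same link.

```python
# Minimal sketch: hash fragments on L3 only, unfragmented packets on L3+L4.
import hashlib

def pick_link(fields: tuple, num_links: int = 8) -> int:
    h = hashlib.sha256(repr(fields).encode()).digest()
    return int.from_bytes(h[:4], "big") % num_links

def hash_fields(pkt: dict) -> tuple:
    is_fragment = pkt["frag_offset"] > 0 or pkt["more_fragments"]
    if is_fragment:
        return (pkt["sip"], pkt["dip"], pkt["proto"])                          # L3 only
    return (pkt["sip"], pkt["dip"], pkt["proto"], pkt["sport"], pkt["dport"])  # L3 + L4

first = {"sip": "10.1.1.1", "dip": "10.2.2.2", "proto": 17,
         "sport": 5004, "dport": 6000, "frag_offset": 0, "more_fragments": True}
second = dict(first, frag_offset=185, sport=None, dport=None)  # later fragments carry no L4 header
print(pick_link(hash_fields(first)), pick_link(hash_fields(second)))  # same link for both fragments
```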

Use Case 1: Load Sharing in an IP Network, IPv4 Throughput Test
[Figure: IXIA-1 - Router-1 - Router-2 - IXIA-2 test topology, with 16,000 routes advertised at each hop]
- IXIA-1 to Router-1: 32 unbundled 1GE links = 32 routed interfaces; transmit 64-byte packets at 98% line rate
- Router-1 to Router-2: 32-port 1GE Link Aggregation Group = 1 routed interface; packets load-balanced across the 32-port LAG
- Router-2 to IXIA-2: 32 unbundled 1GE links = 32 routed interfaces; Ixia receives packets across the 32 links
- Traffic: random distribution, 127,000 source IP addresses, 127,000 destination IP addresses
- Measurement: monitor throughput on IXIA-2

Use Case 1: IPv4 Throughput Test Results
- Virtually no packet loss
- Load sharing can deliver almost line-rate throughput for any random traffic distribution
[Figure: Tx stream from Ixia-1 port 1 (the other 31 streams transmit at the same rate) and Rx streams on Ixia-2 ports 1-32]

Load Sharing for L2 Flows (1)
- Flows based on Layer 2 information in the packet
- However, IP packets between two routers will always take the same path
- Layer 2 packet fields inspected: source MAC address, destination MAC address, VLAN ID, inner VLAN ID, EtherType
[Figure: IM, Telnet, HTTP, and FTP packets between Host A and Host B all take the same path between Router X and Router Y, since the MAC addresses in the packets are those of the routers themselves]

Load Sharing for L2 Flows (2): Consideration for IP Packets
- Detect IPv4/v6 packets within L2 flows for better distribution (see the sketch below):
  - Flows based on Layer 2/3/4 information for IPv4/v6 packets
  - Flows based on Layer 2 information only for other packets
- Layer 2 packet fields inspected: source MAC address, destination MAC address, VLAN ID, inner VLAN ID, EtherType
- IPv4/v6 packet fields inspected: source MAC address, destination MAC address, VLAN ID, source IP address, destination IP address, IP protocol / IPv6 next header, source TCP/UDP port, destination TCP/UDP port
[Figure: IM, Telnet, HTTP, and FTP packets between Host A and Host B are now distributed over different paths between Router X and Router Y]
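Below is a rough sketch of this EtherType-based decision, with a simplified frame representation assumed for illustration: IPv4/IPv6 frames are hashed on L2+L3+L4 fields, anything else on L2 fields only.

```python
# Minimal sketch: pick hash fields for an L2 flow based on the EtherType.
import hashlib

def pick_link(fields: tuple, num_links: int = 4) -> int:
    h = hashlib.sha256(repr(fields).encode()).digest()
    return int.from_bytes(h[:4], "big") % num_links

def hash_fields(frame: dict) -> tuple:
    l2 = (frame["smac"], frame["dmac"], frame["vlan"], frame["etype"])
    if frame["etype"] in (0x0800, 0x86DD):        # IPv4 or IPv6 payload
        return l2 + (frame["sip"], frame["dip"], frame["proto"],
                     frame["sport"], frame["dport"])
    return l2                                      # non-IP: L2 fields only

ip_frame  = {"smac": "00:aa", "dmac": "00:bb", "vlan": 10, "etype": 0x0800,
             "sip": "10.1.1.1", "dip": "10.2.2.2", "proto": 6,
             "sport": 33000, "dport": 80}
arp_frame = {"smac": "00:aa", "dmac": "ff:ff", "vlan": 10, "etype": 0x0806}
print(pick_link(hash_fields(ip_frame)), pick_link(hash_fields(arp_frame)))
```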

Load Sharing on MPLS PE Routers: Ingress & Egress PE
[Figure: ingress PE, transit LSR, and egress PE along an LSP, with L3VPN, VPWS, and VPLS endpoints on the PEs]
- At the ingress PE (packets entering an MPLS LSP):
  - Can load-share across multiple LSPs and across multiple links in a LAG
  - Apply the load-sharing principles of L2 & L3 flows
- At the egress PE (packets exiting an MPLS LSP):
  - Can load-share per LSP/VC label: high-usage PWs/VPN labels will over-utilize one path
  - Or per flow: better distribution of traffic, using the LSP/VC label plus the load-sharing principles of L2 & L3 flows

Load Sharing on MPLS LSRs: Packet Speculation
- Transit LSRs (and PHP nodes) have no information about the packet payload
  - The originating LER load-balances using L2/L3/L4 hashing
  - The terminating LSR load-balances using the LSP/VC label and L2/L3/L4 hashing
  - How can a transit LSR load-share over a LAG using flows?
- The transit LSR speculates on the packet type (sketched below):
  - Checks the first nibble after the bottommost label
  - If it is 4 or 6, speculates that the packet is IPv4 or IPv6
  - Otherwise (optionally) speculates that the packet is Ethernet
  - It can now load-share using the LSP label, VC label, and L2/L3/L4 headers
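A minimal sketch of the speculation step, assuming raw label-stack bytes as input: walk the label stack to the bottom-of-stack entry (S bit set), then inspect the first nibble of the payload.

```python
# Minimal sketch: guess the MPLS payload type from the first nibble after the
# bottom-of-stack label (4 = IPv4, 6 = IPv6, else assume Ethernet).
def speculate_payload(mpls_packet: bytes) -> str:
    offset = 0
    while True:                                # walk the label stack
        label_entry = int.from_bytes(mpls_packet[offset:offset + 4], "big")
        offset += 4
        if label_entry & 0x100:                # S bit set: bottom of stack
            break
    first_nibble = mpls_packet[offset] >> 4
    if first_nibble == 4:
        return "IPv4"
    if first_nibble == 6:
        return "IPv6"
    return "Ethernet (assumed)"

# Two label entries (S=0, then S=1) followed by an IPv4 header starting with 0x45.
pkt = bytes.fromhex("00064000" "000C8140" "45")
print(speculate_payload(pkt))  # IPv4
```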

Load Balancing Algorithm: Considerations for Flow-Based Forwarding
- A good load-balancing algorithm is essential for efficiently utilizing the increased capacity of LAG/ECMP paths
- Must distribute traffic evenly
  - For example, a good algorithm needs to ensure that the effective capacity of a 32-port 10GE LAG is close to 320 Gbps
- Other considerations:
  - Number of packet-header fields that can be used for load balancing: the more fields, the better the distribution
  - Number of hash buckets: more hash buckets result in better distribution (illustrated below)
  - Minimal correlation between ECMP and LAG hashing: correlation leads to over-utilization of some paths/links
  - Ability to treat each packet type differently: for example, L2 and L3 flows have to be treated differently
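The hash-bucket point can be illustrated with a short sketch: flows hash into a fixed number of buckets and buckets are dealt round-robin to the LAG members, so each link's share of traffic evens out as the bucket count grows. The flow strings, bucket counts, and link count are synthetic.

```python
# Minimal sketch: more hash buckets -> a more even per-link share of flows.
import hashlib

def bucket_of(flow: str, num_buckets: int) -> int:
    h = hashlib.sha256(flow.encode()).digest()
    return int.from_bytes(h[:4], "big") % num_buckets

num_links = 6
flows = [f"10.0.{i // 256}.{i % 256}->198.51.100.1" for i in range(30000)]
for num_buckets in (8, 64, 1024):
    per_link = [0] * num_links
    for f in flows:
        per_link[bucket_of(f, num_buckets) % num_links] += 1   # bucket -> link
    print(num_buckets, [f"{c / len(flows):.1%}" for c in per_link])
# With 8 buckets on 6 links some links carry ~25% of flows and others ~12.5%;
# with 1024 buckets every link is close to the ideal 16.7%.
```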

Use Case 2: Load Sharing Across a 32-Port LAG Group, IPv4 Traffic Distribution Test
[Figure: IXIA-1 - Router-1 - Router-2 - IXIA-2 test topology, with 100,000 routes advertised at each hop]
- IXIA-1 to Router-1: one 1GE link = 1 routed interface; transmit 64-byte packets at 1 Gbps
- Router-1 to Router-2: 32-port 1GE Link Aggregation Group = 1 routed interface; packets load-balanced across the 32-port LAG
- Router-2 to IXIA-2: one 1GE link = 1 routed interface; Ixia receives packets on the 1GE link
- Traffic: random distribution, 127,000 source IP addresses, 16,645,890 destination IP addresses
- Measurement: monitor traffic distribution on the 32-port LAG on Router-1

Use Case 2: IPv4 Traffic Distribution Test Results
[Figure: number of transmitted packets per port across the 32-port LAG]
- Very small difference between packet rates across links
- Traffic is distributed evenly across the 32-port LAG group

Hash-Based Forwarding Issues and Solutions: Polarization Effect
- In a multi-stage network, similar routers pick the same path for flows with an identical hash
- Leads to over-utilization of some parts of the network
[Figure: Flows A and B have the same hash; every router picks the first link for flows with the same hash on the way to Network X]

Hash-Based Forwarding Issues and Solutions: Basic Hash Diversification (Neutralizes the Polarization Effect)
- Each router mixes a per-router unique ID into its hash calculations
  - Alternatively, hashing on source and destination MACs may give comparable results in most scenarios
- Similar routers now pick different links
- However, flows still stay together on the same links
[Figure: Flows A and B have the same hash; each router no longer picks the first link for flows with the same hash, achieving link diversity but not flow diversity]

Hash-Based Forwarding Issues and Solutions: Advanced Hash Diversification (Neutralizes the Polarization Effect)
- Routers at each stage of the network run a different variant of the hash algorithm, neutralizing the polarization effect
- Flows can now be distributed (see the sketch below)
[Figure: Flows A, B, and C have the same hash on Router X but different hashes on other routers; each router may pick a different link for flows with the same hash, achieving both link diversity and flow diversity]
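As a rough illustration of hash diversification, the sketch below mixes a per-router identifier into the hash so successive stages no longer make correlated choices; the router names and flow tuples are hypothetical and this is not a specific vendor's hash algorithm.

```python
# Minimal sketch: a per-router seed decorrelates link choices across stages.
import hashlib

def pick_link(flow: tuple, router_id: str, num_links: int = 2) -> int:
    h = hashlib.sha256((router_id + repr(flow)).encode()).digest()
    return int.from_bytes(h[:4], "big") % num_links

flow_a = ("10.1.1.1", "198.51.100.1", 6, 40001, 80)
flow_b = ("10.9.9.9", "198.51.100.1", 6, 40377, 80)
for router in ("R1", "R2", "R3"):   # successive stages of the network
    print(router, pick_link(flow_a, router), pick_link(flow_b, router))
# Without the router_id, every stage would compute the same hash and pick the
# same link for a given flow; with it, the choices are no longer correlated
# across stages.
```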

Summary
- Load sharing is a cost-effective technique for improving network utilization
  - Works over multiple paths and links
- Multiple methods can boost capacity at various layers
  - Can effectively increase throughput beyond the current limits of physical link capacity
- Flow-based forwarding offers many advantages for efficient utilization of the increased capacity
  - Watch out for polarization effects and neutralize them
- Not a one-size-fits-all approach
  - Choose optimal schemes based on traffic types and operator policy

Thank You!