MPLS Traffic Engineering with Auto-Bandwidth: Operational Experience and Lessons Learned
Mohan Nanduri, Mark Kasten, Naoki Kitajima (Microsoft)




Agenda
- Auto-Bandwidth basics and overview
- Operational experience
- Lessons learned
- Configuration examples
- Show command output examples

MPLS Auto-Bandwidth Basics and Overview
- Automates the process of monitoring and adjusting LSP bandwidth.
- MPLS auto-bandwidth measures the traffic flowing through each LSP and automatically adjusts the LSP's signaled bandwidth based on that measurement.
- LSP bandwidth is adjusted on a per-LSP basis according to configurable values.
- The automatic bandwidth adjustment feature treats each LSP independently, adjusting bandwidth according to the adjustment frequency configured for that LSP without regard to any adjustments previously made or pending for other LSPs.

MPLS Auto-Bandwidth Basics and Overview
Auto-Bandwidth calculation mechanism:
- Collects average per-LSP traffic utilization samples and picks the highest measured bandwidth value observed during the configured adjust-interval.
- At the end of each interval, the highest measured bandwidth value is compared with the existing LSP bandwidth.
- If the delta between the existing LSP bandwidth and the highest measured bandwidth value is equal to or greater than the adjust-threshold, the LSP tries to re-signal with the new bandwidth; otherwise the LSP bandwidth remains the same.
- The make-before-break mechanism kicks in and the LSP re-signals for the new bandwidth. Once the new LSP is successfully set up, the maximum bandwidth value is reset to zero.
- The above process is repeated for the next adjust-interval.
Auto-Bandwidth uses two mechanisms to adjust the bandwidth:
- Adjust-interval
- Overflow/underflow
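As a rough illustration only, the Python sketch below mimics the end-of-interval decision described above. The function name, the interpretation of adjust-threshold as a percentage of the current reservation, and the min/max clamping are assumptions made for illustration, not vendor code.

    def adjust_interval_decision(current_bw_bps, samples_bps, adjust_threshold_pct,
                                 minimum_bw_bps, maximum_bw_bps):
        """Return the bandwidth the LSP should try to re-signal at the end of an
        adjust-interval, or None if no adjustment is needed."""
        if not samples_bps:
            return None

        # Highest averaged utilization sample observed during the interval.
        max_avg_bw = max(samples_bps)

        # Only re-signal if the delta versus the current reservation meets the
        # configured adjust-threshold (treated here as a percentage of current bw).
        delta = abs(max_avg_bw - current_bw_bps)
        if current_bw_bps > 0 and delta < current_bw_bps * adjust_threshold_pct / 100.0:
            return None

        # Clamp to the configured minimum/maximum bandwidth.
        return min(max(max_avg_bw, minimum_bw_bps), maximum_bw_bps)

    # Example mirroring the next slide: first interval peaks at 100 Mbps, next at 50 Mbps.
    Mbps = 1_000_000
    print(adjust_interval_decision(10 * Mbps, [60 * Mbps, 100 * Mbps], 10, 1 * Mbps, 2000 * Mbps))
    # -> 100000000 (re-signal at 100 Mbps)
    print(adjust_interval_decision(100 * Mbps, [40 * Mbps, 50 * Mbps], 10, 1 * Mbps, 2000 * Mbps))
    # -> 50000000 (drops back to 50 Mbps in the following interval)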

MPLS Auto-Bandwidth with Adjust-Interval
- If the highest measured bandwidth value is 100 Mbps for a given adjust-interval, the LSP is re-signaled with 100 Mbps and the measured maximum is reset to zero.
- When the next adjust-interval starts and the highest measured bandwidth value is 50 Mbps, the LSP is re-signaled with 50 Mbps, provided the adjust-threshold is met.
- If for any reason the LSP cannot signal 50 Mbps, the LSP's bandwidth remains at 100 Mbps.

MPLS Auto-Bandwidth with Overflow
- Triggers when the highest measured bandwidth value is higher than the currently signaled LSP bandwidth.
- The delta between the highest measured bandwidth value and the currently signaled LSP bandwidth must be greater than or equal to the configured adjust-threshold.
- The number of occurrences must reach the configured overflow count consecutively; the overflow counter resets to the configured overflow value if the measured bandwidth decreases during one of the sampling cycles.
- The LSP is re-signaled with the new bandwidth value before the configured adjust-interval expires, and the adjust-interval timer is reset to its configured value.
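A similarly hedged sketch of the overflow trigger: the consecutive-sample counter below and its reset behavior are assumptions meant only to illustrate the description above, not the vendor's implementation.

    class OverflowTracker:
        def __init__(self, overflow_limit, adjust_threshold_pct):
            self.overflow_limit = overflow_limit
            self.adjust_threshold_pct = adjust_threshold_pct
            self.count = 0

        def on_sample(self, sample_bps, current_bw_bps):
            """Feed one utilization sample. Returns True when the LSP should be
            re-signaled immediately, before the adjust-interval expires."""
            delta = sample_bps - current_bw_bps
            # Sample must exceed the current reservation by at least the threshold.
            if delta > 0 and delta >= current_bw_bps * self.adjust_threshold_pct / 100.0:
                self.count += 1
            else:
                # A sample below the threshold breaks the consecutive run.
                self.count = 0
            if self.count >= self.overflow_limit:
                self.count = 0   # start counting again after the early adjustment
                return True      # caller re-signals and resets the adjust-interval timer
            return False

    # Three consecutive samples above threshold trigger an early re-signal.
    Mbps = 1_000_000
    tracker = OverflowTracker(overflow_limit=3, adjust_threshold_pct=10)
    for sample in [120 * Mbps, 125 * Mbps, 130 * Mbps]:
        if tracker.on_sample(sample, current_bw_bps=100 * Mbps):
            print("re-signal before adjust-interval expiry")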

Benefits of Auto-Bandwidth
- The network can react to sudden bursts of traffic in near real-time without relying on manual intervention.
- Effective use of bandwidth resources by minimizing over-subscription/padding of LSP bandwidth.
- Maximizes the usage of available bandwidth and optimizes the network to signal better paths.

Why auto-bandwidth for us?
- Extensive use of traffic engineering with offline bandwidth calculation, adjusting bandwidth based on historical traffic data.
- Bandwidth adjustments were lagging behind actual demand.
- Not reacting to sudden spikes or shifts in traffic.

Operational Experience
- Auto-bandwidth has been deployed globally since early 2009.
- Some initial skepticism about letting the device adjust bandwidth automatically, particularly from engineers with an ISP background who were used to offline bandwidth calculation.
- Has worked fairly well so far.
- Ran into some issues early on:
  - Smear timers not spacing LSPs properly.
  - Bloated traffic stats used to set up LSPs.
- Phased rollout of auto-bandwidth globally.
- Extensive analysis was performed on the network after auto-bandwidth was introduced, in order to optimize it:
  - Significant churn caused by smaller LSPs; a tiered LSP model was introduced to reduce churn (see the sketch below).
  - RSVP interface subscriptions were modified to account for spikes in traffic.
  - Sudden spikes in traffic were still an issue; automatic LSP creation and deletion using event scripts.
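The tiered LSP model mentioned above can be pictured roughly as follows. The tier boundary and the mapping to the auto-bandwidth profiles shown later in the configuration slide are hypothetical values chosen only for illustration.

    # Rough illustration of a tiered LSP model for reducing auto-bandwidth churn.
    # Tier boundaries and profile names are hypothetical, loosely modeled on the
    # configuration groups shown later in this deck.
    Mbps = 1_000_000

    # (upper bound of historical peak, apply-group / auto-bandwidth profile)
    TIERS = [
        (50 * Mbps,    "autobw_lsp_setup_1m_profile"),  # small LSPs: coarse threshold, low max
        (float("inf"), "autobw_lsp_setup"),             # everything else: finer threshold, high max
    ]

    def pick_profile(historical_peak_bps):
        """Map an LSP's historical peak utilization to an auto-bandwidth profile.
        Small LSPs get a coarser adjust-threshold so they re-signal less often."""
        for upper_bound, profile in TIERS:
            if historical_peak_bps < upper_bound:
                return profile
        return TIERS[-1][1]

    print(pick_profile(3 * Mbps))    # -> autobw_lsp_setup_1m_profile
    print(pick_profile(400 * Mbps))  # -> autobw_lsp_setup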

Lessons Learned
- A significant amount of time was spent understanding the internals; it was tough to get information from the vendor initially.
- Many hours were spent testing the feature. It was a big change to the network; test all the available knobs thoroughly. We ran into multiple issues.
- Thorough planning and training made the deployment very smooth.
- Correlation scripts to check traffic stats against the MPLS bandwidth setup (a sketch follows the show-command output below).
- Show commands lack historical bandwidth adjustment information; the information has to be gleaned from multiple log files.
- Multiple enhancements have been submitted to display information that makes troubleshooting easier, to name a few: show previously adjusted bandwidth values, the maximum bandwidth ever used, and a counter to track bandwidth adjustments.

Configuration

groups {
    autobw_lsp_setup {
        protocols {
            mpls {
                label-switched-path <*> {
                    ldp-tunneling;
                    soft-preemption;
                    admin-group include-any [ gold ];
                    adaptive;
                    auto-bandwidth {
                        adjust-interval 900;
                        adjust-threshold 10;
                        minimum-bandwidth 1m;
                        maximum-bandwidth 2g;
                    }
                }
            }
        }
    }
    autobw_lsp_setup_1m_profile {
        protocols {
            mpls {
                label-switched-path <*> {
                    ldp-tunneling;
                    admin-group include-any [ gold ];
                    adaptive;
                    auto-bandwidth {
                        adjust-interval 900;
                        adjust-threshold 50;
                        minimum-bandwidth 1m;
                        maximum-bandwidth 50m;
                    }
                }
            }
        }
    }
}

label-switched-path rtr1-to-rtr2 {
    apply-groups autobw_lsp_setup;
    to 10.1.1.2;
    admin-group exclude [ bad ];
}
label-switched-path rtr1-to-rtr3 {
    apply-groups autobw_lsp_setup_1m;
    to 10.1.1.3;
    admin-group exclude [ bad ];
}

mpls {
    statistics {
        file mpls_stats.log size 1m files 10;
        interval 60;
        auto-bandwidth;
    }
}

Show Commands

mnanduri@sn1-96cb-1a> show mpls lsp ingress extensive
Ingress LSP: 119 sessions

10.1.1.1
  From: 10.1.1.2, State: Up, ActiveRoute: 86, LSPname: rtr1-to-rtr2
  ActivePath: standard_path (primary)
  LSPtype: Static Configured
  LoadBalance: Random
  Autobandwidth
  MinBW: 5Mbps MaxBW: 250Mbps
  AdjustTimer: 900 secs AdjustThreshold: 50%
  Max AvgBW util: 716.416kbps, Bandwidth Adjustment in 806 second(s).
  Overflow limit: 0, Overflow sample count: 0
  Encoding type: Packet, Switching type: Packet, GPID: IPv4
 *Primary   standard_path   State: Up
    Priorities: 3 3
    Bandwidth: 7.53702Mbps
    OptimizeTimer: 900
    SmartOptimizeTimer: 180
    Include Any: transpacific favored core
    Exclude: transatlantic
    Reoptimization in 226 second(s).

mohan@sat-96c-poc-netwk-1a# run show log mpls-new.stats
Jan  2 04:53:33 trace_on: Tracing to "/var/log/mpls-new.stats" started
  HiPri-96c-to-16c (LSP ID 2, Tunnel ID 23040) 18663575116 pkt 9145151806840 Byte 49709 pps 24357859 Bps Reserved Bw 0 Bps
  96c-to-16c-1 (LSP ID 2, Tunnel ID 23044) 54876339 pkt 26889406110 Byte 0 pps 0 Bps Reserved Bw 0 Bps
Jan  2 04:54:33 Total 34 sessions: 7 success, 0 fail, 27 ignored
  HiPri-96c-to-16c (LSP ID 2, Tunnel ID 23040) 18669639725 pkt 9148123465250 Byte 51366 pps 25169789 Bps Reserved Bw 0 Bps
  96c-to-16c-1 (LSP ID 2, Tunnel ID 23044) 54876339 pkt 26889406110 Byte 0 pps 0 Bps Reserved Bw 0 Bps
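Finally, a hedged sketch of the kind of correlation script mentioned under Lessons Learned, assuming log lines shaped like the mpls statistics output above. The regular expression and the 20% mismatch tolerance are assumptions for illustration.

    # Sketch of a correlation check between observed LSP throughput and the
    # reserved bandwidth, assuming lines shaped like the mpls statistics log
    # shown above. The regex and the 20% tolerance are assumptions.
    import re

    LINE_RE = re.compile(
        r"^\s*(?P<lsp>\S+) \(LSP ID \d+, Tunnel ID \d+\) "
        r"\d+ pkt \d+ Byte \d+ pps (?P<bps>\d+) Bps Reserved Bw (?P<rsvd>\d+) Bps"
    )

    def find_mismatches(log_lines, tolerance=0.2):
        """Yield (lsp, measured_Bps, reserved_Bps) where measured traffic exceeds
        the reservation by more than the tolerance."""
        for line in log_lines:
            m = LINE_RE.match(line)
            if not m:
                continue
            measured = int(m.group("bps"))
            reserved = int(m.group("rsvd"))
            if measured > reserved * (1 + tolerance):
                yield m.group("lsp"), measured, reserved

    sample = [
        "  HiPri-96c-to-16c (LSP ID 2, Tunnel ID 23040) 18663575116 pkt 9145151806840 Byte 49709 pps 24357859 Bps Reserved Bw 0 Bps",
    ]
    for lsp, measured, reserved in find_mismatches(sample):
        print(lsp, measured, reserved)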