To Add Paths or not to Add Paths




To Add Paths or not to Add Paths ftp://ftpeng.cisco.com/raszuk/addpaths/ Robert Raszuk IOS Routing Development raszuk@cisco.com 1

Objective: To present how to achieve fast connectivity restoration and BGP-level load balancing for both IP (and RFC 3107) networks, without the need for a BGP protocol extension and an entire network-wide upgrade to a new operating system. More importantly, it is to show how the above is possible without increasing the amount of BGP state in your routers. 2

Agenda: A perspective on building IPv4 or IPv6 networks with fast connectivity restoration and load balancing. Add paths reality. A different approach to distributing N paths in BGP with no need for BGP protocol changes. How to provide path redundancy in RFC 3107 networks (if time permits). How to operate without new POs when the Internet table size explodes on us (if time permits). 3

Anti-Agenda: We assume a different RD per VRF for VPNv4 & VPNv6, so no issue there; not a reason to look into add paths. Out of scope. Same-RD or inter-AS ASBR redundancy issues for L3VPNs can be addressed without add paths. Out of scope. 4

Hierarchical IGP ISP design. Basic design: POP areas around a core area. Hierarchical IGP design: each POP in a separate IGP area; ABRs acting as RRs; RRs in the data plane on the POP-to-core boundary; fully meshed RRs in the core. So what is missing for fast convergence or BGP-level load balancing? In fact NOT MUCH! Enabling best external on edge routers and on RRs is all that is needed in this typical design to provide full BGP path distribution where functionally needed. Enable IGP domain-wide BGP next-hop state propagation. 5

Hierarchical IGP ISP design. Design assumptions: full mesh of ABRs/RRs (RR1a/RR1b, RR3a/RR3b, RR4a/RR4b). In the POP: ASBRs/PEs advertise their external paths to the POP's RRs (even if an IBGP path is selected as overall best), thanks to the best external feature on the ASBRs and PEs; IBGP full mesh; as a last resort, a POP-local default route to the ABRs/RRs interfacing with the core, with admin distance over 200 (IBGP). In the core: RRs advertise the POP's best paths to the other RRs; if best external is enabled on the RRs towards the core, the local POP's best path will be advertised even if an overall path received from another RR's cluster is selected as best; IBGP full mesh. 6
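The best external behavior assumed above can be sketched as a small selection routine. This is a simplified Python model, not router code: the attribute names and the two-step tie-break are illustrative stand-ins for the full BGP decision process.

```python
# Simplified model of BGP "best external": a router always keeps (and may
# advertise) its best externally-learned path, even when an IBGP-learned
# path wins the overall best-path selection.

def best_path(paths):
    # Toy tie-break: higher local-pref wins, then EBGP over IBGP,
    # then lowest router-id. Real BGP has many more steps.
    return max(paths, key=lambda p: (p["local_pref"],
                                     p["source"] == "ebgp",
                                     -p["router_id"]))

def paths_to_advertise(paths):
    best = best_path(paths)
    adverts = [best]
    if best["source"] == "ibgp":
        external = [p for p in paths if p["source"] == "ebgp"]
        if external:
            adverts.append(best_path(external))  # the "best external"
    return adverts

paths = [
    {"nh": "P1",  "source": "ibgp", "local_pref": 200, "router_id": 1},
    {"nh": "P3a", "source": "ebgp", "local_pref": 100, "router_id": 3},
]
# Overall best is the IBGP path (higher local-pref), but with best
# external enabled the EBGP path P3a is advertised as well, so backup
# paths exist in the network where functionally needed.
```

Without the best-external rule, the second path would stay hidden on the edge router, which is exactly why scenario A below converges slowly.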

Let's walk through the main design scenarios for basic IPv4/IPv6 Internet: Scenario A: no best external on RRs and no next-hop propagation. Scenario B: no best external on RRs, with next-hop event propagation. Scenario C: best external enabled on RRs, with next-hop event propagation. Scenario C': a similar case to C, but with a flat IGP architecture. Scenario D: DMZ or single-POP exit architecture. 7

Hierarchical IGP ISP design. Intra-domain path distribution: let's observe how paths are being distributed. Assume we have 4 paths for a given prefix P: P1, P2, P3a, P3b (learned in POP1, POP2, and at POP3's ASBR3a/ASBR3b respectively), and assume P1 is best due to local preference. Best external is enabled on the edges. Scenario A: no best external on RRs and no next-hop event propagation beyond the local (best path's) POP. When P1 goes down, the RR1s need to withdraw it in BGP; only then will the subsequent best be advertised by RR2 or RR3. End-to-end connectivity restoration: SLOW f(BGP) + NO PIC + best-POP LB. 8

Hierarchical IGP ISP design. Intra-domain path distribution: let's observe how to relax the need for a service-impacting BGP withdraw. Scenario B: no best external on RRs, with next-hop event propagation via the IGP domain-wide. When P1 goes down, the IGP floods its next-hop-down event network-wide (POP2 and POP4 still need to wait for BGP!). This triggers simultaneous invalidation of P1 on all RRs, as well as PIC where backup paths are present; the RRs/ASBRs/PEs then advertise P2 or P3a/P3b as best in BGP. Note: PIC here takes place primarily on the POP-to-core boundary ABRs/RRs, as well as on ASBRs/PEs within the POP. End-to-end connectivity restoration: FASTER + SOME PIC f(IGP) + best-POP LB. 9
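The next-hop-down trigger can be modeled in a few lines. This is an illustrative Python sketch with made-up structures: each prefix has a pre-installed backup path, so losing the best path's next hop flips forwarding in f(IGP) time without waiting for a BGP withdraw.

```python
# Toy model of IGP-driven BGP path invalidation with a PIC backup.
# When the IGP floods a next-hop-down event, every BGP path resolving
# via that next hop is invalidated at once, and forwarding falls back
# to the pre-computed backup path.

rib = {
    "198.51.100.0/24": [
        {"nh": "P1", "rank": 1, "valid": True},   # current best
        {"nh": "P2", "rank": 2, "valid": True},   # PIC backup
    ],
}

def active_path(prefix):
    candidates = [p for p in rib[prefix] if p["valid"]]
    return min(candidates, key=lambda p: p["rank"])["nh"]

def next_hop_down(nh):
    # IGP event: invalidate all BGP paths resolving via this next hop.
    for paths in rib.values():
        for p in paths:
            if p["nh"] == nh:
                p["valid"] = False

# Before the event traffic follows P1; after the IGP floods the
# P1-down event, the backup P2 takes over immediately, and BGP
# re-advertisement happens later without impacting traffic.
```

The point of the scenarios above is precisely where such a backup path is already present: only routers that received more than the single overall best path can perform this switchover.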

Hierarchical IGP ISP design. Intra-domain path distribution: let's observe how to restore connectivity without waiting for BGP at all. Scenario C: best external enabled on RRs, and next-hop events advertised domain-wide. Benefits: fast connectivity restoration; BGP PIC on RRs, ASBRs and PEs; IBGP load balancing (local and remote); no need to upgrade all the ASBRs/PEs; no need to push all the paths to all ASBRs/PEs. Note: PIC here takes place primarily on the POP-to-core boundary ABRs/RRs, as well as on ASBRs/PEs within the POP. End-to-end connectivity restoration: FASTEST + FULL PIC + FULL LB! 10

Flat IGP ISP design. Scenario C': best external enabled on RRs, next-hop state advertised domain-wide (flat IGP, so no need for configured leaking from core to POP). Design guide: ASBRs/PEs peer with two different RRs which are in the data path; ASBRs/PEs have a static default towards the RRs with admin distance over 200; ASBRs/PEs have best external and PIC on; RRs have best external and PIC on; full mesh of RRs, directly connected, in a ring, or using some encapsulation technology. Works with next-hop-self on peering points or with next hop unchanged. 11

Original/typical ISP design. Scenario D: DMZ or single-POP exit architecture; a specific architecture where your entire exit traffic goes via the same POP or the same ASBRs (every POP holds P3 with the next hop pointing towards POP3). Solution A: split your upstream/peering ASBRs into different POPs and use scenario C or C'. Solution B (POP3 = DMZ): set next-hop-self on the POP (on the RRs or on the ASBRs), optionally via an anycast address on a ghost loopback, and remove the flooding and leaking of the original BGP next-hop reachability in the IGP (not needed). 12

Why networks got flattened... Now pictures of the networks started to look like this: a flat mesh of RRs and ASBRs. And there are reasons behind it... For any MPLS application an end-to-end LSP needs to be created. LDP requires an exact match with the RIB (practically a /32 host route); when you have areas, you need to leak all /32s across, end to end. TE requires end-to-end flooding of information about link bandwidth reservations to calculate and signal the EROs; inter-IGP-area TE is a project of its own, another reason for a flat IGP. And let me just note that all the nice applications like L3VPN (aka 2547), L2VPNs, or multicast VPNs work very well over automatic IP encapsulation in any IGP model: no need to create any tunnels manually, no need to flatten your network! Draw your own conclusions. Is this off topic??? Nope... 13

Consequences for BGP design. The concept of flat IGP domains, and a race between vendors over whose IGP code can accommodate bigger areas, had started. As the leading application was L3VPN and the core could not afford to maintain any VPN state, the VPNs' RRs were naturally control-plane devices. Some customers had, and still have, two different networks: flat for VPNs & hierarchical for IPv4/v6, but the price to maintain both is high. Some did not flow with the river and started to run L3/L2 VPN services over IP encapsulation, very successfully maintaining their hierarchical IGP model. Others, following the VPN/MPLS model, moved their IPv4/IPv6 RRs into the core as control-plane devices... no IP lookups were performed there anyway, as LSPs were edge to edge. It is very difficult to provide true hot-potato routing without pushing all paths everywhere! The real issue is that you cannot benefit from BGP path virtualization. 14

So now let's compare our previous designs with the add-paths model. Our Internet model C: hierarchical IGP with next-hop leaking core to POP, full mesh of ABRs/RRs, paths P1, P2, P3a, P3b advertised with next hop unchanged. Now let's assume all routers are upgraded to support add paths: every RR and every edge router ends up holding all four paths (P1, P2, P3a, P3b), each with its original next hop. Best external is also enabled on all PEs/ASBRs. Let's pay attention to the difference in the amount of control-plane BGP state on all edge routers (ASBRs & PEs); and this is only for a net with 3 exit points. 15

So now let's compare our previous designs with the add-paths model. Our Internet model C': flat IGP, full mesh of RRs. Let's assume all routers are upgraded to support add paths: again, every RR and every edge router (ASBRs & PEs) carries all of P1, P2, P3a, P3b with their original next hops. Best external is also enabled on all PEs/ASBRs. Let's pay attention to the difference in the amount of control-plane BGP state on all edge routers (ASBRs & PEs); and this is only for a net with 3 exit points. 16

Add paths reality... All paths x 2, due to peerings to two RRs! A new BGP protocol encoding, a new capability, a new network-wide upgrade. New design questions... Which additional paths to distribute? All paths? Nth best? All AS-wide best paths? Neighbour-AS group best paths? Best local-pref / second local-pref? Paths at the penultimate step of the best-path decision? Would all edge routers be able to carry the additional control-plane load? What is the driver: fast connectivity restoration (FC/PIC), load balancing, hot-potato routing, oscillation suppression? Those can all be addressed with proper network design without add paths; the only other choice is end-to-end tunneling. Notice the paths held at the core for P1 & P2... 17

Add paths: new NLRI encoding. The NLRI encodings specified in [RFC4271, RFC4760] are extended as follows:

+--------------------------------+
|   Path Identifier (4 octets)   |
+--------------------------------+
|       Length (1 octet)         |
+--------------------------------+
|       Prefix (variable)        |
+--------------------------------+

The NLRI encoding specified in [RFC3107] is extended as follows:

+--------------------------------+
|   Path Identifier (4 octets)   |
+--------------------------------+
|       Length (1 octet)         |
+--------------------------------+
|       Label (3 octets)         |
+--------------------------------+
|              ...               |
+--------------------------------+
|       Prefix (variable)        |
+--------------------------------+

18
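The extended IPv4 unicast encoding above can be exercised in a few lines of Python. This is a sketch of the wire format as drawn on the slide (a 4-octet path identifier prepended to the usual length + minimal-octets prefix); the function names are our own.

```python
import struct
import ipaddress

def encode_addpath_nlri(path_id, prefix):
    # Path Identifier (4 octets) + Length (1 octet) + Prefix (minimal octets)
    net = ipaddress.ip_network(prefix)
    nbytes = (net.prefixlen + 7) // 8          # prefix uses only the octets it needs
    return (struct.pack("!IB", path_id, net.prefixlen)
            + net.network_address.packed[:nbytes])

def decode_addpath_nlri(data):
    path_id, plen = struct.unpack("!IB", data[:5])
    nbytes = (plen + 7) // 8
    addr = data[5:5 + nbytes].ljust(4, b"\x00")  # pad back to 4 octets
    return path_id, f"{ipaddress.IPv4Address(addr)}/{plen}"

wire = encode_addpath_nlri(7, "203.0.113.0/24")
# 4 (path id) + 1 (length) + 3 (prefix octets) = 8 octets on the wire
```

The 4 extra octets per NLRI are the cheap part; the real cost discussed in this deck is the additional path state every edge router must then hold.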

Add paths: an easy alternative. In the event of not being able to use a design with RRs in the data path... Diverse BGP Path Distribution, design 1. Assume that your goal is fast connectivity restoration via FC/PIC and/or IBGP multipath load balancing (overall best plus second best). RR1' and RR2' are shadow RRs: one additional shadow RR per cluster, configured to calculate and advertise the Nth best path to its clients; they can do it on a per-AFI/SAFI basis; they use the same IGP metric as the best RRs, or IGP metric comparison is disabled on both. Key benefits: easy deployment, since no upgrade of any existing edge router is required, just a new IBGP session per extra path; works within a flat domain or within each area of a hierarchical network; no new protocol extension required. IETF draft: draft-raszuk-diverse-bgp-path-dist-00. 19

Add paths: an easy alternative. In the event of not being able to use a design with RRs in the data path... Diverse BGP Path Distribution, design 2. Same goal: fast connectivity restoration via FC/PIC and/or IBGP multipath load balancing (overall best plus second best). Here the RRs themselves play both roles: they are configured to calculate and advertise the Nth best path to their clients on a per-neighbor basis; they can do it on a per-AFI/SAFI basis; the IGP metric is automatically the same as for the best paths. Key benefits: easy deployment, since no upgrade of any existing edge router is required, just a few more IBGP sessions and a code upgrade on the existing RRs; no new protocol extension required. IETF draft: draft-raszuk-diverse-bgp-path-dist-00. 20
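What a diverse-path RR computes can be sketched as repeated best-path selection with earlier winners removed. Illustrative Python only: the two-attribute ranking is a toy stand-in for the full BGP decision process.

```python
# Toy model of "diverse path" reflection: the Nth best path for a
# prefix is found by running best-path selection with the previous
# N-1 winners excluded. An RR (or shadow RR) configured for N=2
# advertises the second best path to its clients.

def best(paths):
    # Toy decision: higher local-pref wins, then lower IGP metric.
    return max(paths, key=lambda p: (p["local_pref"], -p["igp_metric"]))

def nth_best(paths, n):
    remaining = list(paths)
    for _ in range(n - 1):
        remaining.remove(best(remaining))
    return best(remaining)

paths = [
    {"nh": "P1",  "local_pref": 200, "igp_metric": 10},
    {"nh": "P2",  "local_pref": 100, "igp_metric": 5},
    {"nh": "P3a", "local_pref": 100, "igp_metric": 20},
]
# A normal RR advertises P1; a diverse-path RR with N=2 advertises P2,
# so every client holds a pre-installed backup without any new BGP
# encoding or edge-router upgrade.
```

This is the whole trick of the draft: path diversity comes from running the existing selection algorithm more than once per prefix, not from changing the protocol.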

Instead of a full IBGP mesh in the core/POP... Design details: assume our goal is to build a redundant network and distribute the 2nd and 3rd best paths with different exit points. Each shadow RR (L2-RR) calculates its own best path and advertises it to its clients (the POP RRs). Any encapsulation can be used within each area: IP or MPLS (optional). Any new application can be built in a scalable manner in such a design. + Removes the need to create a lot of IBGP sessions in the core. + The very same applies to the intra-POP design, instead of the assumed full mesh. 21

Scaling MPLS networks... The next 4 slides: To present what vendors today tell you about how to scale large MPLS networks. To address the case of the 3107 requirements for add paths with a simple IP-like hierarchy. 22

Scaling large MPLS networks. Remember the basic picture about MPLS end-to-end LSPs? Perhaps you are now considering how your multiservice network should look and what your choice of encapsulation should be. Hint! Networks grow, and BRAS/DSLAM boxes are becoming PEs... this results in a scale of 10,000 to 30,000 PEs. I don't think anyone intends to keep it flat. And please just guess what the answer to such an MPLS scaling challenge is... 23

... to come back to the traditional IGP design and introduce hierarchy in MPLS: POP areas around a core area once again. Let's zoom into the MPLS hierarchy... 24

MPLS hierarchy vs. the traditional ISP model. The new MPLS hierarchical model for L3VPNs, next to the IPv4 classic ISP model from slide 4: an LDP area per POP and in the core, core-to-POP 3107, plus a full IBGP 3107 mesh of ABRs/RRs, compared with the full mesh of ABRs/RRs in the classic design. All PEs' loopbacks are now carried in BGP SAFI 4! ABRs are RRs doing next-hop-self for 3107 on /32s. The origin POP redistributes IGP+LDP next hops into 3107 BGP. LDP is local to each area. The label stack grows by +1 due to the introduced hierarchy. Q: Isn't IP encapsulation, with its native summarization, cleaner, simpler, nicer??? 25

Example of 3107 path distribution... Take the Inter-AS option C scenario and see how it maps to the IPv4 ISP scenario C. Each area runs IGP/LDP + IBGP 3107; then POP-to-core 3107, intra-core 3107, and Inter-AS option C EBGP 3107. Prefix A is re-advertised with next-hop-self and a fresh label at each RR level: the origin POP allocates labels L1/L2, the middle RRs (RR2s) allocate L3/L4, and the RR1s allocate L5/L6 towards the source, doing next-hop-self both to clients and to the other RRs (*). RR1's LFIB for prefix A: incoming L5 or L6 is swapped to L3 or L4 and forwarded via IGP/LDP towards the RR2s. RR2's LFIB for prefix A: incoming L3 or L4 is swapped to L1 or L2 and forwarded via IGP/LDP towards the origin. 3107 PIC is possible at the RRs; 3107 load balancing is possible at each RR: the benefit of next-hop-self on the 3107 IBGP sessions from the RRs. ABRs are RRs doing next-hop-self for 3107 on /32s; LDP is local to each area; the label stack grows by +1 due to the introduced hierarchy. NHS on the POP-to-core boundary is only needed in a single-POP exit architecture. 26
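The label-swap chain above can be traced with a small Python model. The label names follow the slide; the per-router tables are, of course, an illustrative simplification (one swap entry per label, no ECMP).

```python
# Toy trace of the hierarchical 3107 LFIB for prefix A: each RR level
# does next-hop-self, so it swaps the label it advertised upstream for
# the label it learned from the next level down, until the origin POP
# resolves the path natively via IGP/LDP.

lfib = {
    "RR1":  {"L5": "L3", "L6": "L3"},   # swap towards the RR2 level
    "RR2":  {"L3": "L1", "L4": "L1"},   # swap towards the origin POP
    "ASBR": {"L1": "IGP/LDP"},          # delivered natively in the POP
}

def trace(router_chain, label):
    hops = []
    for router in router_chain:
        label = lfib[router][label]
        hops.append((router, label))
    return hops

# A packet entering with L5 is swapped L5->L3 at RR1 and L3->L1 at RR2,
# then delivered via IGP/LDP in the origin POP: one extra label-swap
# point per layer of hierarchy, which is exactly where 3107 PIC and
# load balancing can act.
```

Because each level only knows the next level's labels, a failure is repaired locally at that level's LFIB instead of requiring end-to-end re-signaling.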

Conclusions so far... As we observe, there is really close congruency between the traditional ISP hierarchical network model and the new "scalable" MPLS backbone model. Your IPv4/v6/3107 RRs being in the data-plane path relaxes the need for an add-paths deployment. It's all about path virtualization through hierarchy, rather than brute-force network-wide path flooding end to end. Diverse BGP Path Distribution can be used to provide more-than-best-path advertisement in the case of hierarchical reflection instead of full meshing within each area. It also works well when your RRs are already control-plane-only devices. Even for VPNs, just consider those RRs to be in the VPN data plane executing normal option B; especially with RT constraint this may be a significant scaling plus. 27

And when the Internet table size explodes...... and some of your edge devices can't fit everything into their FIBs any longer, you have two choices: 1. Enter a new PO with your vendor. 2. Configure within each POP a default route (0.0.0.0/0) towards the ABRs/RRs for edge FIB install. Details: Note that you still have far fewer paths on the edge than in the flat design. You still advertise all best nets in the control plane from the ABRs/RRs to the POPs. PEs serving stub/domestic customers with smaller FIBs can continue to operate just fine; in the FIB they simply point to the ABRs/RRs. In a flat design a comparable solution is very challenging to deploy. This is nothing else than a special case of Paul Francis's Virtual Aggregation proposal, as described in FIB Suppression with Virtual Aggregation, draft-ietf-grow-va-01.txt. 28
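Choice 2 amounts to filtering what gets installed in the edge FIB. A minimal sketch, with invented route markers ("local" vs "ABR") standing in for whatever policy identifies a PE's own customer routes:

```python
# Toy model of virtual-aggregation-style FIB suppression: a small-FIB
# edge PE installs only its directly attached routes plus a default
# pointing at the POP's ABRs/RRs; everything else stays in the BGP
# control plane only.

full_bgp_table = {
    "0.0.0.0/0":       "ABR",    # default towards the POP boundary
    "198.51.100.0/24": "local",  # this PE's own customer route
    "203.0.113.0/24":  "ABR",    # transit prefix, reachable via default
}

def edge_fib(table):
    # Install local/customer routes and the default; suppress the rest.
    return {prefix: nh for prefix, nh in table.items()
            if nh == "local" or prefix == "0.0.0.0/0"}

fib = edge_fib(full_bgp_table)
# The transit prefix is suppressed from the FIB: packets for it follow
# the default to the ABRs/RRs, which hold the full table and perform
# the extra line-rate lookup.
```

The control plane still carries every best net, so routing correctness is unchanged; only the data-plane table on constrained PEs shrinks.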

Final conclusions... The Add Paths proposal, or any other such proposal, is just a choice of technology for flooding more BGP paths around the network. This presentation was not arguing that this particular way of flooding is wrong: the add-paths semantics are IMHO just fine, although the diverse-path idea seems much easier and does not require an upgrade of all BGP speakers in your network. The talk is about BGP path flooding to the edges being itself a questionable idea, regardless of the encoding used. Any IBGP path flooding could be used between RRs, if needed, to establish an RR hierarchy: no objections there. Similarly, any EBGP path flooding could be employed in IX route-server environments when required. While the research communities are scratching their heads over how much to flood, i.e. how many paths to distribute to the edges of the network, excellent as their findings are, the most important point is missed: this applies only to network architectures which have removed RRs from the data path. With a proper hierarchical architecture, engineering teams are able to aggregate/virtualize BGP paths at the price of an additional line-rate lookup. It also protects networks from an explosion of the BGP state that PEs/ASBRs need to keep. 29

Acknowledgement... My special thanks go to the individuals below for their inspiration and review of this work: Prof. Lixia Zhang, Randy Bush, Juan Alcaide, Christian Cassar, Cristel Pelsser, Laurent Vanbever, Pedro Marques, Keyur Patel, Rex Fernando. Also many thanks to a few anonymous reviewers who, for political reasons, preferred not to expose their names/affiliations. 30

References... Advertisement of Multiple Paths in BGP: draft-walton-bgp-add-paths-06. Advertisement of the best external route in BGP: draft-ietf-idr-best-external-01. Fast Connectivity Restoration Using BGP Add-path: draft-pmohapat-idr-fast-conn-restore-00. Analysis of paths selection modes for Add-Paths: draft-vvds-add-paths-analysis-00. Distribution of diverse BGP paths: draft-raszuk-diverse-bgp-path-dist-01. FIB Suppression with Virtual Aggregation: draft-ietf-grow-va-01. 31

Questions are welcome...... both on-line as well as off-line. raszuk@cisco.com ftp://ftpeng.cisco.com/raszuk/addpaths/ 32