Provider Backbone Bridging Traffic Engineering of Carrier Ethernet Services

Introduction

Recently, a number of technologies have emerged for transporting Carrier Ethernet services. One such technology, Provider Backbone Bridging Traffic Engineering (PBB-TE), addresses several existing challenges of natively extending Ethernet services across a provider's network. Until now, other technologies such as SONET/SDH or Multi-Protocol Label Switching (MPLS) have been required to build large-scale networks. However, many carriers are actively seeking native Ethernet networks that reduce operational and capital costs while enabling them to deliver a wide range of existing and emerging applications and services efficiently. Due to its ubiquity and ease of use, Ethernet, in the form of PBB-TE, is positioned to capitalize on the sizeable opportunity of delivering and transporting Carrier Ethernet services.

Carrier Ethernet Services

The telecommunications industry has experienced the continued erosion of legacy technologies such as Frame Relay (FR) and Time Division Multiplexing (TDM), which are being supplanted by an exciting new type of Ethernet called Carrier Ethernet. Defined by the Metro Ethernet Forum (MEF), Carrier Ethernet includes five specific service provider-influenced attributes that distinguish it from traditional enterprise-class Ethernet. They are:

> Standardized services
> Quality of Service (QoS)
> Service management
> Reliability
> Scalability

Transitioning from complex routed architectures and protocols to an easier-to-use switch-based approach has advantages in delivering Carrier Ethernet services; however, there are some drawbacks, particularly in the areas of scalability and reliability. PBB-TE has been developed to address these shortcomings.

Legacy Ethernet's Scalability Limitations

Traditional Ethernet switch-based networks have two defining characteristics that limit their topological size: learning and loop avoidance. When a switch receives a packet destined for an end station it has not yet learned, the switch replicates the packet down every connected link except the one on which the packet arrived (known as flooding). As the switch inspects each packet it receives, it remembers, or learns, the association between sending stations and ingress links. Each switch in the Layer 2 domain learns the address and associated link for every device in the network. While many metro and core devices may be able to support hundreds of thousands of addresses, requiring each device in a provider's network to handle this number of addresses is cost prohibitive and impacts protection switching schemes. When a link or device experiences a failure, the network must react to the changing topology. Often, as the number of addresses increases, the time to fail over and restore network connectivity increases.

One alternative is to introduce routers, which segment the Layer 2 network into multiple sub-networks (subnets). While providing a level of hierarchy, routing also introduces several complex and difficult-to-operate protocols, such as Border Gateway Protocol (BGP). Routing also requires more sophisticated configuration and operation, expensive hardware, and ever-faster processors. In addition, routing does not provide the same level of service transparency as switch-based Ethernet service transport.
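To make the learning and flooding behavior described above concrete, the following Python sketch models a single bridge's forwarding decision. It is a simplified illustration only; the class, method, and port names are hypothetical and no real switch implementation is implied.

```python
# Minimal model of Ethernet bridge learning and flooding (illustrative only).

class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)   # physical links on this switch
        self.fdb = {}             # forwarding database: MAC -> ingress port

    def receive(self, src_mac, dst_mac, ingress_port):
        """Return the set of ports the frame is forwarded on."""
        # Learning: remember which port the source address was seen on.
        self.fdb[src_mac] = ingress_port

        # Known unicast destination: forward on the single learned port.
        if dst_mac in self.fdb:
            return {self.fdb[dst_mac]}

        # Unknown destination: flood on every port except the ingress port.
        return self.ports - {ingress_port}


bridge = LearningBridge(ports=["p1", "p2", "p3"])
print(bridge.receive("ca:fe:00:00:00:01", "ca:fe:00:00:00:02", "p1"))  # floods to p2, p3
print(bridge.receive("ca:fe:00:00:00:02", "ca:fe:00:00:00:01", "p2"))  # learned: back to p1
```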
For these reasons, the industry has devoted resources to enhancing several aspects of switch-based Ethernet networks. As mentioned earlier, Layer 2 networks routinely flood traffic to unknown destinations. Many mesh, hub-and-spoke, and ring topologies contain physical loops in the form of redundant and sometimes inadvertent connections between devices. These loops must be logically prevented to allow flooded traffic to propagate through the network properly. If loops were allowed to remain, the flooded traffic would replicate and multiply, wreaking havoc on the network. For these reasons, Spanning Tree Protocol (STP), and later
network enhancements such as Multiple STP (MSTP) and Rapid STP (RSTP), were invented to detect and eliminate these loops.

Figure 1 shows a provider network with a mesh interconnecting several Ethernet switch-based devices known as Provider Bridges (PBs), which are under the control of the provider. These devices support one or more Ethernet service transport technologies, such as the standard IEEE 802.1ad PB. As shown in Figure 1, the physical topology, while providing some redundancy, contains several loops. Figure 2 shows the same provider network with IEEE 802.1w RSTP detecting the loops and logically blocking a few of the links. Now, no loops exist in the network. Switch-based flooding and learning occur normally. The blocked links remain on standby in the event an active link or device experiences a fault.

Figure 3 depicts the path a Carrier Ethernet service may take across the provider network. RSTP-blocked links are avoided and all of the customer locations are interconnected. While moderately scaled provider networks are possible using RSTP and MSTP, some carriers remain skeptical about performance during failover situations. Larger networks have significantly more links and Media Access Control (MAC) addresses to manage, straining the capability of RSTP and MSTP. Carriers want to minimize, and avoid if possible, sources of service disruption. PBB-TE represents one approach for addressing these concerns.

Figure 2. Loop prevention using RSTP/MSTP
Figure 3. Carrier Ethernet service delivery using IEEE 802.1ad PBs

Customer Service Delivery and Transparency

One motivation for deploying Layer 2 Virtual Private Networks (L2 VPNs) has been customer demand for interconnecting multiple sites. Customers want inexpensive, high-performance, transparent Local Area Network (LAN) services. In addition, most do not want the additional complexity of switch or router configurations. Customers also resist exchanging route tables with carriers due to security concerns and operational complexity. Increasingly, customers want to use Ethernet to interconnect locations natively.

Figure 4. MEF E-Line service (point-to-point EVC). Source: MEF

Connecting two sites creates an Ethernet-Line (E-Line) service utilizing a point-to-point Ethernet Virtual Connection (EVC), as shown in Figure 4. Customers with more than two locations want multi-site interconnectivity and would choose an E-LAN service, which supports multipoint-to-multipoint Ethernet Virtual Connections (EVCs), as depicted in Figure 5.

Figure 5. MEF E-LAN service (multipoint-to-multipoint EVC)

Completed in December 2005, IEEE 802.1ad PB is the first Ethernet bridging project expressly created for service provider networks. PB standardizes the use of multiple Virtual Local Area Network (VLAN) tags in the same frame. The format of the 802.1ad PB frame is shown in Figure 6. The highlighted field, the S-Tag carrying the service identifier, is added per IEEE 802.1ad Provider Bridging to delineate services.

Figure 6. PB frame: C-DA, C-SA, S-Tag (EType 88-a8: S-VID, PCP, DE), C-Tag (EType 81-00: C-VID, PCP, CFI), Data, FCS. Source: MEF

The existing fields of the customer frame are preserved. This allows a customer's full 4K VLAN range to be transported seamlessly across a PB network to each of its other locations. As depicted in Figure 7, one customer interconnects three locations using an E-LAN service, while another connects two sites with an E-Line service. Both techniques provide secure, transparent L2 VPNs across the PB network.
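As an illustration of the tag stacking described above, the short Python sketch below assembles the header portion of an 802.1ad frame using the standard S-Tag (0x88A8) and C-Tag (0x8100) EtherTypes. The helper names and field values are hypothetical, and the FCS is omitted.

```python
import struct

def vlan_tag(ether_type, pcp, dei, vid):
    """Build a 4-byte VLAN tag: 2-byte EtherType + 2-byte TCI (PCP | DEI | VID)."""
    tci = (pcp & 0x7) << 13 | (dei & 0x1) << 12 | (vid & 0xFFF)
    return struct.pack("!HH", ether_type, tci)

def pb_frame(c_da, c_sa, s_vid, c_vid, payload, s_pcp=0, c_pcp=0):
    """802.1ad provider bridge frame header: C-DA, C-SA, S-Tag (0x88A8), C-Tag (0x8100)."""
    header = c_da + c_sa
    header += vlan_tag(0x88A8, s_pcp, 0, s_vid)   # S-Tag added by the provider
    header += vlan_tag(0x8100, c_pcp, 0, c_vid)   # customer C-Tag, carried unchanged
    return header + payload                       # FCS omitted for brevity

frame = pb_frame(
    c_da=bytes.fromhex("001122334455"),
    c_sa=bytes.fromhex("66778899aabb"),
    s_vid=100,            # provider-assigned service VLAN
    c_vid=10,             # customer VLAN, preserved end to end
    payload=b"\x08\x00" + b"customer data",   # inner EtherType + data
)
print(frame.hex())
```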
Figure 7. L2 VPN service delivery using a PB network

Each L2 VPN enables complete customer separation. Both customers have the freedom to use internal Customer VLANs (C-VLANs) as they choose. The provider configures up to four thousand Service VLANs (S-VLANs), capable of supporting up to four thousand separate customers. Often, the 4K maximum is not the limiting factor for the provider. Rather, the aggregate number of MAC addresses and/or the physical topology demands placed upon the RSTP and MSTP protocols force the provider to segment the network or use alternative transport facilities, such as PBB-TE.

Network Quality of Service

While topology and address scaling are critical issues, an important development in IEEE 802.1ad PB is the inclusion of drop eligibility and packet marking capabilities. Rather than a fixed interpretation of the 3-bit priority field used by the legacy IEEE 802.1Q VLAN standard, PB allows a variety of Priority Code Point (PCP) encodings. As shown in Figures 8 and 9, four distinct priority/drop eligible interpretations are possible. For instance, 6P2D provides six classes of service with two of the classes supporting discard eligible (yellow) marking. Figure 8 shows the usage of the PCP and drop eligibility fields. This Layer 2 coloring allows for efficient mechanisms to handle congestion without requiring inspection of Layer 3 header information. Figure 9 provides an analysis of the benefits and limitations of PB, which can be used for comparison with PBB-TE.

Figure 8. IEEE 802.1ad PCP and drop eligibility usage

Priority, DE:    7  7DE  6  6DE  5  5DE  4  4DE  3  3DE  2  2DE  1  1DE  0  0DE
8P8D (PCP, DE):  7  7    6  6    5  5    4  4    3  3    2  2    1  1    0  0
8P0D (PCP):      7  7    6  6    5  5    4  4    3  3    2  2    1  1    0  0
7P1D (PCP):      7  7    6  6    5  4    5  4    3  3    2  2    1  1    0  0
6P2D (PCP):      7  7    6  6    5  4    5  4    3  2    3  2    1  1    0  0
5P3D (PCP):      7  7    6  6    5  4    5  4    3  2    3  2    1  0    1  0

Figure 9. PB benefits and limitations

Benefits:
> Transparency of full 4K C-VID range
> Ability to determine Layer 2 drop eligibility
> Service PCP assigned by provider or determined by C-Tag
> Native support of E-LAN services
> Separation of customer and provider control domains
> All customer Layer 2 control protocols are transported through the provider network

Limitations:
> 4K services
> Topology constrained by number of aggregate connected devices
> PB devices learn all provider and customer MAC addresses
> Customer MAC addresses exposed on provider network
> Service ID derived from ingress port and C-VID
> Sub-optimal provider network capacity due to RSTP/MSTP loop prevention

With the popularity, ease of use, and enhanced QoS features ensuring service predictability, more and more operators are moving to Carrier Ethernet. Despite some limitations, adoption is growing at a swift pace. Concerns about the inherent scalability issues are being alleviated by promising MAC header encapsulation techniques.
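The 6P2D interpretation in Figure 8 can be expressed as a simple lookup table. The following Python sketch transcribes that interpretation; the function name is illustrative, and the mapping should be checked against the figure for any real deployment.

```python
# PCP encoding for the 6P2D interpretation of Figure 8 (illustrative transcription).
# Key: (priority, drop_eligible) -> PCP value carried in the S-Tag.
PCP_6P2D = {
    (7, False): 7, (7, True): 7,
    (6, False): 6, (6, True): 6,
    (5, False): 5, (5, True): 4,   # priorities 5 and 4 share a class;
    (4, False): 5, (4, True): 4,   # PCP 4 marks the frame drop eligible (yellow)
    (3, False): 3, (3, True): 2,   # priorities 3 and 2 share a class;
    (2, False): 3, (2, True): 2,   # PCP 2 marks the frame drop eligible (yellow)
    (1, False): 1, (1, True): 1,
    (0, False): 0, (0, True): 0,
}

def encode_pcp_6p2d(priority, drop_eligible):
    """Return the PCP value to place in the S-Tag for a (priority, colour) pair."""
    return PCP_6P2D[(priority, drop_eligible)]

print(encode_pcp_6p2d(4, False))  # 5 -> green frame in the 5/4 class
print(encode_pcp_6p2d(4, True))   # 4 -> yellow frame in the 5/4 class
```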
Figure 10. PBB network

Carrier Ethernet Transport Scalability and Resiliency

Within the last few years, it has become evident that Ethernet will ultimately succeed in access and metro deployments. Early in the debate over its success, MPLS was considered the most viable option for interconnecting PB networks. Then PBB surfaced. Unlike MPLS, PBB uses MAC header encapsulation to alleviate MAC address and service scalability concerns. Figure 10 shows a typical PBB network.

The format of the 802.1ah PBB frame is shown in Figure 11. The highlighted fields, the backbone MAC header, B-Tag, and I-Tag, are added per IEEE 802.1ah Provider Backbone Bridging to delineate PB flood domains and interconnects.

Figure 11. PBB frame: B-DA, B-SA, B-Tag (flood domain/PB interconnect identifier), I-Tag, C-DA, C-SA, S-Tag (EType 88-a8: S-VID, PCP, DE), C-Tag (EType 81-00: C-VID, PCP, CFI), Data, FCS

The original PB frame is preserved intact. Each of the fields, beginning with the Customer Destination Address (C-DA) and Customer Source Address (C-SA), is transported across the PBB network without modification. More importantly, these customer addresses are not learned by core devices, reducing the cost and complexity of PBB equipment.

Figure 12 provides an analysis of the benefits and limitations of PBB.

Figure 12. PBB benefits and limitations

Benefits:
> 16M services
> Transparency of multiple 4K S-VID ranges
> Customer MAC addresses tunneled on the provider network, enhancing security and scalability
> Separation of customer, provider, and backbone control domains
> Customer and PB Layer 2 control protocols are transported through the PBB network

Limitations:
> Provider Backbone Edge Bridge (PBEB) devices learn all PBB devices and transiting customer MACs
> PBB diameter limited by RSTP/MSTP constraints
> Lower PBB network capacity due to RSTP/MSTP loop prevention
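A minimal sketch of the MAC-in-MAC encapsulation shown in Figure 11 follows, assuming the standard B-Tag (0x88A8) and I-Tag (0x88E7) EtherTypes and showing only the I-SID portion of the I-Tag control information. The function and example values are illustrative rather than drawn from any product.

```python
import struct

def pbb_encapsulate(b_da, b_sa, b_vid, i_sid, inner_frame, b_pcp=0, i_pcp=0):
    """Wrap an existing PB frame (C-DA onward) in an 802.1ah backbone MAC header."""
    # B-Tag: same 4-byte layout as an S-Tag, EtherType 0x88A8, carrying the B-VID.
    b_tci = (b_pcp & 0x7) << 13 | (b_vid & 0xFFF)
    b_tag = struct.pack("!HH", 0x88A8, b_tci)

    # I-Tag: EtherType 0x88E7 followed by a 4-byte TCI whose low 24 bits are the I-SID.
    i_tci = (i_pcp & 0x7) << 29 | (i_sid & 0xFFFFFF)
    i_tag = struct.pack("!HI", 0x88E7, i_tci)

    # The original customer frame rides unchanged behind the backbone header,
    # so core bridges never see or learn customer MAC addresses.
    return b_da + b_sa + b_tag + i_tag + inner_frame

pbb_frame = pbb_encapsulate(
    b_da=bytes.fromhex("00aa00aa00aa"),   # backbone destination (egress PBEB)
    b_sa=bytes.fromhex("00bb00bb00bb"),   # backbone source (ingress PBEB)
    b_vid=20,                             # backbone flood domain / interconnect
    i_sid=10000,                          # 24-bit service instance identifier
    inner_frame=bytes(64),                # stand-in for a PB frame (C-DA onward)
)
print(len(pbb_frame), "bytes")
```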
Figure 13. PB transport network

Provider Backbone Bridging Traffic Engineering

PBB-TE has emerged to address limitations related to scalability and reliability. PBB-TE may be deployed in place of PBB or may run in parallel with PBB. In both cases, PBB-TE eliminates the need for backbone core devices to perform learning and flooding. Instead, point-to-point tunnels that transport L2 VPNs are provisioned using a sophisticated management platform. Rather than using RSTP/MSTP to prevent loops, the management platform traffic engineers the PB network to utilize significantly more capacity. Figure 13 shows the greater utilization of the backbone network with PBB-TE enabled. Primary and backup paths, or tunnels, are provisioned.

The PBB-TE frame is shown in Figure 14. The highlighted fields, the B-DA and B-Tag, identify tunnels and delineate PB flood domains and interconnects.

Figure 14. PB transport frame: B-DA (tunnel identifier), B-SA, B-Tag (tunnel identifier), I-Tag, C-DA, C-SA, S-Tag (EType 88-a8: S-VID, PCP, DE), C-Tag (EType 81-00: C-VID, PCP, CFI), Data, FCS

IEEE 802.1 has initiated a project to standardize this innovative and increasingly popular transport technology. An international standard will promote multi-vendor support and interoperability. IEEE 802.1Qay will leverage the existing IEEE 802.1ah PBB frame format, without modification, as shown in Figure 14.

Figure 15 shows two PB networks interconnected with a PBB-TE network. Two customer L2 VPNs are shown traversing primary and backup PBB-TE tunnels through the core network. One customer's (red) traffic originates at its local site. The PB encapsulates the customer traffic by adding an S-Tag containing the configured S-VID value of 100 reserved for that customer within its domain. The traffic is sent to PB Edge Bridge A (PBEB-A). PBEB-A has been configured to assign this traffic (S-VID=100) to a 24-bit Service Instance Identifier (I-SID) value of 10000. The same I-SID value is associated with primary and backup PBB-TE tunnels. Each primary tunnel and backup tunnel is identified using the combination of a PBEB destination MAC address and a Backbone VID (B-VID). This represents a significant difference between PBB-TE and PBB. Recall that with PBB, B-VIDs represent flood domains that interconnect multiple PB networks. With PBB-TE, B-VIDs along with B-DAs define the tunnel.
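The ingress mapping just described can be sketched as follows, using the example values from the text (S-VID 100 mapping to I-SID 10000, with tunnels identified by the pair of egress PBEB address and B-VID). The data structures and names are illustrative and not drawn from any product.

```python
# Illustrative PBEB ingress state for the example in the text.
SVID_TO_ISID = {100: 10000}          # service mapping configured on PBEB-A

# Each I-SID is associated with a primary and a backup PBB-TE tunnel.
# A tunnel is identified by the pair (egress PBEB MAC address, B-VID).
TUNNELS = {
    10000: {
        "primary": ("PBEB-D", 4001),
        "backup":  ("PBEB-D", 4002),
    },
}

def select_tunnel(s_vid, primary_up=True):
    """Map an incoming S-VID to its I-SID and the tunnel currently in use."""
    i_sid = SVID_TO_ISID[s_vid]
    paths = TUNNELS[i_sid]
    b_da, b_vid = paths["primary"] if primary_up else paths["backup"]
    return i_sid, b_da, b_vid

print(select_tunnel(100))                    # (10000, 'PBEB-D', 4001)
print(select_tunnel(100, primary_up=False))  # failover: (10000, 'PBEB-D', 4002)
```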
Figure 15. Primary and backup PBB-TE tunnels. The primary tunnel is identified by (PBEB-D, B-VID 4001)/(PBEB-A, B-VID 4001) and the backup tunnel by (PBEB-D, B-VID 4002)/(PBEB-A, B-VID 4002); two services, I-SID 10,000 (S-VID 100/110) and I-SID 20,000 (S-VID 200/220), traverse Provider Backbone Edge Bridges (PBEB) A and D via Provider Backbone Core Bridges (PBCB) B and C.

In this case, PBEB-A encapsulates S-VID 100 traffic by adding a B-DA value of PBEB-D, a B-SA value of PBEB-A, a B-VID value of 4001 (the primary tunnel), and the I-SID value of 10000. This MAC header encapsulated traffic is forwarded to PB Core Bridge C (PBCB-C). PBCB-C has been configured not to learn or flood traffic on B-VID 4001, which has been reserved for PBB-TE use. The fact that PBB-TE does not learn or flood is an important point. Each PBCB device must be provisioned with forwarding database entries in order to properly forward traffic within tunnels. The PBCB-C forwarding table contains an entry for {PBEB-D, B-VID 4001}, and the traffic is forwarded on the particular port in the direction of PBEB-D.

Primary and backup PBB-TE tunnels are pre-configured by a management system. This enables the operator to engineer traffic according to path, bandwidth, and service requirements. Customers and services are associated with tunnels taking into account the aggregate Committed Information Rate (CIR) and Excess Information Rate (EIR) bandwidth requirements. Tunnels are monitored through the use of IEEE 802.1ag Connectivity Fault Management (CFM) Continuity Check Messages (CCMs). CCM control frames are sent and received every few milliseconds across PBB-TE tunnels. If the primary tunnel experiences a fault, the tunnel endpoints automatically begin using the backup tunnel. The forwarding database entries are pre-configured along the backup path to minimize failover and restoration times.

PBEB-D receives the traffic and removes the MAC header encapsulation. Since S-VID values are only locally significant per PB network, a provider has the flexibility to translate the S-VID value. In this case, PBEB-D has been configured to associate I-SID 10000 with S-VID 110. In Figure 15, traffic from the tunnel is de-encapsulated and the S-VID is re-mapped to the value of 110. The traffic is forwarded to the PB attached to the destination site. The S-Tag encapsulation is removed by the PB device and the original customer frame is delivered to the customer's remote location.
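A simplified sketch of this provisioned forwarding and protection switching follows. The static forwarding entries mirror the example in the text; the CCM watchdog uses the common CFM convention of declaring a fault after 3.5 missed intervals, which is an assumption here, and all names, ports, and timer values are illustrative.

```python
import time

# Illustrative PBCB forwarding database: static entries keyed on (B-DA, B-VID).
# Entries are provisioned by the management system; there is no learning or flooding.
FDB = {
    ("PBEB-D", 4001): "port-3",   # primary tunnel toward PBEB-D
    ("PBEB-D", 4002): "port-5",   # backup tunnel toward PBEB-D
}

def forward(b_da, b_vid):
    """Forward only on an exact provisioned match; otherwise drop (never flood)."""
    return FDB.get((b_da, b_vid))   # None means discard

class TunnelMonitor:
    """Toy CCM watchdog: declare the primary tunnel down after 3.5 missed intervals."""
    def __init__(self, ccm_interval=0.01):            # e.g. 10 ms CCMs
        self.ccm_interval = ccm_interval
        self.last_ccm = time.monotonic()

    def ccm_received(self):
        self.last_ccm = time.monotonic()

    def primary_up(self):
        return (time.monotonic() - self.last_ccm) < 3.5 * self.ccm_interval

monitor = TunnelMonitor()
b_vid = 4001 if monitor.primary_up() else 4002        # endpoint selects the tunnel
print(forward("PBEB-D", b_vid))                       # port-3 while primary is healthy
```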
Figure 16 provides an analysis of the benefits and limitations of PBB-TE.

Figure 16. PBB-TE benefits and limitations

Benefits:
> 16M services
> Transparency of multiple 4K S-VID ranges
> No learning or flooding in the core network
> Customer MAC addresses tunneled on the provider network, enhancing security and scalability
> Maximum utilization of the core network with engineered paths
> Separation of customer, provider, and backbone control domains
> Customer and PB Layer 2 control protocols are transported through the PBB network
> 802.1ag CCMs monitor primary and backup tunnels
> Sophisticated management system to provision core tunnels

Limitations:
> PBEB devices learn all backbone and transiting customer MACs

Occasionally, services are impacted by soft failures. A soft failure usually consists of configuration or operator errors. For instance, a set of VIDs on a particular device or port may be disabled by an administrator. Upon initial inspection, some troubleshooting techniques may conclude that the port is active and other traffic is passing normally. This result may lead the operator to look elsewhere for the problem. A proposed project within IEEE 802.1, known as Data Dependent CFM, deals with these configuration errors. These advancements will further enhance the ability of Carrier Ethernet transport technologies, such as PBB-TE, to provide lower operational costs and enhanced resiliency.

Summary

Carrier Ethernet represents an exciting, rapidly growing market opportunity. The vast majority of domestic and international service providers and multiple system operators are either deploying or investigating Carrier Ethernet rollouts. As more and more customers and services are turned up, larger-scale transport technologies are required. PBB-TE, in conjunction with IEEE 802.1ah PBB, has risen to meet the challenges and overcome the limitations of legacy techniques. PBB-TE and PBB offer large-scale, high-performance native Ethernet alternatives for efficient, transparent Layer 2 provider networks. Ciena offers competitive, innovative True Carrier Ethernet solutions to meet the present and future needs of these deployments.