CE 2.0 Service Management Life Cycle White Paper

July 2014
Table of Contents

1 Introduction and Overview
  1.1 Abstract
  1.2 Background
  1.3 Document Objective
2 Carrier Ethernet 2.0 Introduction
  2.1 CE 2.0 Service Architecture
  2.2 CE 2.0 Service OAM Architecture
3 Service Life Cycle for CE 2.0 Services
4 Service Fulfillment
  4.1 Service Configuration and Activation
  4.2 Service Activation Testing
5 Service Assurance
  5.1 Performance Management
  5.2 Fault Management
6 Summary and Conclusions
7 About the MEF
8 Glossary and Terms
9 References
10 Editors and Contributors

Figures

Figure 1: Metro Ethernet Forum Generations Framework
Figure 2: Basic Network Reference Model
Figure 3: Maintenance Entities
Figure 4: Service Management Life Cycle for Carrier Ethernet 2.0 Services
Figure 5: Service Activation Testing Methodology
Figure 6: Network Management Functions for Service Assurance
Figure 7: Performance Management Loss Measurement
Figure 8: MEF 23.1 Performance Specifications for CE 2.0 E-Access Services
Figure 9: Availability and Resiliency Attributes Hierarchy
Figure 10: Connectivity Check Overview
Figure 11: Ethernet OAM Layer Interaction
1 Introduction and Overview

1.1 Abstract

The MEF introduced the Carrier Ethernet 2.0 (CE 2.0) generation framework to provide service providers with a new standardized approach to delivering Carrier Ethernet-based services. This white paper describes the life cycle of a CE 2.0 service instance from a service management perspective. Specifically, the paper discusses:

- How CE 2.0 service management improves the day-to-day operations of service providers
- How service providers can use MEF specifications to successfully deliver CE 2.0 services while achieving operational efficiencies that translate into OPEX savings

1.2 Background

This paper targets business stakeholders, technical managers, network designers and operations personnel of Ethernet-based service providers worldwide. The main purpose of this white paper is to help service providers understand and use the latest CE 2.0 management concepts. It also aims to explain the different stages of the service management life cycle and the benefits of following the MEF's standardized approach.

The goal of this white paper is to introduce the CE 2.0 service management life cycle in the context of the related MEF management technical specifications (TS) and implementation agreements (IA). Special attention is given to the relationship between the MEF specifications and the work of other standards developing organizations (SDOs), specifically IEEE 802.1ag (now incorporated into IEEE 802.1Q-2011), ITU-T Y.1731 and ITU-T Y.1564.

1.3 Document Objective

This document provides a summary view of the different stages in the life cycle of a CE 2.0-based service instance, from its initial activation, through on-going operation including performance monitoring (PM) and fault management (FM), to end of life. Furthermore, the document details the specific advantages of using a standardized approach to service management as defined by the MEF.
2 Carrier Ethernet 2.0 Introduction

Carrier Ethernet services are enabling service provider evolution of legacy services as well as supporting enterprise business communications, cloud computing and mobility services. Carrier Ethernet 2.0, launched in February 2012, adds five new service types to CE 1.0, bringing the total to eight. These eight service types encompass the port-based and VLAN-based E-Line, E-LAN, E-Tree and E-Access services. Carrier Ethernet 2.0 is founded on three specific tenets: multiple classes of service (Multi-CoS), interconnectedness and manageability.

Figure 1: Metro Ethernet Forum Generations Framework

Figure 1 provides a summary of the MEF's Generations Framework for Carrier Ethernet. Carrier Ethernet 1.0 focused on standardized Ethernet services within a single provider's network. Carrier Ethernet 2.0 enhances the work of CE 1.0 by extending the specifications to address multiple classes of service, standards for delivering Carrier Ethernet services across multiple, interconnected networks, and overall service management of Carrier Ethernet services, in particular over multi-provider networks. The Multi-CoS, manageability and interconnect features apply to each of the eight service types.

Multi-CoS

Multi-CoS defines standardized performance objectives across geographically defined performance tiers, such that long-haul services have different target objectives than metro-based services, given the propagation delay inherent in the distances covered by each performance tier.¹ In addition, MEF specifications have compiled data from a number of public resources to provide specific performance requirements per application type (for example, VoIP, interactive video, point-of-sale, etc.).

¹ See MEF 23.1 for the definition of performance tiers.
Interconnect

Just as the success of the telephone voice system was based on standards enabling the interconnectivity of public switched telephone networks, so is the success of Carrier Ethernet based on standards enabling the interconnectivity of Carrier Ethernet networks, so that one service can be delivered across multiple operators' networks without compromising features such as Multi-CoS and manageability.

Manageability

Finally, manageability ensures standards for both fault management and performance monitoring of any CE 2.0 service, whether it is provided by a single operator or traverses multiple operators' networks. Manageability is critical in delivering an assured service that meets its objectives for availability and performance. Furthermore, these features support service providers in differentiating their services to their end customers, providing the necessary service level agreement (SLA) reporting, maintaining their own service level objectives and minimizing the operations costs involved in the troubleshooting and maintenance of CE 2.0 services (e.g., truck rolls).

The Future

The MEF's vision is to enable automation of the full network service life cycle: service ordering, activation, modification and termination. The life cycle will be programmatically controlled through standardized APIs and service orchestration that are abstracted from the technologies and OSI network layers used. Based on a strategy of generalizing service information models and service definitions for all connectivity services, from optical transport to IP, the MEF seeks to make this vision of efficient and effective automation of the network service life cycle a reality.

2.1 CE 2.0 Service Architecture

The MEF has defined a Carrier Ethernet service architecture, which the interested reader will find in MEF 6.1.² As depicted in Figure 2, the network reference model defines Ethernet services that transport subscriber Ethernet frames across a service provider's Carrier Ethernet network (CEN). The service provider is responsible for the performance and availability of the service between the user-to-network interface (UNI) demarcation points. Ethernet service frames are transported across the CEN through virtual connections. MEF 6.1 defines three service types: E-Line, a point-to-point Ethernet Virtual Connection (EVC); E-LAN, a multipoint-to-multipoint EVC; and E-Tree, which uses a rooted multipoint EVC.

The MEF's service architecture is built on virtual connections established over lower-layer transport services; therefore, Ethernet service frames can be transported over a variety of technologies such as SONET/SDH, MPLS, bundled copper and fiber. The underlying transport mechanisms may vary on a link-by-link basis. Thus, service providers can offer MEF-certified services independent of the underlying transport technology.

² The replacement Technical Specification for MEF 6.1 (to be named MEF 6.2) is to be published in July 2014.
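As an informal illustration of these three service types, the sketch below (hypothetical Python, not part of any MEF specification) captures the connectivity rule that distinguishes an E-Tree from an E-Line or E-LAN:

```python
from enum import Enum

class EvcType(Enum):
    E_LINE = "point-to-point EVC"           # exactly two UNIs
    E_LAN = "multipoint-to-multipoint EVC"  # two or more UNIs, any-to-any
    E_TREE = "rooted multipoint EVC"        # root and leaf UNIs

def may_exchange_frames(evc_type: EvcType, a_is_root: bool, b_is_root: bool) -> bool:
    """Whether two UNIs attached to the same EVC may exchange service frames.

    For an E-Tree, leaf-to-leaf forwarding is not permitted: at least one
    endpoint of any exchange must be a root.
    """
    if evc_type is EvcType.E_TREE:
        return a_is_root or b_is_root
    return True  # E-Line and E-LAN allow any attached UNIs to communicate
```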
Figure 2: Basic Network Reference Model

2.2 CE 2.0 Service OAM Architecture

A Carrier Ethernet Network (CEN), as depicted in Figure 3, is made up of one or more administrative domains. The MEF, ITU-T Study Group 15, and IEEE 802.1 are developing a common management model to span these management domains. The CEN is partitioned into eight maintenance levels, starting with the outermost level of the subscriber, followed by the test, EVC, service provider, operator, UNI and ENNI levels. Service providers have end-to-end responsibility for the service, while operators only have responsibility within their own access network. Based on an entity's particular maintenance domain and level of responsibility, each entity has a particular level of visibility into the network.

Figure 3: Maintenance Entities
The common management model defines functional entities for message transport. A maintenance entity group (MEG) is a set of maintenance entities that exist in the same administrative domain. The specific management interfaces are known as MEG end points (MEPs) and MEG intermediate points (MIPs). A MEP is an instantiation of a management interface and is associated with the interface of a network element, such as a network interface device (NID), that is part of a data service path. It is located at the endpoint of a data service path and is paired with a peer MEP to bind the path. A MIP is a simplified instantiation of a MEP. It is located within a network element, such as an Ethernet switch, that is also part of the data service path between two MEPs. As a result, multiple MEPs and MIPs for different data service paths can exist within a single network element (see the data-model sketch below). Since the network elements are connected together, the MEPs within a data service path communicate with one another using specialized fault management and performance management messages. These messages are referred to as Service Operation, Administration and Maintenance (OAM) frames.

3 Service Life Cycle for CE 2.0 Services

Figure 4 highlights the business processes and workflow for the service operation life cycle of a Carrier Ethernet 2.0 service, which begins at step 3 and continues through step 6. In the context of this white paper, we will examine the service management workflow. It begins once an Ethernet service order has been filled, whether it is a new service order or a modification of an existing service instance. The following sections detail the processes that fall into the categories of service fulfillment and service assurance.

Figure 4: Service Management Life Cycle for Carrier Ethernet 2.0 Services
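To make the Service OAM entities of Section 2.2 more concrete, the following minimal data model (a hypothetical Python sketch; all class and field names are illustrative) shows how multiple MEPs and MIPs for different data service paths can coexist within a single network element:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MEP:
    """MEG End Point: terminates a data service path and binds it to a peer."""
    meg_id: str              # MEG this end point belongs to
    level: int               # maintenance level
    peer_mep_ids: List[str]  # peer MEP(s) bounding the path

@dataclass
class MIP:
    """MEG Intermediate Point: sits on the path between two MEPs."""
    meg_id: str
    level: int

@dataclass
class NetworkElement:
    """A NID or Ethernet switch hosting maintenance points."""
    name: str
    meps: List[MEP] = field(default_factory=list)
    mips: List[MIP] = field(default_factory=list)

# Example: a switch that terminates one service path and is intermediate
# on another, so it hosts both a MEP and a MIP.
switch = NetworkElement(
    name="metro-switch-1",
    meps=[MEP(meg_id="evc-42", level=4, peer_mep_ids=["mep-uni-b"])],
    mips=[MIP(meg_id="evc-17", level=4)],
)
```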
Note: the MEF's Service Operations Committee is working on complementary areas including a Generic Service Life Cycle Process Model, a Carrier Performance Reporting Framework, Ethernet Serviceability, Standardized Ethernet Service Order Specifications, a Standardized Ethernet Product Catalog, etc.

4 Service Fulfillment

Service fulfillment encompasses the activities necessary to provide services that meet customer demands in a timely manner. For the service provider, this equates to deploying a solution that meets the customer requirements, including SLAs, which specify the requirements for performance, availability and throughput. In the context of the service management life cycle, service fulfillment includes the implementation of the service definition (service configuration and activation) and the testing and hand-off of the service (service activation testing).

4.1 Service Configuration and Activation

Service configuration processes include the activities necessary to allocate, implement and configure the resources needed to meet customer requirements, including resource provisioning. For example, with an E-Line service, resource provisioning would be performed at each UNI-Network (UNI-N) demarcation point, where customer ingress and egress traffic engineering and conditioning occur, in accordance with the service level specification (SLS). The resources, or equipment, that provide the customer's service must be provisioned to enable end-to-end service delivery with the required service attributes. Examples of configuration attributes defined at the UNI-N demarcation include:

- IEEE 802.3 PHY medium, speed and mode
- UNI interfaces and EVC segments
- UNI-to-EVC associations
- Bandwidth profiles

It is important to note that both service configuration and Service OAM (SOAM) configuration are performed at this stage of the workflow, to enable service assurance mechanisms the moment the service is deployed to the customer. Examples of SOAM configuration for Ethernet services include:

- Configuration of maintenance domains (MDs), maintenance associations (MAs), MEGs, MEPs and MIPs
- Configuration of fault management functions, including Continuity Check, Alarm Indication Signal and Fault Alarm Generation
- Configuration of performance monitoring functions, including Frame Loss and Frame Delay measurements and event generation for threshold crossing alerts (TCAs)

4.2 Service Activation Testing

Before a service provider deploys an Ethernet service to a customer (i.e., during the installation phase), the service must be validated to ensure the participating network devices have been configured properly (e.g., connectivity is established) and that the service meets the defined SLS. This testing also
provides baseline service measurements to create a Carrier Ethernet service birth certificate. The service activation testing methodology, as illustrated in Figure 5, addresses these requirements.

One aspect of service fulfillment is the process of testing the end-to-end service. End-to-end testing involves testing specific service attributes to ensure all components are operating within normal parameters and that the service is delivered in accordance with its service level specification. Figure 5 illustrates the complete service activation testing methodology. The first phase is the setup of the test architecture; this phase ensures that the Ethernet test devices have connectivity to one another. The next two phases, the Service Configuration and Service Performance tests, are described in detail in the following paragraphs. The final phases of the methodology return the service under test to its pre-test state and complete the test report.

Figure 5: Service Activation Testing Methodology

The Service Configuration tests validate that the configuration of the service is in accordance with the service definition. Service attributes such as CE VLAN ID and CE VLAN CoS ID preservation, unicast/multicast/broadcast frame delivery and bandwidth profiles need to be validated to make sure they were configured as per the service definition. The Service Configuration tests are of short duration.

The Service Performance tests are of a longer duration than the Service Configuration tests. Their goal is to ensure the quality of the service being tested. By generating test traffic, measuring and calculating the resulting service performance attributes, and comparing these results against predetermined acceptance criteria, one can verify that the service meets the SLA accepted by the subscriber.

The service birth certificate is based on the test results and data from both the Service Configuration and the Service Performance tests. Creating a concise configuration and performance report facilitates the validation of future service performance against an established baseline.
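As a minimal sketch of how Service Performance test results might be evaluated against SLS acceptance criteria (all names and values here are hypothetical, not taken from any MEF specification):

```python
def evaluate_service_performance(measured: dict, acceptance: dict) -> dict:
    """Compare measured performance attributes against SLS acceptance criteria.

    Both dicts map attribute names to numeric values; every criterion here is
    treated as an upper bound. The result feeds the service birth certificate.
    """
    results = {}
    for attribute, limit in acceptance.items():
        value = measured[attribute]
        results[attribute] = {"measured": value, "limit": limit,
                              "passed": value <= limit}
    return results

# Hypothetical example for an E-Line under test
birth_certificate_data = evaluate_service_performance(
    measured={"flr_percent": 0.003, "fd_ms": 8.2, "ifdv_ms": 1.1},
    acceptance={"flr_percent": 0.010, "fd_ms": 10.0, "ifdv_ms": 3.0},
)
assert all(r["passed"] for r in birth_certificate_data.values())
```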
Example performance attributes that are measured and validated during service activation include:

- Frame Loss Ratio Performance
- Frame Delay and Mean Delay Performance
- Frame Delay Range and Inter-Frame Delay Variation Performance

While the methodology defined in this section is used to turn up services, it can also be used throughout the service life cycle to assure the service and validate the resolution of network performance and availability incidents.

5 Service Assurance

When service fulfillment is successfully completed, the service's configuration has been confirmed and validated in accordance with the SLA defined with the end customer; at this point, the service becomes active and is passing customer traffic. Service assurance includes the activities necessary to ensure the on-going monitoring and maintenance of the service's health and SLA compliance. This includes proactive performance management functions, reactive fault management functions, and on-demand diagnostic and troubleshooting activities. Figure 6 highlights the different network management toolsets that enable service assurance.

Figure 6: Network Management Functions for Service Assurance

5.1 Performance Management

Performance management is a proactive and on-demand network management function that gathers and analyzes technical indicators and statistical data for the purposes of monitoring and detecting service degradation, warning of impending degradation and possible SLA breaches, and correcting the behavior and effectiveness of the network. Furthermore, performance management is used to support network planning, provisioning, maintenance and quality of experience for the delivered service. Typically, performance management involves the recurring collection of counters and statistics from network resources, or their element management systems (EMS), in the service delivery chain.
Performance management uses near real-time analysis as well as longer-term historical trend analysis of these counters and statistics to evaluate service quality and availability. Additionally, performance management may evaluate service quality and notify operations personnel through the configuration of thresholds and the generation of threshold crossing alerts, as well as performance events, forecasts and performance baseline deviations.

Performance management within the MEF's Service OAM framework borrows from ITU-T Recommendation Y.1731 and defines how performance monitoring is implemented in a Carrier Ethernet network. MEF 35 defines the ability to monitor end-to-end service performance and SLA compliance of Ethernet Virtual Connections (EVCs) across maintenance domains. Monitored performance attributes include:

- Frame Loss Ratio
- Frame Delay, Frame Delay Range and Mean Frame Delay
- Inter-Frame Delay Variation
- Availability and Resiliency

Frame Loss Ratio (FLR) is a measure of the number of lost frames between the ingress UNI and the egress UNI, expressed as a percentage. FLR measurements use Loss Measurement Messages (LMMs) and Loss Measurement Replies (LMRs), sent between MEPs with recent non-management traffic counter information. The counters are used to estimate the Frame Loss Ratio of the non-management traffic. This measurement is only useful in point-to-point topologies; flooding makes the measurement useless in other topologies. MEPs must be capable of generating and receiving LMMs and LMRs for E-Line services. MEPs must also be able to calculate and report the one-way Frame Loss Ratio based on frame counter information and received LMMs and LMRs. Figure 7 presents a graphical example of a loss measurement.

Figure 7: Performance Management Loss Measurement
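As a simplified sketch of the loss computation above, following the ITU-T Y.1731 counter scheme (each LMR returns the counters TxFCf, RxFCf and TxFCb, and the local MEP records its own receive counter RxFCl; counter wrap-around is ignored here):

```python
def frame_loss_ratios(prev: dict, curr: dict) -> tuple:
    """Far-end and near-end FLR (%) from two successive LMM/LMR counter samples.

    Each sample holds TxFCf (local tx), RxFCf (peer rx), TxFCb (peer tx)
    and RxFCl (local rx). Loss is the difference between what was sent
    and what was received over the measurement interval.
    """
    tx_far = curr["TxFCf"] - prev["TxFCf"]    # frames sent toward the peer
    rx_far = curr["RxFCf"] - prev["RxFCf"]    # frames the peer received
    tx_near = curr["TxFCb"] - prev["TxFCb"]   # frames the peer sent back
    rx_near = curr["RxFCl"] - prev["RxFCl"]   # frames received locally
    far_end_flr = 100.0 * (tx_far - rx_far) / tx_far      # local -> peer
    near_end_flr = 100.0 * (tx_near - rx_near) / tx_near  # peer -> local
    return far_end_flr, near_end_flr
```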
Frame Delay (FD) is a measure of the time required to transmit a service frame from the ingress UNI to the egress UNI, where the performance is expressed as a measure of the delays experienced by service frames belonging to the same Class of Service (CoS) instance. Mean Frame Delay (MFD) is the arithmetic mean of the delays experienced by a set of service frames.

One-way Frame Delay measurements (1DM) use periodic 1DM messages sent by the source MEP to a destination MEP. The message contains a timestamp from the source MEP. The receiving MEP computes the one-way delay by subtracting the transmit timestamp from the receive timestamp created upon receipt of the 1DM. The 1DM protocol assumes that all MEPs are operating with synchronized clocks. MEPs must be capable of computing one-way delay end-to-end across E-Line services using successive 1DM messages.

Two-way Frame Delay measurements use two message types: the Delay Measurement Message (DMM) and the Delay Measurement Reply (DMR). The DMM message is sent by the source MEP to a destination MEP and contains a timestamp from the source MEP. The receiving MEP replies with a DMR message containing a copy of the received DMM timestamp and, optionally, a receive timestamp taken at DMM reception and a transmit timestamp created at the time of DMR transmission. The two-way delay can then be computed by the source MEP as the difference between its current timestamp and its original timestamp as received back in the DMR message. The addition of the transmit and receive timestamps allows the destination MEP's processing time to be excluded from the calculation. Two-way delay calculations do not require synchronized clocks. MEPs must be capable of computing two-way delay end-to-end across an Ethernet service using successive DMM/DMR messages.

Inter-Frame Delay Variation (IFDV) is a measure of the variation in the Frame Delay experienced by a pair of service frames belonging to the same Class of Service (CoS) instance.

Figure 8: MEF 23.1 Performance Specifications for CE 2.0 E-Access Services
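The two-way delay computation described above can be sketched as follows (a simplified illustration of the DMM/DMR timestamp arithmetic; variable names are hypothetical):

```python
def two_way_frame_delay(tx_dmm: float, rx_dmm: float,
                        tx_dmr: float, rx_dmr: float) -> float:
    """Two-way frame delay from one DMM/DMR exchange.

    tx_dmm: timestamp placed in the DMM by the source MEP (source clock)
    rx_dmm: timestamp taken by the peer MEP at DMM reception (peer clock)
    tx_dmr: timestamp taken by the peer MEP at DMR transmission (peer clock)
    rx_dmr: time at which the source MEP receives the DMR (source clock)

    Subtracting (tx_dmr - rx_dmm) excludes the destination MEP's processing
    time. No clock synchronization is needed, because each difference uses
    timestamps taken from a single clock.
    """
    return (rx_dmr - tx_dmm) - (tx_dmr - rx_dmm)
```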
Frame Delay Range (FDR) is the difference between the Frame Delay Performance values corresponding to two different percentiles (e.g., the 90th percentile minus the 20th percentile). FDR can be seen as a characterization of the variation in the delays experienced by different service frames mapped to the same EVC that have the same Class of Service Name. Since FD can be expressed in percentiles, FDR provides a measure of the extent of delay variability.

The MEF 23.1 IA [6] requires support for at least one of Frame Delay (FD) or Mean Frame Delay (MFD); either one or both of these attributes can apply to a given SLS. Similarly, for Inter-Frame Delay Variation (IFDV) and Frame Delay Range (FDR) Performance, the MEF 23.1 IA [6] requires support for at least one of these two attributes. Figure 8 above presents the MEF 23.1 performance specifications for CE 2.0 E-Access services.

The same protocols and messages used for measuring Frame Delay can be used for measuring Inter-Frame Delay Variation.

Figure 9: Availability and Resiliency Attributes Hierarchy
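Returning to Frame Delay Range, the percentile computation described above can be sketched as follows (nearest-rank percentiles over a sample of per-frame delays; purely illustrative):

```python
import math

def percentile(sorted_values: list, p: float) -> float:
    """Nearest-rank percentile of an ascending-sorted list."""
    rank = max(1, math.ceil(p / 100.0 * len(sorted_values)))
    return sorted_values[rank - 1]

def frame_delay_range(delays_ms: list, high: float = 90, low: float = 20) -> float:
    """FDR: difference between two Frame Delay percentiles (e.g. 90th - 20th)."""
    ordered = sorted(delays_ms)
    return percentile(ordered, high) - percentile(ordered, low)

# Example: delays clustered near 8 ms with a few slow outliers
print(frame_delay_range([7.9, 8.0, 8.1, 8.2, 8.3, 9.5, 12.0, 8.0, 8.1, 8.2]))
```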
Availability is defined as the percentage of time over a stated period that the Frame Loss Ratio is below a set threshold. It is used to indicate the percentage of time that an Ethernet service is usable. MEPs must be capable of calculating and reporting the availability of services based on frame loss measurements.

Resiliency performance is defined as the number of High Loss Intervals (HLIs) and Consecutive High Loss Intervals (CHLIs) in a time interval T. An HLI is a small time interval contained in T with a high Frame Loss Ratio; a CHLI is a sequence of such small time intervals contained in T, each with a high Frame Loss Ratio. The resiliency performance attributes (HLI and CHLI) allow a CEN operator to offer services that are resilient to failures affecting a UNI or EVC, with limits on the duration of short-term disruptions, and to apply constraints such as diversity.

5.2 Fault Management

Fault management is a proactive and on-demand network management function that enables the detection, isolation and correction of abnormal operation. Typically, such abnormal operation is detected during proactive service monitoring, when a fault event is detected by a network device in the service delivery chain. Once the failure is identified, a service trouble report can be created and assigned to a technician so that on-demand isolation and corrective procedures can be taken.

Fault management within the Service OAM framework uses the protocols of IEEE 802.1Q-2011 Clause 22 (formerly known as IEEE 802.1ag) Connectivity Fault Management (CFM) and ITU-T G.8013/Y.1731, as defined in MEF 30.1 [7]. These protocols aid in the ability to monitor the status and connectivity of EVCs across maintenance domains, as well as provide OAM functions for troubleshooting service impairments.

Figure 10: Connectivity Check Overview

Faults in a network are identified by sending Continuity Check Messages (CCMs) at regular intervals between adjacent MEPs, as seen in Figure 10. If the messages do not arrive at a MEP, the receiving MEP declares a loss-of-connectivity defect. CCMs are one-way messages, similar to keep-alive messages. CCMs are capable of detecting inadvertently cross-connected virtual connections, where one MEP sees a CCM from a MEP associated with a different Ethernet virtual connection. When a CCM interval is configured, MEPs must generate CCMs periodically at that interval.
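A minimal sketch of the loss-of-continuity check (using the 3.5x CCM-interval timeout from IEEE 802.1Q CFM; function and variable names are illustrative):

```python
import time

LOSS_MULTIPLIER = 3.5  # a defect is declared after 3.5 missed CCM intervals

def loss_of_continuity(last_ccm_rx: float, ccm_interval_s: float,
                       now: float = None) -> bool:
    """True if no CCM has arrived from the peer MEP for 3.5 intervals."""
    now = time.time() if now is None else now
    return (now - last_ccm_rx) > LOSS_MULTIPLIER * ccm_interval_s

# Example: with a 1 s CCM interval, 4 s of silence raises a defect
assert loss_of_continuity(last_ccm_rx=100.0, ccm_interval_s=1.0, now=104.0)
```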
An Alarm Indication Signal (AIS) can be used to suppress alarms following the detection of defect conditions, such as signal fail, when CCMs are enabled. In addition, a Remote Defect Indication (RDI) can be used by a MEP to communicate to its remote peer MEPs that a defect condition has been encountered. RDI is a 1-bit field carried in CCM frames.

When service providers detect a fault in the network, they can initiate the transmission of a finite number of periodic Loopback Messages (LBMs) to another MEP. Upon receipt of an LBM, the receiving MEP replies with a Loopback Reply (LBR) message, similar to a ping. MEPs must be capable of generating LBMs at the request of the service provider and of responding to LBMs with LBRs upon receipt.

In order to isolate faults in a network, a MEP can, on demand, send a Link Trace Message (LTM) toward another MEP. MIPs along the path respond with a Link Trace Reply (LTR) message and forward the LTM onward. In this way, the transmitting MEP receives multiple replies and learns the path taken by the LTM and the last reachable point in the network. MEPs must be capable of generating LTMs at the request of the service provider and of responding to LTMs with LTRs upon receipt.

Finally, there are two remaining on-demand fault management functions for diagnostic testing and troubleshooting. The Locked Signal function is used to communicate an administrative lock to the far-end MEP, resulting in an interruption of data traffic. This allows a MEP receiving Locked Signal (LCK) frames to distinguish between defect conditions and an administrative locking action. This feature can be used by other OAM functions that require a MEP to be administratively locked, such as out-of-service testing. The Test Signal function can be used to send test (TST) frames containing test patterns and checksums for diagnostic throughput, frame loss and bit error testing.

Throughout the lifetime of the fault condition, the service provider should have tracking and monitoring capability in order to view the real-time status of the fault condition from an operational standpoint. When the fault condition has been corrected, the service trouble report can be closed.

6 Summary and Conclusions

As service providers expand their CE 2.0 service offerings, customers are demanding service level agreements with an increased ability to monitor and manage their service performance. This is driving service providers to expand their Service OAM capabilities and business processes beyond basic connectivity and limited performance assurance. In addition, these Service OAM capabilities require a common framework for service activation, service monitoring and service troubleshooting, as shown in Figure 6.

When enabling a new service, service providers focus first on service fulfillment, which includes the processes and tools used to provision and turn up new services to meet customer requirements. Service fulfillment consists of service configuration and service activation. Service configuration includes resource provisioning, connection configuration (e.g., IEEE 802.3 PHY, UNI and EVC parameters) and management configuration (e.g., maintenance domains, MEPs, MIPs, and fault and performance management thresholds). Once the service is configured and prior to it being placed in service, service activation processes and tools are used to verify that the service complies with its service level specification. In particular, two types of tests are used: Service Configuration tests, which verify that
the connection offers end-to-end connectivity and that the service attributes are configured as per the service definition, and Service Performance tests, which measure the Frame Loss Ratio, Frame Delay/Mean Delay, Frame Delay Range and Inter-Frame Delay Variation Performance of the connection. Test results from both the Service Configuration and Service Performance tests are combined to create the service birth certificate. Creating a concise performance report facilitates future performance validation against a known baseline.

Figure 11: Ethernet OAM Layer Interaction

Service providers use the fault management protocols defined in the IEEE 802.1Q-2011 (formerly known as IEEE 802.1ag) Connectivity Fault Management (CFM) and ITU-T G.8013/Y.1731 standards, as defined in MEF 30.1 [7]. Continuity Check Messages, similar to pings, identify faults. When a fault is identified, service providers can use Loopback Messages to verify the fault. Finally, service providers can use Link Trace Messages, analogous to traceroute, to isolate the fault. These messages can be exchanged between network elements supporting the IEEE, ITU-T and MEF Service OAM standards.

Service providers use the performance management measurement and reporting protocols defined in ITU-T G.8013/Y.1731 and the MEF specifications, including MEF 35 [8], to monitor the quality of an Ethernet service and validate SLA compliance. While MEPs are not expected to support all performance management measurements defined in ITU-T G.8013/Y.1731 and MEF 35, they are expected to support a subset. In particular, service providers are typically interested in measuring Frame Loss, one-way and two-way Frame Delay, Inter-Frame Delay Variation (jitter) and Availability. Figure 11 presents a summary view of the different OAM layers presented in this document.

By adopting new business processes and tools to support Ethernet service configuration, service activation, fault management and performance management, service providers can offer high-quality Ethernet services with reduced OPEX.
7 About the MEF

The MEF is a global industry alliance comprising more than 225 organizations, including telecommunications service providers, cable MSOs, network equipment/software manufacturers, semiconductor vendors and testing organizations. The MEF's mission is to accelerate the worldwide adoption of Carrier-class Ethernet networks and services. The MEF develops Carrier Ethernet technical specifications and implementation guidelines to promote interoperability and deployment of Carrier Ethernet worldwide. The MEF's Certification Program has resulted in more than 800 products and services from more than 160 companies, and more than 2000 certified professionals. For more information about the Forum, including a complete listing of all current members, please visit http://www.metroethernetforum.org

8 Glossary and Terms

A glossary of terms used in this document can be found online at http://www.metroethernetforum.org/carrier-ethernet/terms-used-in-mef-specifications

Other terms and abbreviations used in this document:

1DM - One-way Delay Measurement
AIS - Alarm Indication Signal
CCM - Continuity Check Message
CFM - Connectivity Fault Management
CoS - Class of Service
DMM - Delay Measurement Message
DMR - Delay Measurement Reply
EVC - Ethernet Virtual Connection
FD - Frame Delay
FLR - Frame Loss Ratio
FM - Fault Management
HLI - High Loss Interval
IEEE - Institute of Electrical and Electronics Engineers
IFDV - Inter-Frame Delay Variation
ITU-T - Telecommunication Standardization Sector of the International Telecommunication Union
LBM - Loopback Message
LBR - Loopback Reply
LCK - Locked Signal Message
LMM - Loss Measurement Message
LMR - Loss Measurement Reply
LTM - Link Trace Message
LTR - Link Trace Reply
MA - Maintenance Association
MD - Maintenance Domain
MEF - Metro Ethernet Forum
MEG - Maintenance Entity Group
MEP - Maintenance Entity Group End Point
MFD - Mean Frame Delay
MIP - Maintenance Entity Group Intermediate Point
MPLS - Multiprotocol Label Switching
NID - Network Interface Device
OAM - Operations, Administration and Maintenance
PM - Performance Monitoring
RDI - Remote Defect Indication
SDH - Synchronous Digital Hierarchy
SLA - Service Level Agreement
SLS - Service Level Specification
SOAM - Service Operations, Administration and Maintenance
SONET - Synchronous Optical Networking
TCA - Threshold Crossing Alert
TST - Test Signal Message
UNI - User-to-Network Interface
UNI-N - User-to-Network Interface - Network

9 References

[1] Ethernet Services Definitions - Phase 2, MEF 6.1, MEF, April 2008.
[2] Phase 2 EMS-NMS Information Model, MEF 7.2, MEF, April 2013.
[3] Ethernet Services Attributes Phase 3, MEF 10.3, MEF, October 2013.
[4] Requirements for Management of Metro Ethernet Phase 1 Network Elements, MEF 15, MEF, November 2005.
[5] Service OAM Requirements & Framework - Phase 1, MEF 17, MEF, April 2007.
[6] Carrier Ethernet Class of Service - Phase 2, MEF 23.1, MEF, January 2012.
[7] Service OAM Fault Management Implementation Agreement: Phase 2, MEF 30.1, MEF, April 2013.
[8] Service OAM Performance Monitoring Implementation Agreement, MEF 35, MEF, April 2012.
[9] ITU-T Recommendation G.8013/Y.1731, OAM Functions and Mechanisms for Ethernet Based Networks, November 2013.
[10] IEEE Std 802.3-2012, IEEE Standard for Ethernet, December 2012.
[11] IEEE Std 802.1Q-2011, Media Access Control (MAC) Bridges and Virtual Bridge Local Area Networks, August 2011.

10 Editors and Contributors

Editor: Bruno Giguère, EXFO

Contributors: Ulrich Kohn, ADVA Optical Networking; Scott Mansfield, Ericsson; Christopher Cullan, InfoVista; Ralph Santitoro, Fujitsu; Mark Fishburn, MEF