HGI-RD016 HOME GATEWAY AND HOME NETWORK DIAGNOSTICS MODULE REQUIREMENTS
Published April 2013
Source: HGI02184_R06.doc
1 CONTENTS
2 Important notices, IPR statement, disclaimers and copyright
2.1 About HGI
2.2 This may not be the latest version of This HGI Document
2.3 There is no warranty provided with This HGI Document
2.4 Exclusion of Liability
2.5 This HGI Document is not binding on HGI nor its member companies
2.6 Intellectual Property Rights
2.7 Copyright Provisions
2.7.1 Incorporating HGI Documents in whole or part within Documents Related to Commercial Tenders
2.7.2 Copying This HGI Document in its entirety
2.8 HGI Membership
3 Acronyms
4 Introduction
4.1 Scope And Purpose
4.2 Definitions Of Terms
5 Troubleshooting Philosophy
5.1 Troubleshooting Philosophy And Key Requirements
5.2 Detecting A Problem
5.3 Service Specific Diagnostics
5.4 Avoiding The Initial Help-desk Call
5.5 Selecting The Service To Be Investigated
5.6 Initiating Troubleshooting
5.7 Non Service-specific Troubleshooting
5.8 Self-Care And Help-Desk Assistance
6 Diagnostics Architecture
7 Types Of Problem
7.1 Examples Of Problems
7.1.1 Connectivity
7.1.2 Reachability
7.1.3 Speed
7.1.4 QoS
7.1.5 HG Hardware Or Software Problem
7.1.6 Service Provisioning
7.1.7 Service Initiation
7.2 Real-World Support Issues
7.2.1 Survey Results
7.3 High Level Requirements
7.3.1 Connectivity
7.3.2 Reachability
7.3.3 Speed
7.3.4 QoS
7.3.5 Home Gateway Hardware Or Software Fault
7.3.6 Service Provisioning
7.3.7 Service Instance Initiation
8 Requirements
8.1 Basic HG Information
8.2 Firmware Management
8.3 HG Self-Test
8.4 HG Hardware and Performance Monitoring
8.5 Duplicate Address Detection
8.6 SWEX Support
8.7 Diagnostics Web Page Support
8.8 WAN Port Control
8.9 Power Saving
8.10 Service Selection
8.11 Service Configuration Testing
8.12 Device Discovery
8.13 HG Discovery
8.14 Topology Discovery
8.15 Connectivity Testing
8.16 Reachability Testing
8.17 Speed Testing
8.17.1 Interface Counters
8.17.2 Accessing a Network Based Speed Checker
8.17.3 HG Based Speed Checker
8.18 Service Class Monitoring
8.19 Instantaneous Interface Rate Monitoring
8.20 Long Term Interface Rate Monitoring
8.21 Wireless Interface Logging
8.22 Multicast
8.23 Voice Specific Diagnostics
8.24 Remote Access Support
9 Management Requirements
9.1 CWMP
9.2 SWEX Management
10 References
2 IMPORTANT NOTICES, IPR STATEMENT, DISCLAIMERS AND COPYRIGHT
This chapter contains important information about HGI and this document (hereinafter This HGI Document).

2.1 ABOUT HGI
The Home Gateway Initiative (HGI) is a non-profit making organization which publishes guidelines, requirements documents, white papers, vision papers, test plans and other documents concerning broadband equipment and services which are deployed in the home.

2.2 THIS MAY NOT BE THE LATEST VERSION OF THIS HGI DOCUMENT
This HGI Document is the output of the Working Groups of the HGI and its members as of the date of publication. Readers of This HGI Document should be aware that it can be revised, edited or have its status changed according to the HGI working procedures.

2.3 THERE IS NO WARRANTY PROVIDED WITH THIS HGI DOCUMENT
The services, the content and the information in this HGI Document are provided on an "as is" basis. HGI, to the fullest extent permitted by law, disclaims all warranties, whether express, implied, statutory or otherwise, including but not limited to the implied warranties of merchantability, non-infringement of third parties' rights and fitness for a particular purpose. HGI, its affiliates and licensors make no representations or warranties about the accuracy, completeness, security or timeliness of the content or information provided in the HGI Document. No information obtained via the HGI Document shall create any warranty not expressly stated by HGI in these terms and conditions.

2.4 EXCLUSION OF LIABILITY
Any person holding a copyright in This HGI Document, or any portion thereof, disclaims to the fullest extent permitted by law (a) any liability (including direct, indirect, special, or consequential damages under any legal theory) arising from or related to the use of or reliance upon This HGI Document; and (b) any obligation to update or correct this technical report.

2.5 THIS HGI DOCUMENT IS NOT BINDING ON HGI NOR ITS MEMBER COMPANIES
This HGI Document, though formally approved by the HGI member companies, is not binding in any way upon the HGI members.
2.6 INTELLECTUAL PROPERTY RIGHTS
Patents essential or potentially essential to the implementation of features described in This HGI Document may have been declared in conformance to the HGI IPR Policy and Statutes (available at the HGI website www.homegateway.org).

2.7 COPYRIGHT PROVISIONS
© 2013 HGI. This HGI Document is copyrighted by HGI, and all rights are reserved. The contents of This HGI Document are protected by the copyrights of HGI or the copyrights of third parties that are used by agreement. Trademarks and copyrights mentioned in This HGI Document are the property of their respective owners.
The content of This HGI Document may only be reproduced, distributed, modified, framed, cached, adapted or linked to, or made available in any form by any photographic, electronic, digital, mechanical, photostat, microfilm, xerography or other means, or incorporated into or used in any information storage and retrieval system, electronic or mechanical, with the prior written permission of HGI or the applicable third party copyright owner. Such written permission is not, however, required under the conditions specified in Section 2.7.1 and Section 2.7.2:

2.7.1 INCORPORATING HGI DOCUMENTS IN WHOLE OR PART WITHIN DOCUMENTS RELATED TO COMMERCIAL TENDERS
Any or all section(s) of HGI Documents may be incorporated into Commercial Tenders (RFP, RFT, RFQ, ITT, etc.) by HGI and non-HGI members under the following conditions:
(a) The HGI Requirements numbers, where applicable, must not be changed from those within the HGI Documents.
(b) A prominent acknowledgement of the HGI must be provided within the Commercial document, identifying any and all HGI Documents referenced and giving the web address of the HGI.
(c) The Commercial Tender must identify which of its section(s) include material taken from HGI Documents and must identify each HGI Document used, and the relevant HGI Section Numbers.
(d) The Commercial Tender must refer to the copyright provisions of HGI Documents and must state that the sections taken from HGI Documents are subject to copyright by HGI and/or applicable third parties.

2.7.2 COPYING THIS HGI DOCUMENT IN ITS ENTIRETY
This HGI Document may be electronically copied, reproduced, distributed, linked to, or made available in any form by any photographic, electronic, digital, mechanical, photostat, microfilm, xerography or other means, or incorporated into or used in any information storage and retrieval system, electronic or mechanical, but only in its original, unaltered PDF format, and with its original HGI title and file name unaltered. It may not be modified without the advance written permission of the HGI.
2.8 HGI MEMBERSHIP
The HGI membership list as of the date of the formal review of this document is: Actility, Advanced Digital Broadcast, Alcatel-Lucent, Arcadyan, Arm, Belgacom, Bouygues Telecom, British Sky Broadcasting Ltd., Broadcom, BT, Cavium, Celeno, Cisco, Deutsche Telekom, Devolo, Dialog Semiconductor, D-Link Corporation, DSP Group, eflow, EnOcean Alliance, Ericsson AB, Fastweb SpA, France Telecom, Hitachi, Huawei, Ikanos, Intel, IS2T, KDDI, KPN, LAN, Lantiq, LG Electronics, Lionic, Makewave, Marvell Semiconductor, Mindspeed, Mitsubishi, MStar, NEC Corporation, Netgear, NTT, Oki Electric Industry, Portugal Telecom, ProSyst, Qualcomm Atheros, Sagemcom, Samsung, Seagate, Sercomm Corp., Sigma, SoftAtHome, Stollmann, Sumitomo, Swisscom AG, Technicolor, Telecom Italia, Telekom Austria, TeliaSonera, Telstra, TNO, Trac Telecoms & Radio Ltd, Vodafone, Vtech, Zarlink, ZTE, ZyXEL.
3 ACRONYMS

ACS     Auto-Configuration Server
ADSL    Asymmetric Digital Subscriber Line
AN      Access Network
ANP     Access Network Provider
ARP     Address Resolution Protocol
ATA     Analogue Terminal Adapter
BRAS    Broadband Remote Access Server
BSP     Broadband Service Provider
CAC     Call Admission Control/Connection Admission Control
CE      Customer Experience
CoS     Class of Service
CPE     Customer Premises Equipment
CPU     Central Processing Unit
CRC     Cyclic Redundancy Check
DHCP    Dynamic Host Configuration Protocol
DNS     Domain Name Server
DRAM    Dynamic Random Access Memory
DSL     Digital Subscriber Line
DSLAM   DSL Access Multiplexer
ED      End Device
EU      End User
GPON    Gigabit Passive Optical Network
GUI     Graphical User Interface
HG      Home Gateway
HGI     Home Gateway Initiative
HN      Home Network
HNID    Home Network Infrastructure Device
I/F     Interface
MAC     Media Access Control
LAN     Local Area Network
NAS     Network Attached Storage
NAT     Network Address Translation
NGN     Next Generation Network
NGA     Next Generation Access
NT(E)   Network Termination (Equipment)
OAM     Operations, Administration & Maintenance
OS      Operating System
OTT     Over The Top (service)
OUI-SN  Organisationally Unique Identifier - Serial Number
PHY     Physical Layer
PLT     Power Line Technology
PM      Performance Monitoring
PPP     Point-to-Point Protocol
PVR     Personal Video Recorder
QoS     Quality of Service
RCPI    Received Channel Power Indicator
RMS     Remote Management System
RTP     Real-time Transport Protocol
SLA     Service Level Agreement
SNMP    Simple Network Management Protocol
SP      Service Provider
SSID    Service Set Identifier
STB     Set Top Box
SWEX    Software Execution Environment
UI      User Interface
UMTS    Universal Mobile Telecommunications System
UPnP    Universal Plug and Play
URL     Uniform Resource Locator
USB     Universal Serial Bus
VAS     Value Added Service
VDSL    Very high speed Digital Subscriber Line
VoD     Video on Demand
VoIP    Voice over IP
WAN     Wide Area Network
xDSL    ADSL and VDSL
4 INTRODUCTION

4.1 SCOPE AND PURPOSE
Broadband Service Providers (BSPs) are increasingly looking to take broadband beyond basic Internet access and provide a range of Value Added Services (VASs). These range from fully managed services, such as IPTV, to others which may simply benefit from some enhanced network treatment, e.g. VoIP or gaming. Getting additional revenue from VASs is especially important when trying to make the business case for higher-speed access technologies such as VDSL and GPON.
These VASs often place greater transport demands on the network in terms of both quantity and quality. These demands cannot always be met without a significant upgrade in network capacity; further, in some parts of the network (such as in the home), the required upgrade may not even be possible. Therefore VASs will not always be delivered in an acceptable way, although QoS management can help greatly.
When a problem does arise, it is essential that it can be resolved as quickly and cheaply as possible, so that the BSP's support costs do not become prohibitive, and the customer experience is not compromised to the point where the user no longer takes the service. If this can be done by the customer himself, then there will be benefits in terms of both speed of problem resolution and lower cost. It is important to be able to diagnose problems on a service-specific basis, because the techniques required may be service dependent, and so that the appropriate service provider is contacted (where a call cannot be avoided). The situation is however complicated by the fact that there may be a variety of services, sometimes delivered concurrently, and the mix will change over time.
This document specifies a set of diagnostic functions in the Home Gateway (HG) to support a flexible troubleshooting architecture. The way in which these are actually used to diagnose problems is left to the Broadband Service Provider, thereby presenting an opportunity for BSP differentiation, and to allow them to integrate the troubleshooting capability into their own processes and back-end systems. While there is a focus on diagnosing QoS-related issues, this is just one of a number of possible problems, and all of these are covered to some degree.
While this document takes an end-to-end architectural view, the main focus is on specifying the requirements needed in the HG to support this architecture. The role of a Cloud-based service in augmenting the embedded diagnostics capability is recognised, but there is no intention to define the details of such a Cloud service, or how it might interact with the HG. HNIDs are included in the architecture, but this document does not include any specific requirements for HNID diagnostics functionality; this may be the subject of future HGI work.
Since at least the initial stages of troubleshooting are intended to be done by the users themselves, there is a need for a local (graphical) interface to allow them to interact with the system, both to invoke tests
and to observe the system status and test results. This user interface could be implemented on any or all of the following: PC, laptop, smartphone and tablet. HGI does not plan to specify this interface. The look and feel, as well as the functionality, of such an interface are left to individual BSPs, as it is a potentially significant differentiator. In the Requirements, this interface is referred to generically as the Local UI.

4.2 DEFINITIONS OF TERMS
The definitions of MUST and SHOULD in this document are as follows:
MUST - A functional requirement which is based on a clear consensus among HGI Service Provider members, and is the base level of required functionality for a given class of equipment.
MUST NOT - This function is prohibited by the specification.
SHOULD - Functionality which goes beyond the base requirements for a given class of HG, and can be used to provide vendor product differentiation (within that class).
Note: These definitions are specific to the HGI and should not be confused with the same or similar terms used by other bodies.
5 TROUBLESHOOTING PHILOSOPHY

5.1 TROUBLESHOOTING PHILOSOPHY AND KEY REQUIREMENTS
Broadband Service Providers are increasingly attempting to provide their customers with Value Added Services, either as direct revenue sources, or to make their overall service bundle more attractive so that churn is reduced. However these services have to be delivered alongside OTT services and content (such as YouTube), and in the presence of increasing in-home traffic such as local content streaming from a PVR or NAS. This can lead to multiple concurrent streams both through the HG, and on the home network.
VASs range from fully managed services, such as IPTV, to those which may simply benefit from some enhanced network treatment, e.g. VoIP or gaming. Although the user may have the greatest expectations of the high-value services, in particular IPTV, there is a need for the diagnostics architecture to be able to cope with a wider range of services, as the BSP is likely to be held responsible for all service shortcomings, whether these services are managed or not.
There are many ways in which service delivery can be compromised. These range from provisioning problems, through connectivity and authorisation, to QoS. The problem might also be with the service platform itself, e.g. temporary unavailability or inadequate capacity to meet a specific service request.
While the HGI QoS scheme ([10]) was designed to be able to cope with a mixture of managed and unmanaged services, QoS alone cannot help when the sustained load offered to the network is greater than the physical layer capacity. Admission control is sometimes suggested as the way to avoid sustained overload, but this presupposes that all applications request permission to start via some signalling protocol, and in a way which indicates their bandwidth requirements. Further, there needs to be a way of rejecting a request when necessary. Few applications actually do this, but even if they did, it would not address the case of physical layers with a time-varying capacity (e.g. wireless and PLT).
Therefore the likelihood of service disruption is increasing, while the revenue opportunities remain extremely limited. It is therefore very important that support calls and costs are kept to the absolute minimum.
Few if any residential services have a parameterised SLA, i.e. there is no contractual agreement with the service provider as to the level of service provided. However customers do expect value added services to provide an appropriate quality of experience, in particular for video. This means, for example, no picture blocking, frame freezing or audio discontinuities. These however will occur from time to time, and when they do, it is highly desirable that the end-user himself is able to analyse and fix or avoid the problem without the need to contact the Service Provider, especially when the problem is in the HG or HN. This is not just a benefit to the SP; the end-user also benefits, as helpdesk costs are part of the cost stack of any product, and so will ultimately impact its price.
The vast majority of customers are, however, non-technical - indeed home networking can be a challenge even for industry professionals - therefore it is necessary to provide some very easy-to-use diagnostic tools to make this approach feasible.

5.2 DETECTING A PROBLEM
The first issue is how to determine that there is a problem. This document acknowledges the key role of the user in problem detection, although the BSP is also likely to have their own service monitoring tools which can proactively detect some service failures, especially those due to the platform itself.
The user is a key element in the diagnostics chain in the HGI approach. If the user is happy with the service delivery, then there is no problem by definition, even if the service is not being delivered optimally. Further, it is hard, if not impossible, for an automated system to decide what the user is doing, and whether his particular mix of services is being delivered satisfactorily. Many services have no formal signalling, so there is no way of knowing what their bandwidth requirements are, let alone the more subtle characteristics of jitter, packet loss etc. Even value added services, such as voice and video, can be very bursty, making it hard to detect abnormal behaviour. Probably the worst thing of all would be to tell the end-user that he had a problem when he didn't, or was maybe not even trying to do something.

5.3 SERVICE SPECIFIC DIAGNOSTICS
Since this is a multi-service environment, the diagnostics capability needs to be able to be service-specific. The end-user perspective is also likely to be service-based. The way in which problems are detected and located may depend on the nature of the transport requirements of the service. From a commercial perspective, services may be provided by different service providers, and it is important that if a support call is needed, it is directed to the appropriate Application Service Provider.
However the approach used here is based on service type rather than service instance. Detecting a service type can be relatively easy; it is harder to detect every instance. This greatly simplifies the solution and will hardly detract from the usefulness of the technique, as multiple simultaneous VASs of exactly the same type will be a rare occurrence. The techniques will still work in that particular case, but it may be more difficult to identify which Service Provider to contact.

5.4 AVOIDING THE INITIAL HELP-DESK CALL
When the user does experience a problem, the first thing to do is to get him to access a self-care system, rather than calling a helpdesk. One way of doing this would be an application which displays a simple help button on a PC screen, smartphone or tablet. Clicking this button accesses a URL which is redirected to the HG and initiates a local diagnostics application. This application generates Web-like Help pages which are displayed on the appropriate device to guide the end-user through the process.
This self-care can also be a combination of local (HG) and remotely hosted (e.g. Cloud-based) applications. In the latter case the URL link on the home gateway could provide a redirect to the network-based self-care platform. This may allow more advanced diagnostics, and means that maintaining and upgrading the system can be done centrally, rather than having to upgrade a large number of HGs. However the HG always needs to have some local diagnostics capability, for the case where the problem is due to lack of WAN connectivity, which would of course prevent access to a central diagnostics system.

5.5 SELECTING THE SERVICE TO BE INVESTIGATED
Since troubleshooting needs to be service-specific, the end-user has to have some way of telling the diagnostics system which service is having a problem. HGI services are identified by a service signature for QoS control purposes. This signature is configured in the HG, so that all that is required is that a user-friendly service name is also configured and stored in the HG for each service. The end-user can then be prompted to choose the affected service from a drop-down list. Service naming and selection can also be done remotely (e.g. in the Cloud) where a centralised approach is preferred.

5.6 INITIATING TROUBLESHOOTING
Once the faulty service has been identified by the user, the appropriate diagnostics can be initiated. This document specifies a set of capabilities in the HG that can be used to support a wide variety of troubleshooting procedures; which tools are actually used - and the interpretation of the results - is entirely up to each BSP. It is anticipated that some kind of local expert system would be used to sequence the tests and interpret the results. This is an ideal use of the SWEX capabilities that are specified in [2]. Again, a Cloud-based system could be used as well or instead, as long as there is Cloud connectivity.

5.7 NON SERVICE-SPECIFIC TROUBLESHOOTING
While a lot of the troubleshooting is expected to be service-specific, there are some general problems which are not; the classic example being slow Internet access. Indeed it may well be appropriate to test general connectivity and access speed before trying the service-specific diagnostics. The HGI diagnostics approach also provides tools to investigate these more general problems.

5.8 SELF-CARE AND HELP-DESK ASSISTANCE
While the main purpose of this approach is to avoid helpdesk calls, more difficult cases - or less knowledgeable end-users - may still need to call a helpdesk. In these cases, the helpdesk may also
benefit from having direct access to the diagnostics tools. Requirements are therefore included to allow secure remote access to local tools and the associated statistics.
6 DIAGNOSTICS ARCHITECTURE
The overall HGI diagnostics architecture is shown in Figure 1, which also gives an indication of the types of processes that this architecture can support.

FIGURE 1: OVERALL HGI DIAGNOSTICS ARCHITECTURE
The customer realises that he has a problem with a service. He attempts to find a solution by self-care using a smartphone, tablet or PC. The Help request is intercepted by the HG and redirected to a Web page. This can either be locally generated, or a real Web page in the Cloud, if there is still WAN connectivity. It is always necessary to have at least some basic diagnostics capability in the HG for those cases where WAN connectivity has been lost.
The BSP-specific diagnostics procedure is initiated from this Web page. The diagnostics application should only request the essential minimum of information from the user; this has to include the identity of the service which is having the problem. Gathering more information on the nature of the problem (e.g. by automated structured questioning) may result in more targeted and therefore quicker testing, but that is entirely a matter for the BSP and what he chooses to put in his application.
The application can then carry out a series of tests, ranging from device presence and connectivity, to general performance, and finally through to testing or monitoring of the service itself. Again, the degree of user interaction is entirely a matter for the BSP. If this resolves the problem, which of course could be via a non-technical solution such as suggesting a change in customer behaviour, the process terminates. If there is still a problem and it is clearly not in the customer domain, this is communicated to the user, along with a suggestion as to what to do next. If however the problem might still be in the customer domain, but more subtle or tricky, the customer (or the application) can contact a helpdesk which has access to at least the same (and possibly a fuller set of) tools, and will have more expertise.
In certain cases the helpdesk operative will need to interact directly with the HG, e.g. to start a test, read a counter or a test result etc. This document does not specify the detailed nature of that interaction; there is simply a high-level requirement for direct, secure, remote access to certain diagnostic functions in the HG. This is shown in Figure 1 by the direct dotted line between the SP domain and the HG. It is deliberate and significant that this interaction does not go via the ACS. One of the requirements of this interface is that it should be near real-time, i.e. with a response time of ~<1 sec. This is both to speed up the troubleshooting, and to allow the helpdesk to see the impact of various user actions, e.g. unplugging a cable, in a timely fashion. ACSs are not generally designed to have this kind of real-time performance.
For other diagnostics operations, in particular those that involve any significant data transfer, the ACS is likely to be the appropriate entity. In the Requirements, these two cases are distinguished by using the term Remote Agent for the direct interaction, and ACS for the other, typically more data-oriented, interaction.
Background information gathering which may assist troubleshooting can be permanently enabled. This can include such things as a history of device attachment, and the long-term performance of interface types. This information can also be used to decide whether or not to allow a user to subscribe to a particular service. The network management system is shown here generically as an ACS.
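The sequencing of the above steps is deliberately left to each BSP. As a purely illustrative aid, the following Python sketch shows one possible ordering of the coarse checks described in this section (WAN connectivity first, then provisioning, reachability and QoS, then escalation). The probe names are hypothetical placeholders, not functions defined by this document.

```python
# Illustrative only: one possible ordering of the diagnostics steps described
# in Section 6. The probe functions are hypothetical placeholders; a real HG
# would back them with the capabilities specified in Section 8.

def diagnose(service_name, probes):
    """Run a coarse troubleshooting sequence and return a verdict string."""
    if not probes["wan_link_up"]():
        return "No WAN connectivity: run local-only diagnostics, advise user"
    if not probes["service_configured"](service_name):
        return f"Service '{service_name}' not provisioned on the HG"
    if not probes["service_reachable"](service_name):
        return "Problem appears to be outside the home domain: contact SP"
    if probes["qos_queue_overloaded"](service_name):
        return "Home network congestion: suggest reducing concurrent traffic"
    return "No fault found in HG/HN: escalate to helpdesk with test history"


if __name__ == "__main__":
    # Stub probes simulating a healthy gateway with a congested IPTV queue.
    stub_probes = {
        "wan_link_up": lambda: True,
        "service_configured": lambda s: True,
        "service_reachable": lambda s: True,
        "qos_queue_overloaded": lambda s: s == "IPTV",
    }
    print(diagnose("IPTV", stub_probes))
```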
7 TYPES OF PROBLEM
There are several types of problem that can adversely impact the delivery of a service:
1. Connectivity
2. Reachability
3. Speed
4. QoS
5. Home Gateway hardware or software fault
6. Service provisioning
7. Service instance initiation (e.g. signalling).
Note: The distinction between connectivity and reachability is related to addressing, i.e. there may be a Layer 2 connection between 2 points (e.g. PPP connection), but no IP reachability, e.g. to the service Gateway.
Some of these problems can occur anywhere on the end-to-end path pertaining to the service. One of the main purposes of troubleshooting is to determine the part of the network in which the problem is occurring. The scope of this document is limited to the HG, HN (including HNIDs) and end-devices. The intention is to be able to determine if services are being delivered to the HG, through the HG, and to the extent possible, across the HN to the end-devices. Where services are not even being delivered correctly to the HG, then this information is communicated to the user, but this document does not address troubleshooting the nature or location of WAN problems, except where it is the HG that is the cause of the problem, e.g. by mismanaging upstream QoS.

7.1 EXAMPLES OF PROBLEMS
The following Tables give some examples of how and where each of the above problem types might arise, and how various monitoring points could provide useful demarcation information. Monitoring point here can mean a network location, a device (e.g. HG, HNID), a device attribute (e.g. DSL sync), a device element (e.g. HG LAN port), or a specific counter on a port. Note that while HG-based diagnostics can provide fairly granular fault demarcation within the home, it can only directly determine whether or not a problem is outside the home domain, not where it is.

7.1.1 CONNECTIVITY
Symptom | Possible Causes | Monitoring points
DSL line will not sync | Noise, access network fault, DSLAM fault | HG WAN sync
DSL line frequently resyncs | Noise on access line. Incorrect line optimization (low margin, no interleave) | HG WAN stats (e.g. margin)
No PPP session established | Login failure, BRAS fault | HG
No wireless connectivity between HG and End Device (ED) | Incorrect SSID, incorrect wireless keys, noise, excessive distance | HG access point
No connectivity between HG and ED or HNID | Disconnected or faulty cable. PLT blackspot | LAN ports on HG and HNIDs

7.1.2 REACHABILITY
Symptom | Possible Causes | Monitoring points
Cannot access any WAN service | No HG IP address, Incorrect Firewall setting | HG
Cannot access a particular WAN service | Service down, DNS failure, Firewall setting | HG, Remote server
Cannot access any LAN device | DHCP failure, ARP failure | HG
Cannot access a particular LAN device | DHCP address limit, NAT problem | HG

7.1.3 SPEED
Symptom | Possible Causes | Monitoring points
Slow download/browsing | DSL line rate reduced, Congestion in aggregation network, Internet peering point congestion, Server overload, In-home technology running slowly due to a PHY rate change or congestion, HG CPU overloaded | DSL WAN port, HG LAN interface ports, HG CPU usage, HG memory usage, HNID interfaces, ED interfaces
7.1.4 QOS
Symptom | Possible Causes | Monitoring points
Service constantly disrupted or fails completely | DSL line slow (PHY rate), DSL line congested, Excessive packet loss on access line, No QoS configuration, Incorrect QoS configuration, LAN technology has insufficient bitrate | WAN service rate and error count, LAN service rate and error counts, LAN and WAN queue statistics
Service occasionally disrupted | Excessive packet loss on access line, Excessive packet loss on LAN, Noise, No QoS configuration, Incorrect QoS configuration | WAN I/F (error count), LAN I/F (error count), LAN and WAN queue statistics, Wireless interface (noise and PHY rate)

7.1.5 HG HARDWARE OR SOFTWARE PROBLEM
Symptom | Possible Cause | Monitoring points
Erratic performance or slow response | HG CPU overload, i.e. attempting to execute too much simultaneous processing | HG CPU
All local and WAN connectivity lost | HG CPU failure, Overheating, Static discharge | HG internal temperature, HG hardware self-test
HG hangs up at various times, but may be able to reboot | HG memory failure, Overheating, Static discharge | HG internal temperature, Memory tests, Memory checksums
HG fails to operate properly or at all | Firmware corruption, Firmware bug, Flash or memory fault | Memory usage, CPU load
HG frequently reboots | SWEX problem, Software problem, Firmware corruption | HG memory check
Voice port not active | Voice port damaged by over voltage or static | HG port monitoring
LAN ports connection failure (Ethernet, USB) | Port damaged by over voltage or static | HG port monitoring

7.1.6 SERVICE PROVISIONING
Symptom | Possible Cause | Monitoring points
No voice dial tone | Voice has not been provisioned on the home gateway, Misconfiguration | HG

7.1.7 SERVICE INITIATION
Symptom | Possible Cause | Monitoring points
Cannot initiate a service session | Service has not been provisioned on the home gateway, Firewall misconfiguration | HG
Service does not work properly on initial attempt | QoS not configured for service | HG, ED
No access to a particular service | Service platform down, Lack of service platform capacity, Bill not paid, Firewall misconfiguration, User authorization failure, Signalling failure | HG, ED

7.2 REAL-WORLD SUPPORT ISSUES
Section 7.1 lists those problems which could occur in principle. The whole point of remote diagnostics is to reduce support costs, and improve the customer experience (CE). The focus therefore needs to be on the most common problems which have the greatest impact on costs and CE.
A survey was undertaken in which HGI SPs were asked to rate a number of service-related fault types for their impact on support costs, and the degree to which enhanced diagnostics might be able to help. A summary of the responses is presented below, with the individual BSP responses having been anonymised, as they may have some commercial sensitivity.

7.2.1 SURVEY RESULTS
The problems covered by the survey are shown in the below Table.

Error Category | Description
Service problem | Loss of service, service glitches etc.
Network configuration | Wrong network configuration (IP, Broadband setup)
WiFi Configuration | WiFi Setup (WiFi not enabled, forgotten passwords etc.)
Wiring | Unplugged cable, wrongly plugged cable, etc.
Broadband Performance | Quality, speed, stability of connection
HG Setup/Configuration | Configuration, faulty firmware etc.
End devices | SP and non SP devices (PC, NAS, STB etc.)
Broadband line sync | Establishment of connection, access line drop-out
Access Parameters | Unknown or incorrect login parameters (password, credentials etc.)
Hardware Replacement | Replacement of (possibly) faulty hardware
General Support/Guidance | Issues which can only be clarified by a helpdesk agent (information request etc.)
IPTV | Availability, quality of IPTV, VoD
VoIP | Availability, quality
VAS | Availability, quality of other Value Added Services

The categories which were said to have the highest current impact on support costs were as follows:
- IPTV poor video quality or complete loss of service
- Poor video quality over WiFi
- Poor VoIP quality (network)
- Broadband login failure - username/password problems
- UMTS misconfiguration (business customers)
- PC problems
- WiFi Security keys
- DSL sync loss
- HG frequently reboots.

Not all of these can be identified or solved by diagnostics. The below Table summarises the essential nature of the problem, and the extent to which diagnostics may help. The solutions have been categorised by whether they are amenable to a (local or Cloud-based) expert system, and the degree to which a helpdesk may provide additional capability, although again helpdesk involvement is to be avoided if possible for cost reasons.

Symptom | Actual problem | Useful fault demarcation | Expert system | Helpdesk added value
IPTV quality | Packet delivery | Network, HG, HN | Yes | Minimal (overall service problem)
WiFi video quality | Packet delivery, Local noise | HN | Yes | Minimal
VoIP quality (network) | Network congestion, Lack of QoS | Network | Yes | Minimal
Broadband login failure | Customer forgetfulness | - | - | Password reset
PC problems | Infinite | - | No over and above PC wizards | Possible but requires high level of expertise
WiFi Security keys | Customer lack of knowledge or ability to enter configuration | - | Yes | Identifying the type of problem
HG loses DSL sync | HG/DSLAM DSL configuration | No | No | Can trigger an investigation or configuration change
HG loses DSL sync | Access cable cut | No | Possibly | Can do a physical line test
HG frequently resyncs | Access line noise | No | No | Access to DSLAM stats

This suggests that many of the problems are amenable to automatic identification and demarcation, and that a helpdesk will only be of value in a small number of cases. However the challenge will be to stop the natural customer reaction to call the helpdesk first regardless, and to find a solution that the customer can implement himself whenever possible.

7.3 HIGH LEVEL REQUIREMENTS
The high-level requirements related to diagnosing the different problem types identified above are as follows.

7.3.1 CONNECTIVITY
- Determine the presence of all the devices on the HN. This needs to include the automatic detection and logging of changes, e.g. when devices are added to or removed from the HN. A log of changes can be important when attempting to correlate service problems with a change in the HN configuration.
- Determine the connectivity between the various end-devices, any HNIDs and the HG. This may involve both passive monitoring and active probing.

7.3.2 REACHABILITY
- Be able to test IP reachability on demand.

7.3.3 SPEED
- Measure the aggregate rate on the HG WAN interface for traffic (ingress and egress)
- Measure the rate at the ingress to, and egress from, the HG for packets with a given service signature
- Provide access to a (WAN-based) speed checker to test the access speed
- Provide a local speed check function for measuring the performance across the LAN in both the upstream and downstream directions
- Measure and store historical rates for all interfaces which have a time-varying PHY.

7.3.4 QOS
The HGI QoS approach is based on the concept of configurable (packet based) service signatures [10]. The QoS diagnostics need to be able to:
- Check the QoS configuration, i.e. which signatures have been configured and the mapping between service signature and queues
- Check the performance of a specific queue in terms of average and maximum queue length, and dropped packets
- Measure the throughput in a given queue over configurable time periods.

7.3.5 HOME GATEWAY HARDWARE OR SOFTWARE FAULT
- Hardware self-test capabilities
- Software self-test capabilities
- Firmware tests, i.e. integrity of the firmware image
- Monitor and log hardware usage and performance, e.g. CPU and memory
- Temperature sensing.
7.3.6 SERVICE PROVISIONING
- Check firewall settings
- Check QoS settings
- Check VoIP settings
- Check voice dial tone.

7.3.7 SERVICE INSTANCE INITIATION
- Check reachability of service platform
- Check availability of service platform
- Check service subscription
- Check firewall settings
- Check QoS settings.

8 REQUIREMENTS
Some requirements involve the local logging of information in the HG. In some cases these need to be preserved through a reboot or power cycling of the HG, and therefore need to be stored in non-volatile memory (Flash). However, Flash has a limited number of read-write cycles, and so logs that are expected to be updated frequently should be stored in DRAM. This is explicitly specified in the appropriate requirements, with all other logs being written to Flash memory.

8.1 BASIC HG INFORMATION
R1. The HG MUST be able to provide the following HG configuration information to a Remote Agent or Local UI on demand:
- Device type
- Hardware version
- Software version
- Firmware version
- Device status (e.g. device in self-test, active etc.)
R2. The HG SHOULD be able to store a Crash Dump in local non-volatile memory.
R3. The HG MUST preserve the Crash Dump through a power-cycle and reboot.
R4. The HG MUST be able to upload the Crash Dump to the ACS using CWMP as per [3], [4] and [5].

8.2 FIRMWARE MANAGEMENT
Accurate knowledge of the current hardware and firmware versions is a key initial step in remote diagnosis.
R5. The HG MUST be able to roll back to a previous, stored firmware version (either stored locally or in the Cloud) on command from a Remote Agent or the Local UI.
R6. The HG MUST be able to return to the firmware version that was running before the rollback, on command from a Remote Agent or the Local UI.
R7. The HG MUST be able to be remotely rebooted by the Remote Agent.
R8. The HG MUST be able to be reset to its factory defaults on command from a Remote Agent or the Local UI.
R9. The HG MUST automatically store its latest configuration locally prior to restoring the factory defaults. The HG MUST also be able to store this configuration in the Cloud.
R10. The HG MUST be able to restore its latest stored configuration on command from a Remote Agent or the Local UI.
R11. The HG MUST keep a local log of all firmware download attempts, including the date, time, firmware version, and success or failure. This log MUST be accessible by the Remote Agent and the ACS.
R12. The HG MUST be able to delay a firmware download attempt when designated services are active.
R13. The HG MUST support both complete image and modular firmware upgrades. Modular upgrades SHOULD support both individual modules and multiple modules being upgraded.
R14. The HG MUST log the date, time, and version number of each module download. If the download is unsuccessful, then the previous version of that module MUST be automatically reinstalled.
R15. The HG MUST be able to store and load a local rescue firmware version.
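R5 and R6 do not mandate any particular storage scheme for the previous firmware version. One common way of meeting them is a dual-bank (A/B) image layout; the Python sketch below illustrates that pattern purely as an assumption, with illustrative version strings, and is not a required implementation.

```python
# Minimal sketch of a dual-bank (A/B) firmware layout that could satisfy the
# rollback requirements R5/R6. Bank handling and version strings are
# illustrative assumptions, not part of the HGI requirements.

class FirmwareBanks:
    def __init__(self, active_version, standby_version):
        self.banks = {"A": active_version, "B": standby_version}
        self.active = "A"          # bank the HG boots from
        self.history = []          # R11-style log of (action, version)

    def rollback(self):
        """R5: switch the boot bank to the previously stored image."""
        self.active = "B" if self.active == "A" else "A"
        self.history.append(("bank switch", self.banks[self.active]))
        return self.banks[self.active]

    def undo_rollback(self):
        """R6: return to the image that was running before the rollback."""
        return self.rollback()     # with two banks this is another switch


if __name__ == "__main__":
    fw = FirmwareBanks(active_version="12.3.1", standby_version="12.2.7")
    print(fw.rollback())       # -> 12.2.7
    print(fw.undo_rollback())  # -> 12.3.1
```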
8.3 HG SELF-TEST
The HG needs to support the following self-test requirements, but there is no expectation that service delivery will continue unimpaired, or indeed at all, during a self-test.
R16. The HG MUST support a representative hardware test. This MUST include CPU, memory, firmware and all physical interfaces, and provide an overall and a per-category status.
R17. The HG MUST support a representative software test that checks the integrity of software components and provides a report per component. This MUST as a minimum include a CRC check on a per-module basis, and on the complete software image.
R18. The HG MUST support sending a Dying Gasp message to the WAN if connected via xDSL.
R19. The HG MUST support a daemon which monitors DSL line synchronization.
R20. All self-tests MUST be able to be initiated from a Remote Agent and the Local UI. The HG SHOULD support a hardware button to activate the self-test.
R21. The HG MUST indicate the self-test result as a simple PASS/FAIL, for example by using an LED. This LED SHOULD be readily visible from the front of the HG, i.e. not hidden behind or beneath the HG box.
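R17 asks for a per-module CRC check plus a check of the complete software image. The sketch below shows the shape of such a check using Python's zlib.crc32; the file paths and the expected CRC values are hypothetical examples, not values defined by this document.

```python
# Sketch of the per-module integrity check described in R17: each module is
# CRC-checked against a stored expected value, and an overall PASS/FAIL is
# derived. Paths and expected CRCs are illustrative assumptions.
import zlib

def crc32_of_file(path, chunk_size=65536):
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def software_self_test(expected_crcs):
    """Return (overall_pass, per-module report) in the spirit of R17."""
    report = {}
    for module, expected in expected_crcs.items():
        try:
            report[module] = (crc32_of_file(module) == expected)
        except OSError:
            report[module] = False   # missing module counts as a failure
    return all(report.values()), report


if __name__ == "__main__":
    # Hypothetical module table; on a real HG this would cover the complete
    # firmware image and each installed software module.
    table = {"/firmware/image.bin": 0x1A2B3C4D, "/modules/diag.so": 0x0F0E0D0C}
    ok, detail = software_self_test(table)
    print("PASS" if ok else "FAIL", detail)
```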
8.4 HG HARDWARE AND PERFORMANCE MONITORING
R22. The HG MUST measure and keep a DRAM log of its current, mean and peak CPU usage in % terms.
R23. The HG MUST measure and keep a DRAM log of its current, mean and peak memory usage in absolute and % terms. There MUST be a separate log for each different memory type.
R24. The HG MUST measure and keep a log of the internal temperature of its case. The temperature sensor SHOULD be mounted on the inside of the HG case, in a position expected to be near the top of the HG when installed as suggested by the vendor.
R25. The HG SHOULD measure and keep a log of the temperature of the CPU heat sink.
R26. The HG SHOULD be able to trigger a local and remote alarm when average DRAM usage exceeds a configurable % threshold, and when the internal temperature exceeds a configurable threshold.
R27. The HG MUST support resetting the mean and peak values in Requirements R22, R23, and R24 with a single Remote Agent or Local UI command.
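R22, R23 and R27 describe DRAM-held logs of current, mean and peak usage that can be cleared with a single command. The following sketch is one minimal way to keep such a summary in memory; the sampling source and the example readings are assumptions, not HGI-specified interfaces.

```python
# Illustrative in-memory (DRAM) usage summary for R22/R23/R27: tracks the
# current, mean and peak of a sampled percentage and supports a single reset.
# The sampling source is a placeholder, not a specified HGI interface.

class UsageLog:
    def __init__(self):
        self.reset()

    def reset(self):
        """R27: clear mean and peak with a single command."""
        self.current = 0.0
        self.peak = 0.0
        self._sum = 0.0
        self._count = 0

    def sample(self, value_pct):
        self.current = value_pct
        self.peak = max(self.peak, value_pct)
        self._sum += value_pct
        self._count += 1

    @property
    def mean(self):
        return self._sum / self._count if self._count else 0.0


if __name__ == "__main__":
    cpu = UsageLog()
    for pct in (12.0, 55.0, 40.0):   # pretend CPU readings in %
        cpu.sample(pct)
    print(cpu.current, round(cpu.mean, 1), cpu.peak)  # 40.0 35.7 55.0
    cpu.reset()
```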
8.5 DUPLICATE ADDRESS DETECTION
R28. The HG SHOULD be able to detect and store in DRAM duplicate LAN-side MAC addresses. The HG SHOULD be configurable to raise a remote alarm on the first detection of a given duplicate.
R29. The HG SHOULD be able to detect and store in DRAM duplicate LAN-side IP addresses. The HG SHOULD be configurable to raise an alarm on the Local UI on the first detection of a given duplicate.

8.6 SWEX SUPPORT
The Diagnostics architecture has led to the specification of a number of functional elements. However, these need to be controlled by a BSP-specific diagnostics system, at least part of which needs to run on the HG itself. SWEX allows such systems to be downloaded and upgraded as appropriate.
R30. The HG MUST support the HGI SWEX as defined in [2], HGI-RD008-R3 (HG Requirements for Software Execution Environment).
R31. The HG MUST be able to disable all SWEX applications except the diagnostics application itself, but only by means of a command from the Remote Agent or ACS.
R32. The HG MUST log the date, time, and version number of each SWEX module installed.

8.7 DIAGNOSTICS WEB PAGE SUPPORT
R33. The HG MUST support the generation of Local Web pages to handle the troubleshooting interaction with the user.
R34. The HG MUST be able to intercept and redirect a pre-configured diagnostics URL to the local, or Cloud-based, diagnostics Web page.
R35. The HG MUST be able to display the Local Web pages automatically on detection of loss of WAN connectivity.
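R34 requires a pre-configured diagnostics URL to be intercepted and redirected to either the local or the Cloud-based diagnostics page. The sketch below shows the idea with Python's standard HTTP server; the path, hostnames and the WAN-status check are illustrative assumptions, not values defined by HGI.

```python
# Minimal sketch of the R34 behaviour: a pre-configured diagnostics URL is
# answered with a redirect to the Cloud self-care page while the WAN is up,
# and to a locally generated help page otherwise. Names are assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer

DIAG_PATH = "/help"                                   # pre-configured URL path
CLOUD_PAGE = "http://selfcare.example-bsp.net/start"  # hypothetical
LOCAL_PAGE = "http://192.168.1.1/diagnostics"         # hypothetical

def wan_connected():
    return False   # placeholder for the HG's real WAN-status check

class DiagRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith(DIAG_PATH):
            self.send_response(302)
            self.send_header("Location",
                             CLOUD_PAGE if wan_connected() else LOCAL_PAGE)
            self.end_headers()
        else:
            self.send_error(404)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DiagRedirect).serve_forever()
```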
8.8 WAN PORT CONTROL
R36. Where the HG has an Ethernet WAN port, it MUST support configuration of that port to a specified rate (100 Mbps or 1 Gbps) from the Remote Agent or Local UI, i.e. override auto-negotiation.

8.9 POWER SAVING
R37. The HG MUST be able to disable all power saving features under command of a Remote Agent or Local UI for the duration of the troubleshooting.

8.10 SERVICE SELECTION
R38. The HG MUST support the configuration of a single, unique service name for each service signature via the Local UI or ACS.
R39. The HG MUST be able to display a list of all the configured services on the Local UI.
R40. The HG MUST support the selection of a single service from this list via a Remote Agent or the Local UI.
R41. Selection of a service MUST overwrite any previous selection (so that only one service is being diagnosed at any one time).
R42. The HG MUST support non-service-specific diagnostics, i.e. operations/counts etc. which are applied to all packets irrespective of service signature.
R43. This non-service-specific diagnostics option MUST appear in the list of configured services as a selectable option with a user-friendly name, e.g. Any Service.
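R38-R43 describe configuring a user-friendly name per service signature, selecting exactly one service for diagnosis, and exposing a non-service-specific "Any Service" entry. A minimal sketch of that behaviour follows; the signature fields shown are purely illustrative.

```python
# Sketch of the service-selection behaviour in R38-R43: each configured
# service signature gets a user-friendly name, exactly one service can be
# selected for diagnosis, and an "Any Service" entry matches all traffic.
# The signature fields shown are illustrative assumptions.

ANY_SERVICE = "Any Service"

class ServiceSelector:
    def __init__(self):
        self.services = {ANY_SERVICE: None}   # name -> signature (R43)
        self.selected = None

    def configure(self, name, signature):
        """R38: one unique, user-friendly name per service signature."""
        self.services[name] = signature

    def list_services(self):
        """R39: list shown on the Local UI drop-down."""
        return sorted(self.services)

    def select(self, name):
        """R40/R41: selecting a service overwrites any previous selection."""
        if name not in self.services:
            raise KeyError(f"unknown service: {name}")
        self.selected = name


if __name__ == "__main__":
    sel = ServiceSelector()
    sel.configure("IPTV", {"dscp": 34, "dst_port": 5000})   # illustrative
    sel.configure("VoIP", {"dscp": 46})
    print(sel.list_services())
    sel.select("IPTV")
    sel.select(ANY_SERVICE)       # R42/R43: non-service-specific diagnostics
    print(sel.selected)
```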
8.11 SERVICE CONFIGURATION TESTING
R44. The HG MUST be able to display the following information related to the selected service on the Local UI and send this information to the Remote Agent:
- Service ID
- Service signature (number and type of all the constituent elements)
- Queue number and type to which this service has been allocated.
R45. The HG MUST be able to display the information in R44 for all other currently configured services on the Local UI and send this information to the Remote Agent.

8.12 DEVICE DISCOVERY
The purpose of Home Network device discovery and identification in this document is simply to provide sufficient information to allow device presence and identity to be established, not to provide full management capability of those devices. These identity requirements are therefore a subset of those specified in R169-189 of [1].
R46. The HG, when acting as a UPnP Control Point, MUST be able to log and display on the Local UI the following information for each UPnP Device on the LAN. The fields are defined in the UPnP Device Architecture [6].
Information from UPnP discovery:
- Server
Information from UPnP description:
- DeviceType
- FriendlyName
- Manufacturer
- ModelDescription
- ModelName
- ModelNumber
- UPnPServer
- ServiceType [1..n] (this is a list of ServiceType for each supported service)
R47. The HG MUST support Multicast DNS as per [7] (RFC 6762) and DNS-Based Service Discovery as per [8] (RFC 6763), in order to discover Apple devices.
R48. The HG MUST be able to keep a DRAM log of all LAN-side device connections and disconnections (both physical and logical). This log MUST contain at least the following:
- OUI-SN-Product Class
- UPnP ID (where applicable)
- Device ID
- Device OS (in the case of a PC)
- IP address
- Date and time of event
- Running total of the number of connections
- Running total of the number of disconnections.
This log MUST be enabled by default. This log MUST be retained for a configurable time period (in days) of up to 30 days.
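R48 describes an in-memory (DRAM) log of device connections and disconnections with running totals and a configurable retention period. The sketch below shows one way such a log could be structured; the field names and example values are illustrative assumptions.

```python
# Sketch of the R48 device attachment log: each connect/disconnect event is
# recorded in memory with running totals per device, and entries older than
# the configured retention period are dropped. Field names are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

class DeviceEventLog:
    def __init__(self, retention_days=30):          # R48: up to 30 days
        self.retention = timedelta(days=retention_days)
        self.events = []                             # DRAM-style in-memory log
        self.totals = defaultdict(lambda: {"connects": 0, "disconnects": 0})

    def record(self, device_id, ip_address, event, when=None):
        when = when or datetime.now()
        key = "connects" if event == "connect" else "disconnects"
        self.totals[device_id][key] += 1
        self.events.append({"device_id": device_id, "ip": ip_address,
                            "event": event, "time": when,
                            **self.totals[device_id]})
        cutoff = when - self.retention
        self.events = [e for e in self.events if e["time"] >= cutoff]


if __name__ == "__main__":
    log = DeviceEventLog(retention_days=7)
    log.record("aa:bb:cc:dd:ee:ff", "192.168.1.23", "connect")
    log.record("aa:bb:cc:dd:ee:ff", "192.168.1.23", "disconnect")
    print(log.events[-1])
```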
8.13 HG DISCOVERY
Although the main emphasis of this document is diagnostics based on the HG itself, making the HG visible to PC-based network discovery mechanisms may also be useful.
R49. The HG MUST support Microsoft LLTD responder functionality as per [9].

8.14 TOPOLOGY DISCOVERY
R50. The HG MUST be able to display a graphical representation of all devices (device icon) and their connectivity on the Local UI.
R51. The connectivity map SHOULD include the following:
- Device type
- Device ID
- Device OS (where applicable)
- The connecting technology type (wireless, PLT, wired Ethernet etc.)
- Link PHY rate
- IP address.

8.15 CONNECTIVITY TESTING
R52. The HG MUST be able to test the Layer 2 connectivity to all identified devices on the HN:
- as a complete set
- individually
on command via a Remote Agent or the Local UI.

8.16 REACHABILITY TESTING
R53. The HG MUST be able to test the IP reachability of all connected devices via a Ping test initiated by clicking on the device icon via the Local UI.
R54. The HG MUST maintain a Table of all locally DHCP-allocated IP addresses.
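R53 calls for an on-demand Ping test per device. The sketch below uses the system ping command, assuming Linux-style "-c"/"-W" options; a real HG would more likely use its own ICMP implementation, and the LAN addresses shown are examples only.

```python
# Sketch of the per-device reachability test in R53, using the system ping
# command (Linux-style "-c"/"-W" options assumed). A production HG would more
# likely use its own ICMP implementation.
import subprocess

def ping(ip_address, count=1, timeout_s=2):
    """Return True if the device answers an ICMP echo request."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout_s), ip_address],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0


if __name__ == "__main__":
    for ip in ("192.168.1.10", "192.168.1.20"):      # example LAN addresses
        print(ip, "reachable" if ping(ip) else "unreachable")
```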
8.17 SPEED TESTING

8.17.1 INTERFACE COUNTERS
The following requirements support the monitoring of data rates on a per logical interface basis (i.e. where there is more than one logical interface on a physical interface). This is specified for both service-specific and any-traffic counts.
R55. All the counters in this sub-section MUST be available for each WAN and LAN logical (L2) interface.
R56. All counts MUST be available as the total number of packets per sample time, packets per second and kbps.
R57. Every counter MUST be reset upon reception of a specific, single command from the Remote Agent or Local UI.
R58. All counters SHOULD be individually resettable via a Remote Agent and the Local UI.
R59. The current value of all counters MUST be able to be read by a Remote Agent and the Local UI.
R60. The HG MUST use a single configurable sample interval. The sample interval MUST be configurable from 1-60 seconds with 1 second granularity and SHOULD be configurable from 1-900 seconds. This configuration MUST be able to be done from a Remote Agent, the Local UI, and the ACS.
R61. The sample intervals of all counters MUST be synchronised.
R62. The HG MUST store in DRAM the last N results for the sample interval counters, where N is configurable from 1 to 2048.
R63. The HG MUST maintain a sample interval count of the number of sent packets with a single, specified service signature.
R64. The HG MUST maintain a sample interval count of the number of received packets with a single, specified service signature.
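R55-R64 describe per-interface counters captured on a common, configurable sample interval, with the last N intervals held in DRAM and results expressed in packets per sample, packets per second and kbps. The following sketch illustrates that bookkeeping only; interface selection and service-signature matching are outside its scope and the example packet sizes are assumptions.

```python
# Sketch of an interface counter for R55-R64: per-interval packet and byte
# totals are captured on a common sample interval, the last N intervals are
# kept in memory, and each interval can be read back as packets, packets/s
# and kbit/s. Interface and signature handling are illustrative.
from collections import deque

class IntervalCounter:
    def __init__(self, sample_interval_s=10, history=2048):   # R60, R62
        self.interval = sample_interval_s
        self.history = deque(maxlen=history)
        self.packets = 0
        self.bytes = 0

    def count(self, frame_len):
        """Called once per matching packet during the current interval."""
        self.packets += 1
        self.bytes += frame_len

    def close_interval(self):
        """Called by a common timer so all counters stay synchronised (R61)."""
        self.history.append({
            "packets": self.packets,                              # per sample
            "pps": self.packets / self.interval,                  # R56
            "kbps": self.bytes * 8 / 1000 / self.interval,        # R56
        })
        self.packets = 0
        self.bytes = 0

    def reset(self):
        """R57: a single reset command clears history and current totals."""
        self.history.clear()
        self.packets = 0
        self.bytes = 0


if __name__ == "__main__":
    c = IntervalCounter(sample_interval_s=1)
    for size in (1500, 1500, 64):     # example packet sizes in bytes
        c.count(size)
    c.close_interval()
    print(c.history[-1])
```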
8.17.2 ACCESSING A NETWORK BASED SPEED CHECKER
R65. The HG MUST be able to store the IP address of a network-based speed checker and connect to it on command from a Remote Agent or the Local UI.
R66. The duration of the speed test MUST be configurable from a Remote Agent, the Local UI, and the ACS.
R67. The default duration of the speed test MUST be 30 secs.

8.17.3 HG BASED SPEED CHECKER
R68. The HG MUST be able to intercept attempted access to the configured IP address of the network-based speed checker and generate traffic locally which it sends to the requesting device. This local intercept MUST be able to be requested via the Local UI and a Remote Agent. The fact that it was a local speed check MUST be indicated in the results presented on the Local UI and to a Remote Agent.
R69. The duration of the speed test MUST be configurable between 5-30 seconds via the Local UI, Remote Agent or ACS.
R70. The default duration of the speed test SHOULD be 10 secs.
R71. The speed test rate MUST be able to be greater than the PHY rate of the highest speed LAN interface on the HG.

8.18 SERVICE CLASS MONITORING
The following requirements support the monitoring of the WAN, LAN and WLAN egress queues, and apply to all such queues. The monitoring is on a per-queue basis, and so all services which use the same queue will have their traffic counted. These are based on requirements R483-500 in the HGI Residential Profile [1] but are not identical. The main difference is that the range of sample intervals is shorter, to aid more real-time diagnosis.
R73. All the following counters MUST be provided for each WAN and LAN egress queue (including WLAN queues).
R74. Every counter MUST be reset upon reception of a specified single command from the Local UI or Remote Agent.
R75. The HG MUST determine which queues to monitor on the basis of the currently selected diagnostics service signature.
R76. If the selected diagnostics service signature is any service then the HG MUST monitor all queues.
R77. All counters SHOULD be individually resettable via the Local UI or Remote Agent.
R78. All counters MUST be reset when the selected diagnostic service signature is changed.
R79. All counters MUST be reset by a reboot of the HG.
R80. The current value of all counters MUST be able to be read via the Local UI and a Remote Agent.
R81. The HG MUST have a single configurable sample interval. The sample interval MUST be configurable from 1-900 seconds with a 1 second granularity from the Local UI, Remote Agent and ACS.
R82. The sample intervals of all counters MUST be synchronised.
R83. The HG MUST be able to store in DRAM the last N results for the sample interval counters, where N is configurable from 1 to 2048.
R84. The HG MUST maintain a running count of the number of dropped packets.
R85. The HG MUST maintain a sample interval count of the number of dropped packets.
R86. The HG MUST maintain a running count of the number of sent packets.
R87. The HG MUST maintain a sample interval count of the number of sent packets.
R88. The HG MUST maintain a running count of the number of bytes sent.
R89. The HG MUST maintain a sample interval count of the number of bytes sent.
R90. The HG MUST store the peak queue occupancy counted in packets and bytes.
R91. The HG MUST store the peak percentage queue occupancy.
R92. The HG MUST store the peak queue occupancy in packets and bytes for each sample interval.
R93. The HG MUST be able to provide the peak percentage queue occupancy for each sample interval.
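R84-R93 combine running and per-interval counts with peak queue occupancy figures. The sketch below illustrates one way to maintain those values for a single queue; the queue capacity and the example events are assumptions, and only packet-based occupancy is shown (byte-based occupancy would follow the same pattern).

```python
# Sketch of the per-queue monitoring in R84-R93: running and per-interval
# counts of sent and dropped packets, plus peak queue occupancy in packets
# and percent (bytes would be tracked the same way). The queue capacity
# figure is an illustrative assumption.

class QueueMonitor:
    def __init__(self, capacity_pkts):
        self.capacity = capacity_pkts
        self.running = {"sent": 0, "dropped": 0, "sent_bytes": 0}
        self.peak_pkts = 0            # R90 (packet part)
        self.peak_pct = 0.0           # R91
        self.new_interval()

    def new_interval(self):
        self.interval = {"sent": 0, "dropped": 0, "sent_bytes": 0,
                         "peak_pkts": 0, "peak_pct": 0.0}   # R92/R93

    def on_enqueue(self, depth_pkts):
        pct = 100.0 * depth_pkts / self.capacity
        self.peak_pkts = max(self.peak_pkts, depth_pkts)
        self.peak_pct = max(self.peak_pct, pct)
        self.interval["peak_pkts"] = max(self.interval["peak_pkts"], depth_pkts)
        self.interval["peak_pct"] = max(self.interval["peak_pct"], pct)

    def on_dequeue(self, length_bytes):
        for scope in (self.running, self.interval):
            scope["sent"] += 1                   # R86/R87
            scope["sent_bytes"] += length_bytes  # R88/R89

    def on_drop(self):
        self.running["dropped"] += 1             # R84
        self.interval["dropped"] += 1            # R85


if __name__ == "__main__":
    q = QueueMonitor(capacity_pkts=200)
    q.on_enqueue(depth_pkts=150)
    q.on_dequeue(length_bytes=1300)
    q.on_drop()
    print(q.peak_pct, q.running, q.interval)
```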
8.19 INSTANTANEOUS INTERFACE RATE MONITORING
R94. The HG MUST be able to report the current downstream and upstream physical layer rate of the WAN interface via the Local UI and Remote Agent.
R95. The HG MUST be able to report the current downstream and upstream physical layer rate of any selected LAN interface via the Local UI and Remote Agent.

8.20 LONG TERM INTERFACE RATE MONITORING
These Requirements are intended to support the long-term monitoring of the PHY rates of specified interfaces. They allow a performance history to be built up which can be used in the diagnosis of intermittent faults, or as part of a business decision as to whether to offer a certain service to a given customer. These are identical to the requirements R518-520 in the HGI Residential Profile [1].
R96. The HG MUST be able to monitor the downstream physical layer rate of designated LAN interfaces.
R97. The HG MUST be able to generate and store locally in DRAM a historical measurement of the physical layer rate of each designated interface in the following form: the rate (R_hist) that was exceeded for x% of the previous t minutes.
R98. The HG MUST be able to send the R_hist value to the ACS when it changes and is less than the current WAN downstream PHY rate.
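R97 defines R_hist as the rate that was exceeded for x% of the previous t minutes. The sketch below computes that figure from a stored window of PHY-rate samples; the window contents and the choice of x are illustrative assumptions.

```python
# Sketch of the R97 historical rate figure: R_hist is the PHY rate that was
# exceeded for x% of the samples taken over the previous t minutes. Sample
# storage and the choice of x and t are illustrative.

def r_hist(samples_mbps, exceed_pct):
    """Rate exceeded for exceed_pct% of the samples (e.g. 90 -> R_hist90)."""
    if not samples_mbps:
        return None
    ordered = sorted(samples_mbps, reverse=True)      # highest rate first
    index = int(len(ordered) * exceed_pct / 100)
    return ordered[min(index, len(ordered) - 1)]


if __name__ == "__main__":
    # One-minute PHY rate samples for a wireless LAN interface (illustrative).
    window = [300, 280, 40, 290, 310, 305, 60, 295, 300, 285]
    print(r_hist(window, exceed_pct=90))   # rate exceeded for 90% of samples
```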
8.21 WIRELESS INTERFACE LOGGING
R99. The HG MUST be able to measure and log in DRAM the noise level in a 20 MHz band on all wireless channels for any and all relevant spectral bands (i.e. 2.4 and 5 GHz).
R100. The HG MUST keep a time and date-stamped log of the wireless channels used.
R101. The HG MUST be able to determine on demand from the Local UI or Remote Agent the number of SSIDs on each wireless channel.
R102. The HG MUST be able to determine on demand from the Local UI or Remote Agent the SSID strings on each wireless channel that have not hidden their ID.
R103. The HG MUST be able to determine on demand from the Local UI or Remote Agent the security type of each SSID.
R104. The HG MUST be able to determine on demand from the Local UI or Remote Agent the RCPI value for each channel.
R105. The HG MUST keep a log of all unsuccessful attempts to connect to each SSID.
R106. The HG MUST support the disabling of any guest access SSIDs by a Remote Agent.
R107. The HG MUST log the wireless channels currently in use by the embedded HG access point.
R108. The HG MUST log the wireless channels currently in use by any other (i.e. non-embedded) access points that it can detect.
R109. The HG MUST keep a log of all its configured SSIDs, and their security status.
R110. The HG MUST keep a log of all its currently active SSIDs.

8.22 MULTICAST
R111. The HG MUST be able to count the total number of received multicast packets with a single resettable counter.
R112. The HG MUST be able to determine the number of active multicast streams.

8.23 VOICE SPECIFIC DIAGNOSTICS
Where an HG contains an ATA it will have one (or more) voice codecs. The following counters need to be available on a per-ATA basis, for the codec in use (for that ATA).
R113. The HG MUST be able to provide access to the following counters for each embedded ATA.
R114. The HG MUST reset all the counters for a given ATA at the start of every voice call.
R115. The HG MUST provide access to the count of the total number of voice packets sent, from the Local UI or Remote Agent.
R116. The HG MUST provide access to the count of the total number of voice packets received, from the Local UI or Remote Agent.
R117. The HG MUST provide access to the count of the total number of lost voice packets, using RTP sequence number tracking, from the Local UI or Remote Agent.
R118. The HG MUST provide access to the highest RTP sequence number received, from the Local UI or Remote Agent.
R119. The HG MUST provide access to the count of the total number of early packets, i.e. packets arriving earlier than the maximum depth of the de-jitter buffer, from the Local UI or Remote Agent.
R120. The HG MUST provide access to the count of the total number of late packets, i.e. packets arriving after their estimated playout time, from the Local UI or Remote Agent.
R121. The HG MUST provide access to the count of the total number of invalid packets, i.e. packets with the wrong version number, sequence number or payload type, from the Local UI or Remote Agent.
R122. The HG MUST provide access to the mean value of the network jitter estimate, from the Remote Agent.
R123. The HG MUST provide access to the current value of playout delay (microsecs), from the Remote Agent.
R124. The HG MUST provide access to the minimum value of playout delay (microsecs), from the Remote Agent.
R125. The HG MUST provide access to the maximum value of playout delay (microsecs), from the Remote Agent.
R126. The HG MUST provide access to the count of the total number of resynchronisations, from the Remote Agent.
Note: a resync will occur when the sequence number or timestamp jumps, e.g. if the transmitter has changed or restarted the transmission and reinitialized the jitter buffer.
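The counters in R115-R121 follow from tracking RTP sequence numbers and comparing arrival times with the de-jitter buffer's playout window. The sketch below illustrates that classification only; the buffer model, thresholds and example packets are assumptions rather than HGI-specified behaviour.

```python
# Sketch of the voice packet classification behind R115-R121: received RTP
# packets are checked against the expected sequence number and the de-jitter
# buffer's playout window, and the per-call counters are updated accordingly.
# The buffer model and thresholds are illustrative, not specified by HGI.

class VoiceCallStats:
    def __init__(self, jitter_buffer_ms=60):
        self.buffer_ms = jitter_buffer_ms
        self.counters = {"received": 0, "lost": 0, "early": 0, "late": 0}
        self.highest_seq = None                       # R118

    def on_rtp_packet(self, seq, arrival_ms, playout_ms):
        self.counters["received"] += 1                # R116
        if self.highest_seq is not None and seq > self.highest_seq + 1:
            self.counters["lost"] += seq - self.highest_seq - 1   # R117
        if self.highest_seq is None or seq > self.highest_seq:
            self.highest_seq = seq
        if arrival_ms < playout_ms - self.buffer_ms:
            self.counters["early"] += 1               # R119
        elif arrival_ms > playout_ms:
            self.counters["late"] += 1                # R120


if __name__ == "__main__":
    call = VoiceCallStats()
    call.on_rtp_packet(seq=100, arrival_ms=20, playout_ms=60)
    call.on_rtp_packet(seq=103, arrival_ms=95, playout_ms=90)  # 2 lost, late
    print(call.counters, call.highest_seq)
```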
8.24 REMOTE ACCESS SUPPORT
R127. All actions which can be initiated locally MUST also be available to an authenticated remote user, i.e. a Remote Agent.
R128. The HG MUST log the date and time of the most recent attempt by the user to use the self-care system. This data MUST be available to a Remote Agent.
R129. The HG SHOULD be able to log any tests that were performed by the user, and their results, and SHOULD be able to send the history of these tests to a Remote Agent for helpdesk support or offline analysis. This history SHOULD cover at least the previous 24 hours.

9 MANAGEMENT REQUIREMENTS

9.1 CWMP
The diagnostics capability will be managed using the Broadband Forum's CWMP as defined in TR-069 [3], TR-098 [4], TR-181 [5] etc. The HGI Residential Profile already contains a significant number of management requirements, including management of the basic diagnostics capability that was contained in that document. That needs to be extended to include management of the more comprehensive set of objects resulting from the new capabilities defined here. This will be the subject of a companion document which needs to be developed by the HGI, and then liaised to the Broadband Forum to do the full specification of the Object Models. An extended set of notifications is also required; these have been covered in the individual Requirements Sections.
R130. The HG MUST support all the management requirements in Section 8.6 of the HGI Residential Profile [1].

9.2 SWEX MANAGEMENT
R131. The HG MUST support all the management requirements in Section 5.4 of the HGI SWEX specification [2].
R132. In the event of any conflict between the management requirements in the Residential Profile and the SWEX specifications, the SWEX management requirements MUST take precedence.

10 REFERENCES
1. Home Gateway Technical Requirements: Residential Profile V1.01 (HGI-RD001-R2.01, 2008)
2. HG Requirements for Software Execution Environment (SWEX) (HGI-RD008-R3, 2011)
3. CPE WAN Management Protocol (TR-069 Amendment 4, Broadband Forum, 2011)
4. Internet Gateway Device Data Model for TR-069 (TR-098 Amendment 2, Broadband Forum, 2008)
5. Device Data Model for TR-069 (TR-181 Issue 2, Amendment 6, Broadband Forum, 2012)
6. UPnP Device Architecture 1.0 (2008)
7. RFC 6762, Multicast DNS (2013)
8. RFC 6763, DNS-Based Service Discovery (2013)
9. Link Layer Topology Discovery (LLTD) Protocol Specification, Microsoft (2010), http://www.microsoft.com/whdc/connect/rally/lltd-spec.mspx
10. Home Gateway QoS Module Requirements (HGI-RD027-R3, 2012)
<End of Document>