Data Center Real User Monitoring


Data Center Real User Monitoring
Central Analysis Server User Guide
Release 12.3

2 Please direct questions about Central Analysis Server or comments on this document to: Customer Support Copyright 2015 Compuware Corporation. All rights reserved. Unpublished rights reserved under the Copyright Laws of the United States. U.S. GOVERNMENT RIGHTS-Use, duplication, or disclosure by the U.S. Government is subject to restrictions as set forth in Compuware Corporation license agreement and as provided in DFARS (a) and (a) (1995), DFARS (c)(1)(ii) (OCT 1988), FAR (a) (1995), FAR , or FAR (ALT III), as applicable. Compuware Corporation. This product contains confidential information and trade secrets of Compuware Corporation. Disclosure is prohibited without the prior express written permission of Compuware Corporation. Use of this product is subject to the terms and conditions of the user's License Agreement with Compuware Corporation. Documentation may only be reproduced by Licensee for internal use. The content of this document may not be altered, modified or changed without the express written consent of Compuware Corporation. Compuware Corporation may change the content specified herein at any time, with or without notice. All current Compuware Corporation product documentation can be found at Compuware, FrontLine, Network Monitoring, Enterprise Synthetic, Server Monitoring, Dynatrace Network Analyzer, Dynatrace, VantageView, Dynatrace, Real-User Monitoring First Mile, and Dynatrace Performance Network are trademarks or registered trademarks of Compuware Corporation. Cisco is a trademark or registered trademark of Cisco Systems, Inc. Internet Explorer, Outlook, SQL Server, Windows, Windows Server, and Windows Vista are trademarks or registered trademarks of Microsoft Corporation. Firefox is a trademark or registered trademark of Mozilla Foundation. Red Hat and Red Hat Enterprise Linux are trademarks or registered trademarks of Red Hat, Inc. J2EE, Java, and JRE are trademarks or registered trademarks of Oracle Corporation. 
VMware is a trademark or registered trademark of VMware, Inc. SAP and SAP R/3 are trademarks or registered trademarks of SAP AG. Adobe Reader is a registered trademark of Adobe Systems Incorporated in the United States and/or other countries. All other company and product names are trademarks or registered trademarks of their respective owners. Local Build: April 1, 2015, 13:00

Contents

Introduction
    Who Should Read This Guide
    Organization of the Guide
    Related Publications
    Acronyms
    Customer Support Information
        Component Updates
        Reporting a Problem
    Documentation Conventions
Chapter 1 Central Analysis Server Overview
    Supported Browsers and Connectivity
        Enabling Java Support in Internet Explorer
    Protocols Supported by CAS
    Internationalization Support
    Integration with Dynatrace Application Monitoring
    Integration with Business Service Management
    Analyzer Groups
Chapter 2 Logging in to the Report Server
Chapter 3 Accessing CAS Reports
    Drilldown Reports
    Links to Master and Slave Servers
Chapter 4 Smart Packet Capture
    How Smart Packet Capture Works
    Smart Packet Capture Feature Summary
    Starting a Packet Capture
        Capture Packets Dialog Box
    Listing Traffic Captures
        Packet Data Mining Tasks Dialog Box

Chapter 5 Configuration Settings
    Software Services and Autodiscovery
        Configuring a User-Defined Software Service on CAS
        Autodiscovery and Multithreaded Decodes
    Tenants
        Managing Tenants on AMD
        Defining a Tenant Rule
        Assigning Tenants on CAS
    Baseline Data
    Localizing the Report Server
        Setting the Operating System to Display East Asian Languages
        Localized User Settings
        Changing the Report Language at Logon
    Sending Reports by Email
        Specifying the SMTP Server for Scheduled Report Mailing
    Saving Reports to PDF or MHT File
    Report Server Security Settings
Chapter 6 Tabular Reports
    Selecting Time Range and Resolution
    Selecting and Filtering Columns to Display
    Sorting, Filtering, and Searching Data
    Drilldown Links
    Status Indicators
        Status Icons
        Health Check Icons
    Aliasing: Customizing Dimension and Metric Names
    Tooltips
Chapter 7 Data Mining Interface
Chapter 8 Reports Menu
    Application Health Status Report
        Application Health Status - Applications
    Alerts Report
    EUE Overview Reports
        Monitored Traffic from a Business Perspective
        Applications, Transactions, and Tiers
        Multi-Tier Reporting
        The Concept of the Front-end Tier
        Multi-Level Hierarchy Reporting
        EUE Overview Reports Structure
        Default Configuration of the Tiers Report
        Front-End Tiers for Application
        Applications Report
        Transactions for Application Report
        Tiers Report

        Sequence Transactions Log Report
        Sequence Transactions Log - Details Report
        Transaction Load Sequence Report
    Citrix/WTS (Presentation) Tier
        Citrix Landing Page Report
        Citrix Servers Report
        Citrix Sites Report
    Client Optimized Network Tier
        Optimized WAN Environment Performance Overview - Sites Report
        Optimized WAN Environment Performance Overview - Software Services Report
        Optimized WAN Environment Performance Overview - Links Report
        WAE - LAN and WAN Comparison Report
        WAE - Software Services Performance Report
    Client Network and Network Tiers
        Sites Report
        Applications for Site Report
        Transactions for Site Report
        Areas Report
        Regions Report
    Synthetic Overview Reports
        Enterprise Synthetic Reports
            Synthetic - Applications
            Synthetic - Transactions
            Synthetic - Application Performance
            Synthetic - Site Performance for Application
            Synthetic - Single Site Status
            Synthetic - Transaction List
            Synthetic - Unavailable Transactions
            Synthetic - Slow Transactions
            Synthetic - Unavailable Transaction Summary
            Synthetic - Metric Charts for Transaction
            Synthetic - Metric Charts for Application
        Synthetic Backbone Reports
            Synthetic Backbone Overview
            Synthetic Backbone Tests
            Synthetic Backbone Tests with Errors
            Synthetic Backbone Pages
            Synthetic Backbone Nodes
    RUM Analysis Reports
        Data Center Analysis Report
        Software Services Report
        Software Services Overview
        Servers Report
        Operations Report
        Tasks Report

        Modules Report
        Services Report
        Breakdown by Server Report
        Metric Charts
        Errors Report
        Error Details Report
        Application Responses Report
        Application Responses - Details
        All Users Report
        Application Performance Affected Users Report
        Network Performance Affected Users Report
        Availability Affected Users Report
        User Activity Report
        User Activity Tabular Report
        List of Users On Demand Report
        Slow Operation Cause Breakdown Report
        Slow Operation Loads Reports
        Operation Load Sequence Reports
        Hit Details
        Location Health Report
        User Health Report
        Top N View Report
    Network Analysis Reports
        Example Network Analysis Reports Usage
    Voice and Video Reports
        VoIP Overview Report
        VoIP Activity - Site Report
        VoIP Activity - Interval Report
        Voice and Video Status - Software Services (Codecs) Report
        Voice and Video Status - Conversations Report
        Voice and Video Status - Activity Report
        Voice and Video Status - Network Topology Oriented Reports
        Voice and Video Status - Signaling Report
        Voice and Video Status - Signaling - Activity Report
        Voice and Video Status - Signaling - Conversations Report
        Voice and Video Status - Signaling - Network Topology Oriented Reports
        Voice and Video Graphical Reports
    RUM Browser Reports
        RUM Browser - User Actions
        RUM Browser - Visits
        RUM Browser - Metric Charts
Chapter 9 Alerts
    Alert System
    Types of Alerts
    Alert States and Notifications
    Means of Alert Delivery

Appendix A Central Analysis Server Data Views
    Central Analysis Server
        Software service, operation, and site data
        Software service, operation, and site baselines
        Application, transaction, and tier data
        Application, transaction, and tier baselines
        Synthetic and sequence transaction data
        Synthetic and sequence transaction baselines
        Internetwork traffic data
        Network link data
        Citrix/WTS hardware data
        Low-significance traffic
    Synthetic Backbone
        Synthetic Monitoring page data
        Synthetic Monitoring test data
    RUM Browser
        RUM Browser data
        RUM Browser baselines
    Tools
        Alert Log
Appendix B Tools Data Views
    Alert Log
    Report Parameters
Appendix C Dynatrace Network Analyzer On-Demand Trace-To-Report Conversion
Appendix D Network Performance Calculations
Appendix E Graphical Explanation of Network Performance Metrics
Glossary
Index


INTRODUCTION

Who Should Read This Guide

This manual is intended for users of Central Analysis Server. It guides you through all the features of CAS and shows you how to interpret the reports, identify problems, and optimize your network and site operation. It also explains CAS statistics.

Organization of the Guide

This user guide is organized as follows:

Central Analysis Server Overview [p. 13]
Provides a high-level description of CAS, including supported protocols and integration with other Dynatrace products.
Logging in to the Report Server [p. 29]
Explains how to connect to your report server.
Accessing CAS Reports [p. 31]
Describes the structure of the report tree and explains how to access different types of reports.
Configuration Settings [p. 49]
Describes configuration actions performed by individual report users and configuration settings related to an individual user, such as server localization settings or password changes.
Tabular Reports [p. 69]
Explains how to use basic report elements and how to customize the reports.
Data Mining Interface [p. 83]
Introduces the concept of Data Mining Reports.
Reports Menu [p. 85]
Describes CAS reports and explains the concept of applications, transactions, and tiers.
Alerts [p. 191]
Explains the concept and mechanics of alerts and describes how to enable sending notification messages by email and SNMP traps.
Central Analysis Server Data Views [p. 197]
Describes all DMI data views provided with CAS and defines all of the dimensions and metrics appearing in each view.
Dynatrace Network Analyzer On-Demand Trace-To-Report Conversion [p. 387]
Explains how to access DNA reports or trace files from the CAS.

Network Performance Calculations [p. 389]
Explains how to analyze network performance problems and how the network-related metrics are calculated.
Graphical Explanation of Network Performance Metrics [p. 393]
Gives a graphical explanation of network performance metrics.

Related Publications

Documentation for your product is distributed on the product media. For Data Center RUM, it is located in the \Documentation directory. It can also be accessed from the Media Browser. Go online for fast access to information about your Dynatrace products. You can download documentation and FAQs as well as browse, ask questions, and get answers on user forums (requires subscription). The first time you access FrontLine, you are required to register and obtain a password. Registration is free.

PDF files can be viewed with Adobe Reader version 7 or later. If you do not have the Reader application installed, you can download the setup file from the Adobe Web site.

Acronyms

Table 1. Acronym Definitions
ADS - Advanced Diagnostics Server
AMD - Agentless Monitoring Device
APM - Application Performance Management product suite
AS - Autonomous System
ASN - Autonomous System Number
B-AMD - Broadband Agentless Monitoring Device
BSM - Business Service Management
CAS - Central Analysis Server
CBA - Console Basic Analyzer
CIDR - Classless Inter-Domain Routing
COS - Compuware Open Server
CSS - Central Security Server
CSV - Comma Separated Values
DATM - Deep Application Transaction Management
DC RUM - Data Center Real User Monitoring
DLM - Distributed License Management

Table 1. Acronym Definitions (continued)
DMI - Data Mining Interface
EUE - End User Experience
NDA - Network Delta Access
PVU - Page View Users
RUM - Real User Monitoring
SAS - Serial Attached SCSI
SI - Subscriber Intelligence
SPAN - Switched Port Analyzer
TCAM - Thin Client Analysis Module
DNA - Dynatrace Network Analyzer
VACL - VLAN Access Control List
WINS - Windows Internet Name Service

Customer Support Information

Dynatrace Community
For product information, go to the Dynatrace Community and click Support. You can review frequently asked questions, access the training resources in the APM University, and post a question or comment to the product forums. You must register and log in to access the Community.

Corporate Website
The Dynatrace corporate website provides a variety of product and support information.

Component Updates

Using the Update Manager tool, you can quickly scan for available updates for all of your installed Data Center RUM components. The update tool can be accessed from the RUM Console by selecting Help > Check for Updates.

NOTE: The Update Manager tool requires that your browser's security settings allow for the execution of Java programs and that the browser is launched from a system with Internet connectivity.

The Update Manager tool scans all of the locally installed components as well as the remote components and devices managed by the RUM Console. After the scan, the information displayed includes the name of the component, the currently detected version, and the current status. If updates are available for any of the components, you can download them directly from our website. A valid account is required to download the updates.

Reporting a Problem

Use these guidelines when contacting APM Customer Support. When submitting a problem, log on to the Dynatrace Support Portal, click the Open Ticket button, and select Data Center Real User Monitoring from the Product list. Refer to the DC RUM FAQ article to learn how to provide accurate diagnostics data for your DC RUM components. Most of the required data can be retrieved using RUM Console.

Documentation Conventions

The following font conventions are used throughout documentation:
Bold - Terms, commands, and references to names of screen controls and user interface elements.
Citation - Emphasized text, inline citations, titles of external books or articles.
Documentation Conventions [p. 12] - Links to Internet resources and linked references to titles in documentation.
Fixed width - Cited contents of text files, inline examples of code, command line inputs or system outputs. Also file and path names.
Fixed width bold - User input in console commands.
Fixed width italic - Placeholders for values of strings, for example as in the command: cd directory_name
Menu Item - Menu items.
Screen - Text screen shots.
Code block - Blocks of code or fragments of text files.

13 CHAPTER 1 Central Analysis Server Overview The Central Analysis Server provides real-time access to information about performance and usage of key business applications. It monitors user session performance, application performance, and server performance in different configurations, with the purpose of identifying when and where problems occur and how to address them. Analysis options give insight into business application performance on the transaction and operation level. The information is aligned with the business structure of the organization (such as branches, working groups, and business units) and is not dependent on the infrastructure components. It is delivered via comprehensive, interactive, service-oriented reports, and via event-driven alerts that inform you about important events such as performance degradation or traffic pattern anomalies. CAS reports enable you to see a complete view of your application performance. The report structure reflects business organization priorities and allows for quick identification of the root causes of problems. The CAS is equipped with powerful data mining and report building tools for creating new or customized reports quickly and easily. The CAS uses the measurement data provided by the passive network monitoring devices referred to as Agentless Monitoring Devices or Network Monitoring Probes, and by synthetic network monitoring agents referred to as Enterprise Synthetic Agents. In real user monitoring, one or more AMDs or Network Monitoring Probes are attached to the monitored network near the core switch of the data center or near VPN access switches. The AMDs and Network Monitoring Probes collect the data from the monitored network, preprocess it, and deliver it to the report server. Each report server can handle a number of AMDs and Network Monitoring Probes. The report server processes the received data further, stores it in a database, and then generates user-friendly reports. 
The reports can then be viewed and analyzed regularly or only when a network problem occurs. Smart Packet Capture functionality enables you to analyze and diagnose the cause of a known and observed network problem by examining detailed packet trace data. Once a monitoring system has detected a network problem, the Smart Packet Capture process can then take over to drill down to the root cause of the issue.

The CAS provides:
Web analysis and reporting
Decryption and analysis of HTTPS traffic

Monitoring of SSL errors
Analysis of middleware transactions (XML, SOAP, SAP RFC, and others)
Analysis of various database protocols
Analysis of the Oracle Forms protocol
Analysis of Microsoft Exchange and SMTP protocols
Analysis of a selection of SAP protocols
Thin client (ICA) protocol analysis
VoIP analysis
VPN analysis
WAN analysis
Enterprise applications analysis and reporting
Real-time reports, trending reports, and baseline calculations
Detection of abnormal application usage and network usage patterns
User diagnostics
Report access management, publication, and sharing
Customizable reports

For more information, see Protocols Supported by CAS [p. 17].

Supported Browsers and Connectivity

DC RUM users can access the report servers through the supported web browsers. The following browsers are supported:

Microsoft Internet Explorer: versions 9, 10, and 11. JavaScript and HTTP 1.1 must be enabled.
IMPORTANT: When using Internet Explorer, do not use Compatibility View (MSIE 10 and older) or Document Mode emulating previous releases (MSIE 11). To see if the browser is set to Compatibility View or to a Document Mode other than the default, press F12 to access MSIE Developer Tools. Data is handled differently in the HTML. Internet Explorer may experience performance degradation when viewing reports that contain many columns or tooltips.
Mozilla Firefox: latest stable release. JavaScript, cookies, and HTTP 1.1 must be enabled.
Google Chrome: latest stable release. JavaScript and cookies must be enabled.

No tablet or smart phone browsers are supported. Before using the report server, you may have to adjust the Java, JavaScript, and HTTP 1.1 settings in your browser.
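Report servers are normally accessed from a browser, but report data can also be fetched by scripts over HTTPS. A minimal sketch of enforcing a minimum TLS version in such a client follows; the host name is a placeholder, and while this release documents TLS 1.0/1.1 on the browser side, a modern scripted client should set a higher floor:

```python
import ssl

# Build a client-side TLS context with certificate verification enabled.
ctx = ssl.create_default_context()

# Refuse anything older than TLS 1.2. (The 12.3-era documentation mentions
# TLS 1.0/1.1 for browsers; a current client should prefer a newer floor.)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# The context would then be passed to an HTTPS client, for example:
# urllib.request.urlopen("https://cas.example.com/", context=ctx)
print(ctx.minimum_version)
```

The same context can be reused for every request to the report server, so the TLS floor is enforced once in one place.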

NOTE: Some configuration screens depend on the web browser running a supported release of the Java plug-in. The Java Web Start-based RUM Console requires JRE 8 installed on the desktop, and it will run only on Windows and a 32-bit JRE. If you use 32-bit and 64-bit browsers interchangeably, you need a Java plug-in for each browser. The Windows 64-bit operating system comes with 32-bit and 64-bit Internet Explorer browsers, and the 32-bit version runs as the default.

If JavaScript is not enabled, the top menu of the report server will not be visible and you will see the following message instead:
This product uses JavaScript. Please make sure JavaScript is enabled in your browser settings.

Because of Internet Explorer security policy, you may encounter some issues when executing Java applets. You need to modify the default settings for Internet Explorer 9 to run applets. For more information, see Enabling Java Support in Internet Explorer 9 [p. 16].

Adobe Flash Player must be installed on the client machine to enable drilldowns from autodiscovered software services on the CAS to the RUM Console to create user-defined software services.

ADS and CAS can be accessed using HTTP or, over secured connections, using HTTPS. We recommend secure access with a browser that supports TLS v1.0 or 1.1. For more information, see Configuring Report Server to Use Private Keys and Certificates in the Data Center Real User Monitoring Administration Guide.

Enabling JavaScript and Support for HTTP 1.1 in a Browser

Internet Explorer

To enable JavaScript in Internet Explorer:
1. Select Tools > Internet Options from the top menu in your browser.
2. Select the Security tab.
3. Click Custom level to display the Security Settings dialog box.
4. Enable Active scripting in the list of options.

To enable HTTP 1.1 in Internet Explorer:
1. Select Tools > Internet Options from the top menu in your browser.
2. Select the Advanced tab.
3. Scroll within the Settings list to the section titled HTTP 1.1 settings and ensure that Use HTTP 1.1 is selected.
4. Click OK and restart your browser.

To set the browser mode:
1. On the Internet Explorer 10 main menu bar, select Tools > F12 developer tools.

2. On the F12 developer tools menu bar, select Browser Mode and click a non-Compatibility View option.

Mozilla Firefox

To enable JavaScript in Firefox:
1. Select Tools > Options from the top menu in your browser.
2. Select the Content tab.
3. Select Enable JavaScript.

To enable HTTP 1.1 in Firefox:
1. Open the browser and, in the address bar, type about:config and press [Enter]. The browser displays a list of current preferences.
2. Scroll to the network.http.version preference and make sure its value is 1.1. If the value is other than 1.1, double-click that row, change the value to 1.1, click OK, and restart your browser.

Google Chrome

To enable JavaScript in Chrome:
1. Select Settings from the top menu in your browser.
2. Click Show advanced settings.
3. In the Privacy section, click Content settings.
4. Select Allow all sites to run JavaScript (recommended).

Enabling Java Support in Internet Explorer 9

By default, because of security policy, Java applets are not supported in Internet Explorer 9.

Before You Begin: Although it is not a prerequisite for this procedure, it is assumed that you have JVM version 1.5 or higher installed on the computer from which you intend to connect to CAS or ADS.

To enable support for Java applets:
1. Add CAS (and ADS, if applicable) to Trusted sites.
   a. In Internet Explorer, go to the Tools > Internet Options menu. The Internet Options dialog box appears.
   b. Select the Security tab, then select Trusted sites.
   c. Click Sites. The Trusted sites window appears.
   d. Enter the URL to your CAS and ADS, and click Add.
   e. Click Close when all URLs are entered.

2. Customize security settings for trusted sites in the Security Settings - Trusted Sites Zone window.
   a. Click Custom level... to open the Security Settings dialog box.
   b. Scroll to the ActiveX controls and plug-ins section and disable Allow ActiveX Filtering.
   c. Scroll to the Scripting section and enable Scripting of Java applets.
   d. Click OK. A warning window appears.
   e. Click OK to confirm your change.
3. Click OK to close the Internet Options window and apply all your changes.

Protocols Supported by CAS

Table 2. Protocols Supported by CAS

Analyzer: Cerner over TCP
Protocol/Version: Cerner over TCP; Cerner 2005, 2007
Example application: Cerner Millennium

Analyzer: Cerner over MQ
Protocol/Version: Cerner over MQ; Cerner 2007; MQ: WebSphere 6.0, WebSphere V7.0
Limitations (MQ 6.0): Traffic for channels with encryption is not monitored. Traffic for channels with header compression is not monitored. MQGET message segmentation is not supported.
Additional MQ v7.0 limitations: For the new v7.0 GET-GET_REPLY operations, only TSH and Message Descriptor headers are processed. No correlation is extracted from the new v7.0 MQ_GET segments. Bytes 8-15 of the TSHM header are not processed.
Example application: Cerner Millennium

Analyzer: Corba Epic
Protocol/Version: Corba Epic over TCP; Summer 2010
Example application: EpicCare EMR

Analyzer: DNS
Protocol/Version: DNS; RFC 1035
Limitations: UDP-based DNS only. No support for multi-query requests.

Analyzer: DRDA (DB2)
Protocol/Version: DRDA; DRDA version 2
Example application: IBM DB2 Universal Database 8.1

Analyzer: Exchange/RPC over HTTP, Exchange/RPC over HTTPS
Protocol/Version: Exchange; MS Exchange 2003, 2007, 2010
Limitations: Encryption at application level is reported as Encrypted transaction.
Example application: Microsoft Exchange Server 2003

Analyzer: Generic
Protocol/Version: TCP; RFC 793

Analyzer: Generic (with transactions)
Protocol/Version: TCP; RFC 793

Analyzer: HTTP
Protocol/Version: HTTP 1.1, 1.0 (RFC 2616)
Limitations: Advanced analysis for GET/POST methods by default. For all other methods, such as PUT, every hit is reported separately; these methods require a manual configuration change to enable advanced analysis. No pipelining.

Analyzer: IBM MQ, IBM MQ over SSL
Protocol/Version: IBM MQ; WebSphere 6.0, WebSphere MQ V7.0
Limitations (MQ 6.0): Traffic for channels with encryption is not monitored. Traffic for channels with header compression is not monitored. MQGET message segmentation is not supported.
Additional MQ v7.0 limitations: For the new v7.0 GET-GET_REPLY operations, only TSH and Message Descriptor headers are processed. No correlation is extracted from the new v7.0 MQ_GET segments. Bytes 8-15 of the TSHM header are not processed.
Notes: Traffic between MQ servers (Manager to Manager) and between MQ clients and MQ servers can be analyzed. Dynamic queue names are recognized. Persistent TCP sessions are supported.

Analyzer: ICA (Citrix)
Protocol/Version: Citrix 4 and 4.5
Limitations: Username extraction and counting is limited to ICA traffic with Basic or None encryption levels. When enhanced encryption is enabled, traffic will be considered an encrypted operation.
Example application: Citrix Metaframe Presentation Server

Analyzer: ICA (Citrix)
Protocol/Version: Citrix 5.0, 6.0, 6.5, 7.0, 7.1, and 7.5
Limitations: Username extraction and counting is limited to ICA traffic with Basic or None encryption levels. When enhanced encryption is enabled, traffic will be considered an encrypted operation.
Example application: Citrix XenApp

Analyzer: ICMP
Protocol/Version: ICMP; RFC 792

Analyzer: Informix
Protocol/Version: Informix; IDS 7.31, IDS 9.40
Example application: Informix Dynamic Server

Analyzer: IP
Protocol/Version: IP; RFC 791

Analyzer: Jolt (Tuxedo)
Protocol/Version: Jolt 8.1
Example application: BEA Tuxedo

Analyzer: Kerberos
Protocol/Version: SMB; Microsoft Kerberos 5
Example application: All Microsoft Windows systems that use the SMB 1.0 protocol. (Tested on Windows 2000 and Windows XP.)

Analyzer: LDAP, LDAPS
Protocol/Version: LDAP; RFC 4511
Limitations: Services in which Kerberos encryption is used are not supported.
Example application: Applications using LDAP services where plain text authentication is used; all Java applications using LDAP.

Analyzer: MySQL
Protocol/Version: MySQL 4.x and 5.x

Analyzer: NetFlow
Protocol/Version: NetFlow version 5, version 9, IPFIX (supported excluding flexible NetFlow)
Limitations: You should configure NetFlow to export only a subset of fields and only for ingress traffic.
Example application: Cisco router

Analyzer: Oracle
Protocol/Version: Net8; 9i, 10g R1, 10g R2, 11g R1, 11g R2, and 12c R1
Limitations: No support for TDE encryption. If TDE encryption is enabled, the Oracle analyzer stops reporting performance data. Oracle T3/TS3 is not supported.
Example application: Oracle 9i, 10g, 11g

Analyzer: Oracle Forms over HTTP, Oracle Forms over TCP, Oracle Forms over SSL, Oracle Forms over HTTPS
Protocol/Version: Oracle Forms 6i, 9i, 10.1, 11g; Oracle Forms 6i. SSL support.
Example application: Oracle Application Server 9i, 10i, 10g R2, 11g

Analyzer: RMI
Protocol/Version: BEA T3, JBoss RMI, SUN RMI

Analyzer: SAP GUI
Protocol/Version: SAP GUI protocol (DIAG) 6.40, 7.10
Example application: SAP GUI for Java 7.10rev8, SAP GUI for Windows 7.10, SAP GUI for Windows 7.30, SAP GUI Console, NetWeaver Business Client 4.0

Analyzer: SAP GUI over HTTP
Protocol/Version: HTTP 1.1, 1.0 (RFC 2616). Support for SAP 7.01 SP3.
Example application: SAP GUI for HTML

Analyzer: SAP GUI over HTTPS
Protocol/Version: HTTPS; HTTP 1.1 encapsulated in SSL, SSL 3.0, TLS1.0 (RFC 2246), TLS1.1 (RFC 4346), and TLS1.2 (RFC 5246). Support for SAP 7.01 SP3.
Example application: SAP GUI for HTML

Analyzer: SAP RFC
Protocol/Version: SAP RFC
Example application: SAP PI, SAP BW, Excel plugin for SAP

Analyzer: Siebel
Protocol/Version: HTTP 1.1, 1.0 (RFC 2616); HTTP 1.1 encapsulated in SSL, SSL 3.0, TLS1.0 (RFC 2246), TLS1.1 (RFC 4346), and TLS1.2 (RFC 5246). SSL support.
Example application: Siebel CRM
Limitations: A special parameter configuration is recommended for analyzing Siebel applications. For more information, see Global Settings for Recognition and Parsing of URLs in the Data Center Real User Monitoring Web

Application Monitoring User Guide.

Analyzer: SMB
Protocol/Version: SMB; SMB 1.0, 2.0
Example application: All Microsoft Windows systems that use the SMB protocol.

Analyzer: SMTP
Protocol/Version: SMTP, ESMTP; RFC 821, RFC 1891
Limitations: Supported commands: HELO/EHLO, MAIL FROM, RCPT TO, DATA, QUIT, RSET, VRFY, HELP, EXPN, NOOP (no support for SEND, SOML, SAML, TURN). Multi-part attachments are always saved in one piece (no segmentation is preserved). MS Exchange Server native RPC protocol and POP3 (e-mail download) are not supported.

Analyzer: SOAP over HTTP, SOAP over HTTPS
Protocol/Version: SOAP 1.1 and 1.2. SSL support.
Limitations: Support for Remote Procedure Calls only.
Example application: Any business application that uses SOAP for data exchange over the network.

Analyzer: SSL
Protocol/Version: SSL; decrypted HTTPS; HTTP 1.1 encapsulated in SSL; SSL 3.0, TLS1.0 (RFC 2246), TLS1.1 (RFC 4346), TLS1.2 (RFC 5246)
Limitations: Advanced analysis for GET/POST methods by default. For all other methods, such as PUT, every hit is reported separately; these methods require a manual configuration change to enable advanced analysis. No pipelining. 56-bit DES is not supported. Only the RSA Key Exchange Algorithm is supported. Only a 1024-bit SSL key is supported on CryptoSwift SSL cards. OpenSSL supports 1024-bit, 2048-bit, 4096-bit and 8192-bit keys. nCipher cards support 1024-bit, 2048-bit, 4096-bit and 8192-bit keys. Cavium NITROX XL FIPS cards support 1024-bit and 2048-bit keys.

Analyzer: TCP
Protocol/Version: TCP; RFC 793

Analyzer: TDS
Protocol/Version: TDS 5.0, 7.0, 8.0
Example application: MS SQL Server 7.0, 2000, 2005, 2008, 2008R2; Sybase 10.0, Sybase Adaptive Server Enterprise (ASE) 15

Analyzer: UDP
Protocol/Version: UDP; RFC 768

Analyzer: VoIP
Protocol/Version: RTP, RTCP, SIP, H323; codecs G.726, GSM, G. , G.729, G.711 (PCMA), G.711 (PCMU), G. (ACELP), G. (MP-MLQ), LPS
Limitations: Conference calls, secure protocols, and forked calls (multiple phones ringing at the same time) are not supported. The AMD must see both signaling and media on the same AMD.

Analyzer: XML, XML over SSL, XML over HTTP, XML over HTTPS
Protocol/Version: XML; W3C recommendation 1.0 and 1.1. Encapsulated in TCP, in HTTP, and in HTTPS. SSL support.

Analyzer: XML over MQ, XML over MQ over SSL
Protocol/Version: XML: W3C recommendation 1.0 and 1.1; MQ: WebSphere 6.0, WebSphere V7.0. XML encapsulated in MQ.
Limitations (MQ 6.0): Traffic for channels with encryption is not monitored. Traffic for channels with header compression is not monitored. MQGET message segmentation is not supported.
Additional MQ v7.0 limitations: For the new v7.0 GET-GET_REPLY operations, only TSH and Message Descriptor headers are processed. No correlation is extracted from the new v7.0 MQ_GET segments. Bytes 8-15 of the TSHM header are not processed.

Internationalization Support

The Central Analysis Server supports international environments on both ends: report server and client browser.

Localized Server Support

The user interface of the report server is rendered in the following languages:
English
Japanese
Korean
Chinese simplified

English is the default language setting. To support other languages, install the required font set for the target language and customize the regional options accordingly.

Character Encoding Support for Monitored Traffic

By default, only UTF-8 encoding is supported and support for other encodings is turned off. Turn UTF-8 off selectively for HTTP processing and XML processing through the configuration options in the RUM Console. Central Analysis Server recognizes the following character encodings:

HTTP and XML/SOAP
ISO, ISO
Unicode (UTF-8)
UTF-16 (XML/SOAP only)
Japanese: EUC-JP, Shift_JIS, Unicode (UTF-8)
Korean: EUC-KR, ISO-2022-KR, Unicode (UTF-8)

- Chinese: Big5, Big5-HKSCS, EUC-TW, GB18030, GB2312, GBK, HZ, ISO-2022-CN, Unicode (UTF-8)

MQ
- Unicode (UTF-8)

Database/SQL (Oracle, TDS, DRDA, Informix)
- UTF-8 (all DB analyzers)
- UTF-16 (TDS analyzer only)
- EBCDIC (DRDA analyzer only)

DB statements that were not sent in a supported encoding are encoded such that all non-ASCII characters are replaced with their hexadecimal values in the form %XX, where X is a hexadecimal digit.

SMB and Kerberos
Character encoding in monitored traffic does not affect SMB and Kerberos analyzer operations.

Jolt
Character encoding in monitored traffic does not affect Jolt analyzer operations.

Generic TCP
Character encoding in monitored traffic does not affect generic TCP analyzer operations.

In addition to international character support in monitored traffic, locale-specific characters can also be used in the AMD configuration and in the names defined in the protocols.xml file. If you use locale-specific characters in the configuration files, save the files in UTF-8 encoding. Note that turning on internationalization support adversely affects AMD performance; the degree of degradation depends on the nature of the monitored traffic.

Integration with Dynatrace Application Monitoring

DC RUM analytics can be enhanced with Dynatrace Application Monitoring's unique ability to provide detailed information on the performance of operations in business applications monitored by Dynatrace Application Monitoring agents. If the same applications are monitored using DC RUM, the two components can be integrated to provide higher-value application performance analytics. The Dynatrace Application Monitoring server can also provide the CAS with User Experience Management data, that is, performance data collected by the JavaScript Agent as a result of the user's browser instrumentation. This data can then be accessed through CAS reports and a dedicated data view.
This integration makes it possible to navigate from CAS reports to the associated PurePath data in the Dynatrace Application Monitoring Client, and also to use Dynatrace Application Monitoring User Experience Management (UEM) as the data feed for CAS. To make this work, configure the AMD for the CAS (and, optionally, for the ADS) to monitor the same HTTP traffic that is generated by the business applications being monitored by Dynatrace Application Monitoring agents.

Drilldowns to Dynatrace Application Monitoring

Central Analysis Server reports provide the following drilldowns to Dynatrace Application Monitoring:

Software Services Report [p. 148]
- Software service column
- Slow operations column

Servers Report [p. 151]
- Server name column
- Slow operations column

Operations Report [p. 153]
- Operation column
- Slow operations column

Breakdown by Server Report [p. 158]
- Operation column
- Slow operations column

All Users Report [p. 161], Application Performance Affected Users Report [p. 162], Network Performance Affected Users Report [p. 162], Availability Affected Users Report [p. 163]
- Slow operations column
- User name column

Slow Operation Loads Reports [p. 168]
- Page begin time column

Operation Load Sequence Reports [p. 170]
- Component URL in the Load Sequence table
- Operation in the Page Details table

User Activity Tabular Report [p. 164]
- Slow operations column
- Operation (other than all) column
- Client IP address column

Enterprise Synthetic Reports [p. 131]
- Application column
- Transaction column
- Client Site column

Integration with Business Service Management

DC RUM can be integrated with Business Service Management, so that DC RUM data can be accessed through the unified interface.

Analyzer Groups

Traffic monitored by the AMD and reported by the CAS can be grouped according to the type of the transport layer or application layer; for example, UDP, TCP, and HTTP. Each layer type has a corresponding software component called an analyzer. Associated analyzers are grouped for general user convenience:

Cerner
Based on the Cerner over TCP and Cerner over MQ analyzers.

Database
Based on the DRDA (DB2), Informix, Oracle, SQL Server Resolution Protocol, and TDS analyzers.

Datacenter Infrastructure
Based on the DHCP, DNS, Generic (with transactions), ICMP, IP, Kerberos, Netflow, Non IP, SMB, SMTP, and UDP analyzers.

DHCP
Based on the DHCP analyzer.

Epic
Based on the Epic over TCP analyzer.

Exchange
Based on the Exchange analyzer.

Generic
Based on the TCP analyzer.

IBM MQ
Based on the IBM MQ analyzer.

ICA (Citrix)
Based on the ICA (Citrix) analyzer.

Jolt (Tuxedo)
Based on the Jolt (Tuxedo) analyzer.

LDAP
Based on the LDAP and LDAPS analyzers.

Middleware
Based on the Jolt (Tuxedo), SOAP over HTTP, SOAP over HTTPS, XML, XML over HTTP, XML over HTTPS, XML over MQ, XML over SSL, and SAP RFC analyzers. Measurements for these types of traffic can be viewed on one tier.

Oracle Forms
Based on the Oracle Forms over HTTP, Oracle Forms over HTTPS, Oracle Forms over SSL, and Oracle Forms over TCP analyzers.

SAP
Based on the SAP GUI, SAP GUI over HTTP, and SAP GUI over HTTPS analyzers.

SOAP
Based on the SOAP over HTTP and SOAP over HTTPS analyzers.

VoIP
Based on the VoIP analyzer.

Web
Based on the HTTP, SSL, and SSL Decrypted analyzers.

XML
Based on the XML, XML over HTTP, XML over HTTPS, XML over MQ, and XML over SSL analyzers.


CHAPTER 2 Logging in to the Report Server

To connect to your report server, enter the server address in your browser. For a new installation, only the system administrator account created during installation is active. Log in with that account name and password. This lets you perform basic configuration tasks such as setting up business units (applications and transactions). You should then define additional users. Access the server as a system administrator only for configuration purposes. For more information, see Adding a User in the Data Center Real User Monitoring Administration Guide.

For information on recommended browsers, see Supported Browsers and Connectivity [p. 14].


CHAPTER 3 Accessing CAS Reports

CAS reports are accessed through a web browser. The default report you see after logging on to the CAS is Application Health Status. The top menu bar, shown in Figure 1. CAS Top Menu Bar [p. 31], gives you instant access to DMI-based reports, the DMI tool for building your own reports, and CAS settings. The options available in the menu depend not only on the license type but also on the user rights.

Figure 1. CAS Top Menu Bar

CAS reports are organized as a coherent workflow. In the top-level reports, you can use tabs and links nested in the reports to change the perspective of the monitored data or drill down to details for a selected dimension. For more information, see Drilldown Reports [p. 31]. Drilldown links enable you to navigate down the hierarchy tree to obtain more details. Report tabs and section tabs are used to access different sets of data on the same hierarchy level. For example, you can choose the Applications, Tiers, or Sites view of your monitored environment. The currently selected tab is highlighted with a dark blue background. In Figure 2. Report Tabs [p. 31], the Applications tab is selected.

Figure 2. Report Tabs

Drilldown Reports

CAS lower-level reports can be reached through links incorporated in some entity names (such as software service names) and in numeric data in some report columns (such as the Unique users or Errors columns).

Such links are easy to find by moving the mouse pointer over the report table: if a given name or number is underlined while the cursor is hovering over it, it is a link to another report. For example:

- The links embedded in software service names usually lead to the Servers report for the selected software service.
- The links embedded in server names usually lead to the Operations report for a particular server.
- The links embedded in module names lead to the Tasks report.
- The links embedded in region names lead to the Areas report for a given region.
- The links embedded in area names lead to the Sites report for a given area.
- The numbers of users, clients, or hosts are links to reports that list all users.
- The numbers of errors are links to the Errors report for a selected application, tier, software service, or site.

Additionally, a few other drilldown reports are accessible under the Multilink icon for the selected dimension or metric:

- Metric Charts opens the Metric Charts - Performance report.
- Application Performance Affected Users opens the Application Performance Affected Users report.
- Network Performance Affected Users opens the Network Performance Affected Users report.
- Availability Affected Users opens the Availability Affected Users report.

The drilldown reports enable you to modify the level of detail on your report. For example, if you choose the Software Services report from the top menu, then click the name of a software service in the Software service column, and then, on the lower-level report, click a particular server name in the Server column, you will get the Operations report for the selected server and, at the same time, for the selected software service.

Opening Drilldown Links in a New Tab

In some columns, when you hover over a drilldown link, the browser right-click context menu does not display the option to open the link in a new tab or window.
In such cases, press the [Ctrl] or [Shift] key when clicking the link to open the drilldown report in a new tab or window, respectively.

Links to Master and Slave Servers

CAS reports generated by a number of different servers belonging to a group called a server cluster can be accessed from each server belonging to the cluster. The server cluster can be managed from one designated member of the cluster, referred to as the master server. If the server cluster feature is enabled, the Reports > Other Data Servers menu contains links to the master server and slave servers. Navigation between reports is the same on all servers in the cluster.

CHAPTER 4 Smart Packet Capture

Smart Packet Capture functionality enables you to analyze and diagnose the cause of a known and observed network problem by examining detailed packet trace data. Once a monitoring system has detected a network problem, the Smart Packet Capture process can take over to drill down to the root cause of the issue. To support this, DC RUM provides not only near real-time performance monitoring, but also convenient and efficient access to historical packet-level data (also referred to as back-in-time data). When a subtle or intermittent problem occurs, network administrators can quickly go back in time to discover the root cause. This verification process typically involves checking the end user's recent network activity and looking up any operations recently run by that host, to discover, for example, why the network is slow for that user, what is causing high RTT, or what is causing packet drops.

NOTE: Smart Packet Capture supersedes functionality formerly offered by the Capture Packets screen, a limited-scope diagnostics tool that has been retired.

The information that follows offers only a brief overview of Smart Packet Capture functionality. For full information, see the Smart Packet Capture User Guide.

How Smart Packet Capture Works

To use Smart Packet Capture, apply fault domain isolation, describe and run the capture, and examine the results.

1. Use fault domain isolation to focus on a particular user.
2. In a user-centric CAS report, find that user and select Links > Capture packets from the context menu. The Capture packets screen opens with the capture filters filled in based on the user profile in the CAS, such as a conversation address pair.

For more information, see Starting a Packet Capture [p. 36].

3. Describe the capture. Using these filters as the boundary for the collection, you can decide to retrieve the data immediately (start a capture now), in the future (schedule a capture to start at a specified time), or retrieve all relevant historical data (only available when used in conjunction with the EndaceProbe).
4. Schedule or run the capture. The packet capture is started either immediately or at the scheduled time, and the capture data is stored on the CAS with a link to the in-context report view from the CAS.

5. Examine the capture. After the data retrieval is completed, you can view the data in DNA (the default) or a third-party application such as Wireshark.

Smart Packet Capture Feature Summary

The Smart Packet Capture subsystem features automated multi-point collection, high-capacity continuous capture, and the ability to view and analyze results.

Automated multi-point collection
- Takes data from multiple sources (AMD and EndaceProbe)

- Automatically captures and displays traces

High-capacity continuous capture
- High-speed interface access
- Lossless capture architecture
- Storage scale from minutes to days

View and analyze capabilities
- Contextual views from CAS to DNA
- Expert capabilities in DNA
- Available as a capture file for third-party analyzers
- Real-time capture
- Back-in-time analysis

Starting a Packet Capture

Use the Capture Packets dialog box to configure a packet capture.

1. In the CAS, open a standard report that displays the user data. For example:
   a. From the CAS main menu bar, select Reports > EUE Overview > Tiers.
   b. In that report, in the Unique and affected users (performance) column, click in a user row and select one of the report options from the Links menu.
   For more information, see Reports Available for Traffic Capture in the Data Center Real User Monitoring Smart Packet Capture User Guide.
2. In the report that opens, find the user (row) for whom you want to capture data.
3. In the User name column of that row, click the arrow and select Links > Capture packets from the context menu.

The Capture Packets screen appears.

NOTE: If you get an error message at this point, ensure that you have configured all of the components needed for packet capture:
- In the RUM Console, you have added an EndaceProbe and a CAS.
- You have assigned the EndaceProbe to the CAS.
- The devices are active. Check device status.
For more information, see Adding and Editing Devices in a Report Server Configuration in the Data Center Real User Monitoring Smart Packet Capture User Guide.

Set Traffic Filters

Use the Traffic Filters tab to narrow the range of your capture to traffic between two dates and times, limit the duration of the capture, and review the filter settings.

Figure 3. Capture Packets, Traffic Filters Tab

4. Adjust the Time range to focus the capture on the traffic you want to see and to reduce the potential size of the capture.

Time range
- To capture traffic during a specific time range, select Fixed date and time and set the Start time and Stop time.
- To capture traffic for a certain amount of time starting from when you click OK, select Period relative to the current date and set Duration to the number of seconds, minutes, or hours you want to capture data.

The date range is initially populated from the CAS report, but you can adjust it in the Capture Packets dialog box.

Related messages:

"The selected time range extends into the past. The AMD does not support back-in-time captures, so all AMD data sources will be ignored."
To fix this, add data sources that do support back-in-time captures.

"The selected data sources do not support back-in-time captures. Change the time range or add data sources that support back-in-time captures."
To fix this, either change the time range so that the capture does not require back-in-time data sources, or add data sources that support back-in-time captures.

"Too many concurrent recordings for this time range."
NOTE: This occurs when the number of concurrent connections to one AMD exceeds the maximum, which by default is 10. This value is defined in userprop-nf.properties (SYSTEM.NF_AMD_MAX_NUMBER_OF_CONCURRENT_TASKS). You cannot schedule a new task when this maximum is exceeded. Assuming the default value (10), change the schedule so that you do not have more than 10 tasks scheduled to run at the same time. You can schedule the new task to run after one or more of the already scheduled tasks have finished, or you can reschedule or cancel one of the previously scheduled tasks so that there are no more than 10 scheduled to run after you create this task.
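The note above names a CAS property; as a sketch, the corresponding line in userprop-nf.properties would look like the following (only the property name and its default of 10 come from this guide; the comments are our annotation):

```properties
# userprop-nf.properties (CAS configuration)
# Maximum number of concurrent capture connections to a single AMD.
# The documented default is 10.
SYSTEM.NF_AMD_MAX_NUMBER_OF_CONCURRENT_TASKS=10
```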
In the case of a Client from dimension, the filter is generated for the most active client IP address (most total bytes), not for all client IP addresses.

5. Review the TCP filter settings and adjust the filters as needed. The filters are initially populated from the CAS report filters (converted to tcpdump filter format), but you can adjust them in the Capture Packets dialog box. If you edit this field, be sure to conform to the tcpdump filter format. If a filter setting is invalid, an error message is displayed and it is not possible to submit the task.
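The filter field accepts standard tcpdump (BPF) expression syntax. As an illustration only, a filter scoping a capture to a single client-server conversation could look like this (the addresses and port are hypothetical placeholders, not values generated by the CAS):

```
host 10.1.2.3 and host 10.9.8.7 and tcp port 443
```

Any valid tcpdump primitive (host, net, port, protocol qualifiers, combined with and/or/not) can appear here; if a host shown on the report is an aggregate, you may need to substitute the real server address by hand.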

Set Data Sources

Use the Data Sources tab to select the devices that will be used to gather data for this task. By default, network packets are gathered from all available data sources.

Figure 4. Capture Packets, Data Sources Tab

6. Click the Data Sources tab.
7. Select or clear the check box for each source in the list of potential data sources. By default, all EndaceProbes available on the selected CAS are used, but you can clear the check boxes of probes you don't want to query for this task. If your probe is not listed here, ensure that it has been configured and has been added to the RUM Console's list of devices. For more information, see EndaceProbe Configuration Requirements in the Data Center Real User Monitoring Smart Packet Capture User Guide and Adding an EndaceProbe to the Devices List in the Data Center Real User Monitoring Smart Packet Capture User Guide.

Set Advanced Options

Use the Advanced Options tab to set file-related parameters.

Figure 5. Capture Packets, Advanced Options Tab

8. Click the Advanced Options tab.
9. (AMD only) Select Remove encapsulation to remove IP encapsulation from the capture. Default: selected.
10. (AMD only) Change Maximum file size (AMD) to adjust the maximum file size for the capture. Default: 500 MB.
11. Select Secure files with password (and provide the password twice) to password-protect all task files stored on the CAS. If you select this option, you will need to provide this password to open the trace in DNA or another application.
12. Click OK to submit the task to the scheduler and display the list of scheduled tasks.

Review the Scheduled Tasks

Use the Packet Data Mining Tasks dialog box to list the captures previously made, captures still in progress, and captures scheduled to run in the future. This list displays tasks from all users, not only the current user.

Figure 6. Smart Packet Capture Task Schedule

13. Verify that your task is listed and that the status is appropriate.

Scheduled. Task will start: {time}
The task has been created but not yet submitted to a device. Hover your mouse pointer over the information icon to see when the task is scheduled to start.

Capturing traffic
The task has been submitted to a device and traffic is being captured.

Downloading
Trace files are being downloaded to a CAS. Current task progress is shown (overall and per device).

Completed
The task is complete and the trace archive is available for analysis.

14. Verify that Disk usage is within safe bounds.

Capture Packets Dialog Box

Use the Capture Packets dialog box to configure a traffic capture. It is available to users to whom the Packet Capture User role is assigned. The task name, estimated result size, and free space are displayed at the top and bottom of all tabs.

Task name
The name of the task as it will appear in the list of tasks. Set this to a name that is useful for searching and sorting.

Description
The description of the task as it will appear in the list of tasks.

Estimated task size
The estimated total size of the capture, combining all data sources.

Free space
The storage space available for the capture. If this is not larger than the estimated task size, there will not be enough space available to save the capture.
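The free-space precondition described above amounts to a simple comparison; a minimal sketch (the function name and units are ours, not part of the product):

```python
def can_save_capture(estimated_task_size_mb: float, free_space_mb: float) -> bool:
    """Return True only if available storage exceeds the estimated
    total size of the capture (all data sources combined)."""
    return free_space_mb > estimated_task_size_mb

# A 500 MB capture fits in 2 GB of free space, but not in 300 MB.
print(can_save_capture(500, 2048))  # True
print(can_save_capture(500, 300))   # False
```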

OK, Cancel
Click OK to schedule the task and display a list of tasks with the filter set to your user name. Click Cancel to discard the task submission.

Traffic Filters

Use the Traffic Filters tab to narrow the range of your capture to traffic between two dates and times, limit the duration of the capture, and review the filter settings.

Time range
- To capture traffic during a specific time range, select Fixed date and time and set the Start time and Stop time.
- To capture traffic for a certain amount of time starting from when you click OK, select Period relative to the current date and set Duration to the number of seconds, minutes, or hours you want to capture data.

The date range is initially populated from the CAS report, but you can adjust it in the Capture Packets dialog box.

Related messages:

"The selected time range extends into the past. The AMD does not support back-in-time captures, so all AMD data sources will be ignored."
To fix this, add data sources that do support back-in-time captures.

"The selected data sources do not support back-in-time captures. Change the time range or add data sources that support back-in-time captures."
To fix this, either change the time range so that the capture does not require back-in-time data sources, or add data sources that support back-in-time captures.

"Too many concurrent recordings for this time range."
NOTE: This occurs when the number of concurrent connections to one AMD exceeds the maximum, which by default is 10. This value is defined in userprop-nf.properties (SYSTEM.NF_AMD_MAX_NUMBER_OF_CONCURRENT_TASKS). You cannot schedule a new task when this maximum is exceeded. Assuming the default value (10), change the schedule so that you do not have more than 10 tasks scheduled to run at the same time.
You can schedule the new task to run after one or more of the already scheduled tasks have finished, or you can reschedule or cancel one of the previously scheduled tasks so that there are no more than 10 scheduled to run after you create this task.

In the case of a Client from dimension, the filter is generated for the most active client IP address (most total bytes), not for all client IP addresses.

TCPDUMP filter
The filters are initially populated from the CAS report filters (converted to tcpdump filter format), but you can adjust them in the Capture Packets dialog box. If you edit this field,

be sure to conform to the tcpdump filter format. If a filter setting is invalid, an error message is displayed and it is not possible to submit the task. You can copy these filters into DNA and edit them to filter your trace during import. For more information, see Importing a DC RUM Traffic Capture into DNA in the Data Center Real User Monitoring Smart Packet Capture User Guide. Click Syntax warnings under the filter box to list all syntax warnings. For more information, see Filter Error Messages and Syntax in the Data Center Real User Monitoring Smart Packet Capture User Guide.

NOTE: If both the server and the client are aggregated, the real client IP address is present in the filter expression but the real server IP address is not. In such cases, you probably need to change the filter expression manually to use the real server IP address. For servers that are not aggregated, the server IP address is present in the filter expression.

Data Sources

Use the Data Sources tab to select the devices that will be used to gather data for this task. By default, network packets are gathered from all available data sources.

NOTE: In a farm deployment, you can have multiple slaves and AMDs connected to them in various configurations (for example, AMD1 connected to slave 1, and AMD2 connected to slave 2). If you are browsing DMI reports (on a master CAS), data is downloaded from slave servers and aggregated on the master. For packet capture, this screen displays all probes (AMDs and EndaceProbes) from the master and all slaves. It is not known which server in the farm holds a given portion of data. If data for a given tcpdump filter is not visible to a given probe, no data is captured on that probe.

Type
Device type.

IP
Device address.

Port
Port number.

File size
Size of the trace.

Advanced Options

Use the Advanced Options tab to set file-related parameters.

TCPDUMP filter settings (AMD only)
Select Remove encapsulation to remove IP encapsulation from the capture. Default: selected.

File Settings (AMD only)
Change Maximum file size (AMD) to adjust the maximum file size for the capture. Default: 500 MB.

Secure files with password
Select Secure files with password (and provide the password twice) to password-protect all task files stored on the CAS. If you select this option, you will need to provide this password to open the trace in DNA or another application.

Listing Traffic Captures

Use the Packet Data Mining Tasks dialog box to list the captures previously made, captures still in progress, and captures scheduled to run in the future. This list displays tasks from all users, not only the current user.

1. On the CAS, open the Packet Data Mining Tasks dialog box: on the CAS main menu bar, select Tools > Packet Data Mining Tasks.
2. Review the list of tasks.
3. Use the action buttons to manage tasks. Each row has its own set of buttons that apply to that row's task and traces.

Click to return to the report used to generate the task. From there, you can create a new capture task that will not overwrite the existing task's data.

Click to edit the task and run it again to overwrite the existing task's data.

Click to stop a running task or one trace in a multi-trace task. You can stop an entire task, which stops all associated file captures, or you can stop a single file capture and leave the rest of the captures running. If this button is not displayed, either the task has not begun (it is scheduled for the future) or it has already finished.

Click to delete the task or trace, depending on whether you click the icon in the task row.

If you delete a task, all associated trace files are deleted with it. If you delete the last trace file associated with a task, the task is also deleted. If this button is not available, that task is still running. You cannot delete an active task.

Packet Data Mining Tasks Dialog Box

Use the Packet Data Mining Tasks dialog box to list the captures previously made, captures still in progress, and captures scheduled to run in the future. This list displays tasks from all users, not only the current user.

Type part of a task name in the box to list only the tasks that match what you typed. Click in any task row to display all trace files associated with that task. If nothing is listed, hover the mouse pointer over the status icon to review the status of reporting devices. Click to collapse the trace list for that task.

The table of scheduled tasks displays the following information:

Tasks
The task name, task description, the user who created the task, the task start time, and the time remaining or task duration are all listed for each task. A progress bar shows task progress for started or complete tasks.

- An icon indicates that the capture files associated with the task are not password protected.
- An icon indicates that all capture files associated with the task are password protected. You must supply a password before you can open them. For more information, see Secure files with password [p. 44].
- An icon indicates that the task is in progress.
- An icon indicates that the task was interrupted. Hover the pointer over this icon for more information.

Click to return to the report used to generate the task. From there, you can create a new capture task that will not overwrite the existing task's data.

Click to edit the task and run it again to overwrite the existing task's data.

Click to stop a running task or one trace in a multi-trace task. You can stop an entire task, which stops all associated file captures, or you can stop a single file capture and leave the rest of the captures running. If this button is not displayed, either the task has not begun (it is scheduled for the future) or it has already finished.

Click to delete the task or trace, depending on whether you click the icon in the task row. If you delete a task, all associated trace files are deleted with it. If you delete the last trace file associated with a task, the task is also deleted. If this button is not available, that task is still running. You cannot delete an active task.

Possible task statuses:

Scheduled. Task will start: {time}
The task has been created but not yet submitted to a device. Hover your mouse pointer over the information icon to see when the task is scheduled to start.

Capturing traffic
The task has been submitted to a device and traffic is being captured.

Downloading
Trace files are being downloaded to a CAS. Current task progress is shown (overall and per device).

Completed
The task is complete and the trace archive is available for analysis.

Files
Click in any task row to display all trace files associated with that task. If nothing is listed, hover the mouse pointer over the status icon to review the status of reporting devices. Click to collapse the trace list for that task. For each file, the file source (device type, address, and port number), capture status, and file size are all listed.

Click to stop capturing that file. Any other running file captures in that task will remain running. If this button is not displayed, either the file capture has not begun (it is scheduled for the future) or it has already finished.

Click to edit the task and run it again to overwrite the existing task's data.
Click to delete the task or trace, depending on whether you click the icon in the task row or in a trace file row.

If you delete a trace file, the associated task is preserved as long as there are other trace files associated with that task. When you delete the last trace file associated with a task, the task is also deleted. If this button is not available, the file capture is still running. You cannot delete an active file capture.

Click to download the trace file from the CAS. If the task is password-protected, you need to enter the password at this point. If this button is not displayed, either the capture is still running or there was an error in the capture. For more information, see Downloading a Traffic Capture in the Data Center Real User Monitoring Smart Packet Capture User Guide.


CHAPTER 5 Configuration Settings

Most configuration settings (global report server settings and minor options) require administrator rights. The following sections provide overviews of the configuration settings available to common users.

Software Services and Autodiscovery

The AMD can automatically discover software services based on traffic content, protocol, and port number. A software service is a service implemented by a specific piece of software, offered on a TCP or UDP port of one or more servers, and identified by a particular TCP port number. Software services are identified on reports by assigned names. Software service autodiscovery makes it possible to use packet pattern-matching rules to match sessions that are not assigned to any user-defined software service to automatically created software services.
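Conceptually, autodiscovery maps each unassigned session to a service bucket keyed by what it can observe, such as protocol and server port. The sketch below is our simplified illustration of that idea; the rule table and naming scheme are hypothetical, not DC RUM's actual rule engine or configuration format:

```python
# Simplified illustration of port/protocol-based service classification.
# The rules and names here are hypothetical, not DC RUM internals.
RULES = {
    ("tcp", 80): "HTTP",
    ("tcp", 443): "SSL",
    ("udp", 53): "DNS",
}

def classify(session):
    """Assign a session (protocol, server_port) to an autodiscovered
    service name; unknown ports fall back to a generic per-port bucket."""
    proto, port = session
    analyzer = RULES.get((proto, port))
    if analyzer:
        return f"{analyzer} on port {port}"
    return f"Generic {proto.upper()} on port {port}"

print(classify(("tcp", 443)))   # SSL on port 443
print(classify(("tcp", 8081)))  # Generic TCP on port 8081
```

Sessions that fall into the generic bucket correspond to the generic-decode case described later in this chapter, where reported Application Performance values may be less precise until a proper analyzer is assigned.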

IMPORTANT
The Port Finder feature known from DC RUM 12.2 and earlier versions is no longer available. It has been replaced by the Autodiscovery reporting workflow, which provides access to the same information on all servers present on the monitored network, but in an organized form available to every report user, as opposed to the Port Finder's primary use case as an advanced configuration aid for experienced administrators.

Be sure to review the Limitations of Autodiscovery section for important notes on using autodiscovery. For more information, see Limitations of Autodiscovery [p. 53].

There are important performance considerations related to autodiscovery and multithreaded decodes: whether a decode runs in singlethreaded or multithreaded mode affects performance. For more information, see Autodiscovery and Multithreaded Decodes [p. 55].

If your AMD is under heavy load, or if you want to use custom settings (such as software service name, slow threshold, and defined operations), you may want to convert autodiscovered software services to user-defined software services. This conserves AMD resources (memory and CPU).

Adobe Flash Player must be installed on the client machine to enable drilldowns from autodiscovered software services on the CAS to the RUM Console to create user-defined software services.

Default Software Services

The AMD handles default software services as follows:

New installations have default software service monitoring enabled by default, so that all traffic visible to the AMD is reported.

IMPORTANT
If you were using default software services in a DC RUM release earlier than 12.3, and you then upgrade to DC RUM 12.3 and configure DC RUM to use autodiscovered software services, autodiscovery will start but will continue reporting user-defined software services as configured in the earlier release. The new autodiscovery rules and features introduced in 12.3 will not function.
This is because your configuration of default software services is preserved rather than overwritten when you configure autodiscovered software services. Workaround: to enable autodiscovered software services after such an upgrade, replace the existing pktmatchconfig.xml and protocols.xml files on your AMD with the files included in your DC RUM 12.3 distribution files.

Until an autodiscovered software service is manually configured, it uses a generic decode, which for certain types of traffic (such as FTP) may make Application Performance values less precise. For autodiscovered traffic, you should configure a software service using the correct analyzer to improve Application Performance values. For Citrix traffic, you should refer to the Citrix Landing Page report.

Each default software service definition is specified by port number or pattern-matching rules. The precedence of pattern-matching rules over port-matching rules is configurable at the software service level.

For a new installation, the list of default software services is predefined.

In an upgrade where monitoring of default software services is enabled, the list of ports is not modified.

In an upgrade where monitoring of default software services is disabled, the list of ports is updated from a predefined list and monitoring of autodiscovered software services remains disabled.

The priority of content-based rules is, by default, higher than that of rules based on well-known ports. Even if a given port is almost always used by a single service defined by a well-known port (for example, port 25 and SMTP), the AMD still applies content-based recognition rules to all new TCP sessions.

For some protocols and content-based rules, a general rule (transport protocol) may be detected first and then, in a subsequent packet, a more detailed one (application protocol) may be detected. For example, consider HTTP and SOAP. To give the detailed rule a chance, recognition does not stop after detection of the general rule; it continues until a detailed rule is found or until a packet or time limit is exceeded.

HTTP Express is used to monitor autodiscovered HTTP software services.

Unknown Traffic

Traffic that does not match any of the user-defined or default software service definitions (none of the content-based rules is positive and the port is not listed as a well-known port) is reported as unknown TCP/UDP traffic. For IP traffic, bytes and packets are reported. For TCP traffic, the Generic No-Trans analyzer is used.

Some analyzers (database analyzers, SMB, LDAP, SAP, OF, XML, and SOAP) work differently depending on whether they are operating in user-defined or default software service mode.
In the default software service mode, these analyzers do not report decode-specific operations (such as T-code names or operation names) but do report the number of operations in the summary rows. If there is no license for an analyzer used in the default software services configuration, the Generic TCP/UDP analyzer is used instead.

Server ranges can be used to filter default software services. This works as follows:

Detected based on packet content rules: the rule specifies which IP address is the server, and if the address is not in the server ranges, the traffic is reported as filtered out.

Detected based on well-known port: the side with the well-known port is the server, and if the address is not in the server ranges, the traffic is reported as filtered out.

Unknown: see the rules outlined above.

Traffic filtered out due to server ranges is reported as All Other on the CAS report.
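The recognition order described in this section (content-based rules first, then well-known ports, then unknown traffic) can be sketched as follows. This is an illustrative simplification, not the AMD's actual implementation: the rule names and matchers are invented, and the real engine keeps evaluating for a more detailed rule rather than stopping at the first match.

```python
# Hypothetical sketch of the session-classification order described above.
def classify_session(payload: bytes, server_port: int,
                     content_rules, well_known_ports):
    """Return (service, matched_by) for a new TCP session."""
    # 1. Content-based recognition runs even on well-known ports.
    for name, matcher in content_rules:
        if matcher(payload):
            return name, "content"
    # 2. Fall back to well-known-port matching.
    if server_port in well_known_ports:
        return well_known_ports[server_port], "port"
    # 3. Anything else is reported as unknown TCP/UDP traffic.
    return "unknown", "none"

# Simplified placeholder rules, not the real AMD patterns:
rules = [("SMTP", lambda p: p.startswith(b"220 ")),
         ("HTTP", lambda p: p.startswith(b"GET ") or p.startswith(b"HTTP/"))]
ports = {25: "SMTP", 80: "HTTP Express"}

print(classify_session(b"GET /index.html", 8080, rules, ports))  # -> ('HTTP', 'content')
print(classify_session(b"\x00\x01", 25, rules, ports))           # -> ('SMTP', 'port')
print(classify_session(b"\x00\x01", 12345, rules, ports))        # -> ('unknown', 'none')
```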

Aggregation

All autodiscovered clients are aggregated, regardless of any other aggregation settings. If a client IP address is private, it is aggregated by subnet, not by site, for both autodiscovered and user-defined software services.

For more information, see Software Services and Autodiscovery [p. 49]. For more information, see CAS User Aggregation Options in the Data Center Real User Monitoring Administration Guide.

Client IP Ranges and Autodiscovered Software Services

Client ranges can also be used to filter autodiscovered software services. If the client IP address is not in the client range, the traffic is not analyzed. For more information, see CAS User Aggregation Options in the Data Center Real User Monitoring Administration Guide.

Configuration Procedures

In general, the only user configuration needed is to enable or disable the autodiscovery of software services. By default, it is enabled, and this should be a satisfactory setting for most users. For more information, see Enabling or Disabling Software Service Autodiscovery on AMD in the Data Center Real User Monitoring Administration Guide.

To review or change this setting:

1. In the RUM Console, select Devices and Connections > Manage Devices to list the devices managed by this RUM Console.
2. For the AMD you want to configure, open the context menu and select Open configuration.
3. In the Configuration panel, select Software Services > Autodiscovered Software Services.
4. Review and change (if needed) the setting for Enable monitoring of Autodiscovered Software Services.
5. Click Save and Publish to record and apply your change.

AHS Network Performance Section

The Network tile of the Application Health Status (AHS) report shows the total number of software services being monitored and a breakdown chart of autodiscovered and user-defined software services. For more information, see Application Health Status Report [p. 86].
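The client-aggregation rule described earlier in this section (private client IP addresses are aggregated by subnet) can be sketched as below. The /24 prefix is an assumption for illustration only; the actual subnet granularity comes from the CAS aggregation settings.

```python
import ipaddress

# Illustrative sketch of subnet aggregation for private client addresses.
def aggregation_key(client_ip: str, subnet_prefix: int = 24) -> str:
    addr = ipaddress.ip_address(client_ip)
    if addr.is_private:
        # Aggregate private clients by their containing subnet.
        net = ipaddress.ip_network(f"{client_ip}/{subnet_prefix}", strict=False)
        return str(net)
    return client_ip  # public clients keep their own identity

print(aggregation_key("10.1.2.3"))   # -> 10.1.2.0/24
print(aggregation_key("8.8.8.8"))    # -> 8.8.8.8
```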
Drilldowns from Reports

NOTE
Adobe Flash Player must be installed on the client machine to enable drilldowns from autodiscovered software services on the CAS to the RUM Console to create user-defined software services.

If there is a green check mark in the A (for "Autodiscovered") column of the Software Services report, the Configure User-Defined Software Service option is available from the Software service column. The green check mark indicates that there was at least one server with autodiscovered traffic present for this software service. For more information, see Configuring a User-Defined Software Service on CAS [p. 53].

If there is a green check mark in the A (for "Autodiscovered") column of the Servers report, the Configure User-Defined Software Service option is available from the Server name column. The green check mark indicates that there was autodiscovered traffic present for this server.

If there is a green check mark in the Configured column of the Servers report, this server/port combination has been configured for the given software service.

Limitations of Autodiscovery

On the Servers report, the Configured column indicates whether the server has been configured for the particular software service. If the server is already configured for some other software service and not for this software service, this column shows an X. If you try to configure such a server, the configuration dialog pops up with an empty list of servers.

In certain circumstances, two kinds of applications may be detected and reported on the same server and port. For example, Oracle EBS provides both a Web App and a Forms App on the same server port. When you turn an autodiscovered service into a monitored service, however, you must choose one decode or the other to analyze that server:port service. In this case, you would have to choose whether to monitor the Oracle EBS Web part with the HTTP decode or the Forms part with the Forms decode; once you configured one of them for monitoring, the other would disappear from reports because the AMD would use the configured decode for all traffic on this port.

When using a 32-bit AMD and the custom driver, the driver, for performance reasons, truncates all packets not belonging to user-defined software services.
As a result, some of the content-based autodiscovery rules might not be able to detect protocols. As a workaround, you can either switch to the native driver or upgrade to a 64-bit AMD. When addressing this issue, be aware that DC RUM 12.3 is the last DC RUM release to support the 32-bit AMD.

A single address may be counted and listed as two different users on the same CAS report if the software service that the address is using occurs twice on the same report: once as a user-defined software service and once as an autodiscovered software service.

Configuring a User-Defined Software Service on CAS

Use the Configure User-Defined Software Service wizard to configure a user-defined software service.

NOTE
Adobe Flash Player must be installed on the client machine to enable drilldowns from autodiscovered software services on the CAS to the RUM Console to create user-defined software services.

1. On the CAS, open the Software Services report or the Software Services Overview report. Choose one of the following access methods:

From the CAS top menu, choose Reports > RUM Analysis > Software Services.

From the Application Health Status report, in the Network Performance tile, click the total number of software services or click a monitoring interval in the chart.

A row with A (for "Autodiscovered") selected indicates that at least one server with autodiscovered traffic was present for this software service. For more information, see Software Services Report [p. 148], Software Services Overview [p. 150], and Servers Report [p. 151].

2. In a row with A selected, open the menu in the Software service column and select Configure User-Defined Software Service. The Configure User-Defined Software Service wizard is displayed.

3. On the first page, choose the servers you want to add to the user-defined software service definition and then click Next.

NOTE
If you have trouble selecting software services here, press F5 and then reopen the configuration wizard.

Click one row to select that server. Click Select all to select all servers in the table. Click Deselect all to clear all selections. Click Toggle selection to invert all current selections.

4. On the second page, choose how you want to monitor the selected servers and then click Proceed to Console. You can find software services and rules by analyzer type and string search.

Add a new user-defined software service. A new monitoring configuration will be created. If you select this option, no other configuration is required.

Add the servers to an existing user-defined software service rule. The existing monitoring configuration will be used. If you select this option, a table of existing user-defined software services and rules is displayed. This option allows you to choose an existing software service and a specific rule within that software service.
Add a new rule to an existing user-defined software service. The existing software service name will be used but a new monitoring configuration will be created. If you select this option, a table of existing user-defined software services and rules is displayed.

This option allows you to select an existing software service and add a new rule to that software service.

Select one option and then click Proceed to Console to configure this CAS to monitor all selected servers using the selected rule. The RUM Console opens.

5. In the RUM Console, add or edit the rule, depending on your previous selection.

If you chose Add a new user-defined software service, the RUM Console opens to Add Software Service, where you can fully describe the new software service.

If you chose Add the servers to an existing user-defined software service rule, the RUM Console opens to Edit Rule, where you edit the existing software service rule you selected earlier. The servers you selected are appended (with the prefix CAS) to the list of servers already associated with this rule.

If you chose Add a new rule to an existing user-defined software service, the RUM Console opens to Edit Rule, where you edit the existing software service rule you selected earlier. The servers you selected are listed (with the prefix CAS).

6. Edit the rule as needed and click OK to save your changes.

Autodiscovery and Multithreaded Decodes

There are important performance considerations related to autodiscovery and multithreaded decodes: whether a decode runs in singlethreaded or multithreaded mode affects performance.

Some decodes on the AMD are always singlethreaded. Some decodes on the AMD are singlethreaded if special configuration settings are enabled. Using some decodes may switch the analysis of all default software services (autodiscovery) into singlethreaded mode.

For more information, see Table 3. Autodiscovery and Multithreaded Decodes - Reference [p. 56] and Multi-core, Multithreaded Processing on the AMD in the Data Center Real User Monitoring Administration Guide.

Table 3. Autodiscovery and Multithreaded Decodes - Reference

| Decode | User-def. soft. serv. | Autodisc. soft. serv. (well-known port) | Autodisc. soft. serv. (content-based rules) | Forces autodisc. TCP to single-threaded mode | Forces autodisc. UDP to single-threaded mode | Used in autodisc. soft. serv. by default (well-known ports / content-based rules) | Used in autodisc. soft. serv. by default (NFC, well-known port) |
|---|---|---|---|---|---|---|---|
| HTTP | Multithreaded | N/A | N/A | No | No | N/N | N |
| HTTP Express | Multithreaded | Multithreaded | Singlethreaded if autodiscovery TCP singlethreaded | No | No | N/Y | N |
| Oracle | Singlethreaded if dynamic ports enabled | Singlethreaded if dynamic ports enabled | Singlethreaded if autodiscovery TCP singlethreaded | If dynamic ports enabled | No | Y/Y, dynamic ports disabled | Y |
| TDS | Singlethreaded if dynamic ports enabled | Singlethreaded if dynamic ports enabled | Singlethreaded if autodiscovery TCP singlethreaded | If dynamic ports enabled | No | Y/Y, dynamic ports disabled | Y |
| Oracle forms over HTTP | Singlethreaded | Singlethreaded | Singlethreaded | Y (but only if content-based rules are used) | No | N/Y, only discovered, generic decode used | N |
| DNS | Singlethreaded | Singlethreaded | N/A | No | No | Y/N | Y |
| Generic, no trans, DRDA, Informix, MySQL | Multithreaded | Multithreaded | Singlethreaded if autodiscovery TCP singlethreaded | N | N | Y/Y | Y |
| SSL, SAP GUI | Multithreaded | Multithreaded | Singlethreaded if autodiscovery TCP singlethreaded | N | N | N/Y | Y |

Table 3. Autodiscovery and Multithreaded Decodes - Reference (continued)

| Decode | User-def. soft. serv. | Autodisc. soft. serv. (well-known port) | Autodisc. soft. serv. (content-based rules) | Forces autodisc. TCP to single-threaded mode | Forces autodisc. UDP to single-threaded mode | Used in autodisc. soft. serv. by default (well-known ports / content-based rules) | Used in autodisc. soft. serv. by default (NFC, well-known port) |
|---|---|---|---|---|---|---|---|
| ICA, IBM MQ, XML over MQ, SOAP, SAP RFC, JBoss RMI, Sun RMI, T3 RMI, Corba | Multithreaded | Multithreaded | Singlethreaded if autodiscovery TCP singlethreaded | N | N | N/Y | N |
| SMTP, ICMP, generic IP | Multithreaded | Multithreaded | N/A | N | N | Y/N | Y |
| Jolt, Cerner | Singlethreaded | Singlethreaded | Singlethreaded | Y | N | N/N | N |
| XML, XML over HTTP | Multithreaded | Multithreaded | Singlethreaded if autodiscovery TCP singlethreaded | N | N | N/Y, only discovered, generic decode used | N |
| OF over TCP | Multithreaded | Multithreaded | Singlethreaded if autodiscovery TCP singlethreaded | N | N | N/N | N |
| SMB, Kerberos | Singlethreaded | Singlethreaded | Singlethreaded | N | N | Y/N | Y |
| RPC | Singlethreaded | Singlethreaded if autodiscovery singlethreaded | Singlethreaded if autodiscovery TCP singlethreaded | Y (but existence of user-def., not autodisc., soft. serv.) | N | Y/Y | Y |
| Epic | Multithreaded | Multithreaded | N/A | N | N | N/N | N |
| SIP over UDP | Singlethreaded | Singlethreaded | Singlethreaded | N | Y | Y/Y, only discovered, generic decode used | Y |

Table 3. Autodiscovery and Multithreaded Decodes - Reference (continued)

| Decode | User-def. soft. serv. | Autodisc. soft. serv. (well-known port) | Autodisc. soft. serv. (content-based rules) | Forces autodisc. TCP to single-threaded mode | Forces autodisc. UDP to single-threaded mode | Used in autodisc. soft. serv. by default (well-known ports / content-based rules) | Used in autodisc. soft. serv. by default (NFC, well-known port) |
|---|---|---|---|---|---|---|---|
| SIP over TCP | Singlethreaded | Singlethreaded | Singlethreaded | Y (but only if content-based rules are used) | Y | Y/Y, only discovered, generic decode used | Y |
| H323 | Singlethreaded | Singlethreaded | Singlethreaded | Y | Y | Y/N, only discovered, generic decode used | Y |
| Universal decode | Multithreaded | Multithreaded | Singlethreaded if autodiscovery TCP singlethreaded | N | N | N/N | N |
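The "forces singlethreaded" columns of Table 3 combine in a simple way: if any active decode sets the flag for a transport, autodiscovery analysis for that transport drops to singlethreaded mode. The sketch below illustrates that OR-combination with a few unconditional flag values copied from the table; conditional entries (such as Oracle with dynamic ports enabled) are omitted for simplicity, and the function is illustrative, not the AMD's real logic.

```python
# Illustrative subset of Table 3: decode -> (forces_autodisc_tcp, forces_autodisc_udp)
FORCES_SINGLETHREADED = {
    "HTTP Express": (False, False),
    "Jolt": (True, False),
    "SIP over UDP": (False, True),
    "H323": (True, True),
}

def autodiscovery_modes(active_decodes):
    """Return (tcp_singlethreaded, udp_singlethreaded) for the given decodes."""
    tcp_single = any(FORCES_SINGLETHREADED[d][0] for d in active_decodes)
    udp_single = any(FORCES_SINGLETHREADED[d][1] for d in active_decodes)
    return tcp_single, udp_single

print(autodiscovery_modes(["HTTP Express", "Jolt"]))  # -> (True, False)
print(autodiscovery_modes(["HTTP Express"]))          # -> (False, False)
```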

Tenants

Using tenant functionality, an MSP can use a single DC RUM system to monitor several separate networks (tenants), measure statistics separately per tenant, and present these statistics on tenant-specific reports. As long as the tenants are kept in separate IP address ranges in the MSP's data center, the MSP can use one RUM Console installation and one AMD to monitor multiple tenants, but the MSP has to maintain one CAS per tenant because tenant IP addresses may overlap out in the cloud.

Tenant traffic can be distinguished by any combination of the following traffic characteristics:

AMD sniffing interface
MPLS label
VLAN ID
GRE tunnel ID

Based on the configuration of tenants, the AMD produces data files already tagged with customer identifiers, so that different CAS servers reading data from the same AMD can process only the data relating to specific customers.

Tenant Configuration Overview

1. Define the tenants on the AMD. For more information, see Managing Tenants on AMD [p. 59].
2. For each CAS in the configuration, define the tenants that the CAS should process. For more information, see Assigning Tenants on CAS [p. 61].

Managing Tenants on AMD

You can manage tenants on a selected AMD using the RUM Console.

1. Start and log on to the RUM Console.
2. Select Devices and Connections > Manage Devices from the top menu to display the current device list.
3. Select Open Configuration from the context menu for an AMD. The AMD Configuration window appears.
4. In the AMD Configuration dialog box, in the Configuration panel, select Global > Advanced > Tenants. The Tenants screen is displayed.
5. On the Tenants screen, define one or more tenants. For each tenant name, you must define at least one rule that defines the sort of traffic that will be considered traffic for that tenant.
For example, all traffic on a certain port, or all traffic on a certain VLAN, or all traffic on a certain VLAN and a certain interface, will belong to the tenant whose name you are entering here.

For each tenant name, you can define more than one such rule, so that you might, for example, create three rules to assign all traffic on three different VLANs as belonging to the same tenant name. For each tenant rule you want to add, click the add button at the top of the Tenant configuration on AMD table. For more information, see Defining a Tenant Rule [p. 60].

6. Use the up and down arrows to set the order of precedence for the rules. If two or more rules potentially match some traffic, the first matching rule on this list (from top to bottom) determines the tenant to which that traffic is assigned. You can make processing more efficient by putting the most likely matching rules near the top.

7. After you have configured all tenants on this AMD, specify how to treat traffic that does not match the configured tenants.

Unassigned: If Unassigned is selected, all traffic that does not match the tenant rules defined above remains unassigned to any tenant.

Tenant name: If Tenant name is selected, all traffic that does not match the tenant rules defined above is automatically assigned to the specified tenant. You can select a tenant name from the names defined in the rules above or you can type a new tenant name.

8. Click Save and Publish to save your changes and publish them to the selected AMD.

Defining a Tenant Rule

Use the Tenant Rule Definition screen to define or adjust a tenant rule on the selected AMD.

1. Required: Type a Tenant name. The Tenant name is used to identify the tenant, so be sure to give it an obvious name (the name of the tenant). The Tenant name setting is required: you cannot create a tenant rule without it.

2. Required: Enter at least one identifying criterion for this tenant. It can be any combination of interface ID, VLAN number, MPLS number, and GRE/WCCP number.
Each of these is optional by itself, but it is mandatory that you use at least one of them (or some combination of them) to describe the traffic that should be assigned to this tenant.

a. Optional: Select an Interface number. This is the number of the interface associated with this tenant; you can select one from the list of interfaces configured on this AMD. The Interface setting is optional: you can create a tenant rule without an Interface setting as long as you have set at least one of the other optional settings.

b. Optional: Type the VLAN identifier, if any, by which you want to define this tenant.

This is an integer identifying a VLAN. If the No VLAN in traffic check box is selected, observed network traffic does not contain VLAN traffic. The VLAN setting is optional: you can create a tenant rule without a VLAN setting as long as you have set at least one of the other optional settings. If you specify a VLAN, it must be an integer between 1 and 4094.

c. Optional: Type the MPLS identifier, if any, by which you want to define this tenant. This is an integer identifying an MPLS-based network. If the No MPLS in traffic check box is selected, observed network traffic does not contain MPLS traffic. The MPLS setting is optional: you can create a tenant rule without an MPLS setting as long as you have set at least one of the other optional settings. If you specify an MPLS label, it must be an integer between 0 and 1048575.

d. Optional: Type the GRE/WCCP identifier, if any, by which you want to define this tenant. This is an integer identifying a Generic Routing Encapsulation (GRE) / Web Cache Communication Protocol (WCCP) service. Enter the WCCP System ID that is running in the GRE tunnel. The dynamic service identifier can be a number from 0 to 254 and can be obtained from your router by running the show ip wccp command at the router's command prompt. If the No GRE/WCCP in traffic check box is selected, observed network traffic does not contain GRE/WCCP packets. The GRE/WCCP setting is optional: you can create a tenant rule without a GRE/WCCP setting as long as you have set at least one of the other optional settings. If you specify a GRE/WCCP identifier, it must be an integer between 0 and 254.

3. Required: Click OK to save your changes and return to the Tenants screen.

Assigning Tenants on CAS

By default, the CAS processes data from all tenants configured on the AMDs assigned to it, but you can configure it to process data from only a selection of the tenants configured on those AMDs.
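The tenant-rule semantics described above (each criterion is optional, at least one is required, and rules are evaluated in precedence order with the first match winning) can be sketched as follows. The rule and packet representations are invented for illustration and do not reflect the AMD's internal format.

```python
CRITERIA = ("interface", "vlan", "mpls", "gre")

def match(rule, packet):
    # A criterion set to None acts as a wildcard.
    return all(rule[k] is None or rule[k] == packet[k] for k in CRITERIA)

def assign_tenant(rules, packet, default_tenant=None):
    for tenant, rule in rules:      # list order = precedence, top to bottom
        if match(rule, packet):
            return tenant           # first matching rule wins
    return default_tenant           # None models the "Unassigned" choice

rules = [
    ("tenant-a", {"interface": None, "vlan": 100, "mpls": None, "gre": None}),
    ("tenant-b", {"interface": "eth1", "vlan": None, "mpls": None, "gre": None}),
]
pkt = {"interface": "eth1", "vlan": 100, "mpls": None, "gre": None}
print(assign_tenant(rules, pkt))  # -> tenant-a (the earlier rule takes precedence)
```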
To process data from a selection of tenants:

1. Start and log on to the RUM Console.
2. Select Devices and Connections > Manage Devices from the top menu to display the current device list.
3. Find the CAS in the list and select Open configuration from its context menu. The CAS Configuration window is displayed.
4. In the navigation panel, click Tenants. The Assign Tenants screen is displayed.
5. Change Process all data assigned to any tenant to Process data from the following tenants.

If Process all data assigned to any tenant (the default setting) is chosen, the selected CAS processes all data it receives from any tenant. If Process data from the following tenants is chosen, you can then choose to process or ignore traffic based on the tenants configured on the data sources for this CAS.

6. When Process data from the following tenants is chosen, select, from the list of available tenants, the tenants whose data you want this CAS to process. By default, Available Tenants lists all tenants configured on all AMDs assigned to this CAS. For each tenant you want this CAS to process:

a. Select the tenant in the Available Tenants list.
b. Click the right arrow. The selected tenant is moved to the Assigned Tenants list.

Data from tenants remaining in the Available Tenants list will not be processed by this CAS.

7. Select or clear the Process data with no tenants assigned check box. If Process data with no tenants assigned is selected, the reporting server also processes the traffic that was not assigned to any tenant, in addition to the traffic for tenants selected in the Assigned Tenants list.

8. Click Apply to save and apply your changes.

Baseline Data

A baseline is the data from the last several days (usually nine days) aggregated into one average or typical day. Baselines are necessary for accounting for the variations in traffic on different days of the week and random anomalies in traffic load, or for comparing traffic with a known baseline from a specific point in time.

Baseline data is generated once a day, in the background, after the arrival of data from the first monitoring interval after 00:10 am. Baseline data is not averaged across the whole day, so it may vary with the time of day just as monitored data would: each monitoring interval is assigned the value averaged over the nine-day period for that specific monitoring interval.
Requesting baseline data for Yesterday yields the same results as requesting baseline data for Today, because baseline data for yesterday is still calculated over the last nine days counting from today.

NOTE
The daylight saving time (DST) shift generates a discrepancy between current and baseline values. For ten days after the time change, baselines are calculated from a mix of data that originated before the change and after the change. This discrepancy resolves automatically after ten days, when the data used for the baseline calculation originates entirely from after the time change. You can use pinned or manual baselines to mitigate this inconsistency. For more information, see Baseline Modes in the Data Center Real User Monitoring Administration Guide.
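A minimal sketch of the per-interval baselining described above: each monitoring interval's baseline is the mean of that same interval's values over the last nine days. The input layout (one dict of interval values per day) is an assumption for illustration, not the report server's storage format.

```python
from statistics import mean

def baseline(history, days=9):
    """history: list of {interval: value} dicts, oldest first, newest last."""
    recent = history[-days:]               # only the last nine days count
    intervals = recent[0].keys()
    return {iv: mean(day[iv] for day in recent) for iv in intervals}

# Ten days of a single 09:00 interval; the oldest day (100) is ignored.
days = [{"09:00": v} for v in [100, 10, 20, 30, 40, 50, 60, 70, 80, 90]]
print(baseline(days))  # -> {'09:00': 50}
```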

Localizing the Report Server

The user interface of the report server is rendered in the following languages:

English
Japanese
Korean
Simplified Chinese

The user interface and server reports are sensitive to the language settings of the browser, which are checked when you log on to the server and are remembered throughout the session. Changing the language settings of the browser after you log on does not affect the current session. If your browser uses an unsupported language, English is used by default.

CAUTION
The report server does not work on systems where support for the English language is not installed.

Setting the Operating System to Display East Asian Languages

To display East Asian languages, change the Windows regional settings. Make sure that the target language matches the regional settings of the operating system where the report server is installed and of the client system connecting to the server.

NOTE
East Asian characters are displayed correctly when using a web browser run under a localized operating system. For example, Japanese characters are rendered correctly when reports or other screens are viewed using a browser on an operating system localized for Japan. Also, it is important that the report server generating the reports be localized for the language in which the reports are being viewed.

To enable support for East Asian languages:

1. In Windows, click Start > Settings > Control Panel and select Regional and Language Options to open the Regional and Language Options dialog box.
2. Click the Languages tab.
3. Select Install files for East Asian languages.
4. Click the Regional Options tab.
5. In Standards and formats, select a language.
6. Optional: In Location, select a geographical location.
7. Click OK.

Localized User Settings

Internationalization of the report server can be set as automatic, set by users per session, or set permanently by the system administrator or users.

The report language can be set for each user individually. A system administrator can set a user's language when adding a user to the server. A user can override the language setting at logon. For more information, see Changing the Report Language at Logon [p. 64]. Users can also modify the setting in the configuration screen. On the screens for adding or modifying users, you can choose a language from a menu or set the value to Auto to force automatic language detection.

To override automatic language detection when a new user is added or modified:

1. Click Settings > Security > User Settings to open the User Settings screen.
2. On the Internationalization panel, from the Language menu, choose a language or Auto for automatic language detection.
3. Click OK.
4. Log off and log on again to ensure that the language setting has changed as intended.

The Formats panel displays detected regional settings, such as the time and date format and the time zone of the server and client. These settings change according to the language setting of the browser.

Changing the Report Language at Logon

When you log on to the report server, you can change the language settings for the session. The supported languages are listed at the bottom of the logon screen. By default, the language setting is taken from the browser language settings, but you can switch the language for a new session by clicking a different language name before logging on.

The date and time display format, however, depends on the report server system locale unless the web browser setting for the default language is different. For example, if a report server is installed and running on a Japanese Windows server and a user with a web browser localized for the Japanese language selects the English language for the reports at the logon screen, all reports will be displayed in English, but dates and times will be displayed in Japanese.
If that user instead sets the browser default language to Spanish, dates and times will be displayed in Spanish.

Sending Reports by Email

To send a static copy of the currently displayed report, select the Send by Email option in the Actions list. This option is active only if the mail server has been configured and enabled. For more information, see Specifying the SMTP Server for Scheduled Report Mailing [p. 65].

Figure 7. Send by Email Option in the Actions List

Specifying the SMTP Server for Scheduled Report Mailing

To enable the report server to send email messages, a system administrator must first configure the server to use an existing SMTP server.

Before You Begin

You need to have access to an SMTP server. The server must be configured so that it does not modify line terminators. This is the only required configuration setting that is specific to the server being used for report mailing. The way to configure this option depends on the particular SMTP server. For example, for the Postfix server, the relevant option is stored in the main.cf configuration file and is:

sendmail_fix_line_endings = never

NOTE: If the alert subsystem is not configured, the report server polls the DNS server for the SMTP server IP address of the network domain in which the report server is installed. If the DNS server does not return any IP addresses, no reports are sent by email and an error message is displayed in the report-mailing configuration window.

1. Open the mail configuration screen:
   - From the CAS top menu, choose Settings > Sending Mail.
   - In the RUM Console, choose Devices and Connections > Manage Devices. On the Devices screen, choose Open configuration from the context menu for your report server. When the Server configuration window appears, choose Sending Mail from the menu.

Figure 8. Example Sending Mail Screen

2. Change the mail server settings as required.

MAIL_ACTIVE
Enables and disables the scheduled report mailing feature. The default value is true. Setting this property to false does not affect the Send Mail Now command, which is available through the Send Report by Email option. Send Mail Now is active regardless of the value of the MAIL_ACTIVE property.

MAIL_HOST
The SMTP server's IP address or DNS name.

MAIL_PORT
The port number of the SMTP server defined in MAIL_HOST.

MAIL_SENDER
The email address from which the alert notifications are sent.

NOTE: If the mail host is not specified, the report server sends a DNS request to the default DNS server to learn the default SMTP server's IP address. If the DNS server returns a valid address of an SMTP server and this server does not require authentication, the feature works without the need to enter an explicit SMTP server address in the configuration.

Click Set value to record each change.

When DMI is used from within VantageView, the DMI controls and features described in this section are handled by the hosting VantageView environment and do not appear on the DMI screen. For more information, see Differences in DMI Integrated with VantageView in the Data Center Real User Monitoring Data Mining Interface (DMI) User Guide.

Saving Reports to PDF or MHT File

You can export any report to a PDF or MHT file using the built-in screen capture capability. To save a report in either format:

1. While viewing the report, choose Save as PDF or Save as MHT in the Actions list.

PDF

Portable Document Format, which requires a PDF reader to view the content.

MHT
Web page archive format compatible with Internet Explorer.

By default, the exported report file name is Report.pdf or Report.mht, but you can change the file name in the Save As dialog box.

2. After the file is prepared, click Open to view the report or Save to save the report on your local computer.

Report Server Security Settings

Adding users (locally or from LDAP) is done in the CSS. On the report servers, you can assign users to reporting groups, and grant or deny access to specific report server modules or reports. Most of these tasks require administrator rights, but some basic settings can also be configured by common users. In this chapter we will focus only on settings available to every user. For more information, see User Access and Security in the Data Center Real User Monitoring Administration Guide.


Chapter 6 Tabular Reports

All CAS tabular reports are similar in terms of the way information is structured, displayed, and customized. They use the same search mechanisms, filtering options, and status indicators.

Selecting Time Range and Resolution

Your report definition must have valid time range and data resolution settings selected in the navigation bar at the top of the page.

NOTE: When DMI is used from within VantageView, some of the DMI controls and features are handled by the hosting VantageView environment and therefore do not appear on the DMI screen. For more information, see Differences in DMI Integrated with VantageView in the Data Center Real User Monitoring Data Mining Interface (DMI) User Guide. VantageView users should use the VantageView time bar to set the time range for a report and use the Date Time Filters tab for changing the resolution.

Figure 9. Navigation Bar with Resolution, Time Range, and Calendar Controls

Resolution Settings

The data resolution setting controls two aspects of report generation:

If you use the Time dimension on the report, data resolution defines the time intervals between records on your report. For example, if you require a chart that depicts how traffic changes hourly during the day, choose the 1 hour resolution.

Even if you do not use the Time dimension on your report, the resolution defines the source of data used for the generation of your report: raw data or aggregate values. If the resolution is 1 period, 1 hour, or 6 hours, the report is generated directly from the raw data gathered by the AMD. If the resolution is greater than 6 hours, the report is generated from aggregate values such as sums, averages, maximums, and minimums, depending on the type of the

metric. For example, if the resolution is selected to be 1 day, then daily averages, sums, and other aggregates are used for report generation.

An aggregate value can be a sum, average, maximum, minimum, or other statistical value, depending on the particular metric. For example, aggregates for metrics that deal with the volume of transferred data are sums of all the volume transferred during the given interval, and aggregates of metrics dealing with the speed of data transfer are averages. The significance of selecting the resolution for report generation is that only one value is used per selected resolution period, per aggregation dimension.

Generating reports based on aggregates is faster than generating reports from raw data, because aggregate values are usually pre-calculated and so do not need to be computed afresh for your report.

NOTE: When the time range is changed, the default time resolution changes accordingly. For instance, when you change the time range from Today to Last 7 days, the resolution changes from 1 period to 1 day.

Time Range List

The available time range settings depend on the data provider and on the particular data view that is currently displayed. Only those ranges that are compatible with the currently displayed data view are shown in the time range list. In cases where more than one data view is displayed, the available time ranges are limited to those ranges that are valid for all of the displayed views.

The list of all possible time ranges is divided into three sections: Current, Recent, and Historical.

Current

Last monitoring interval
The last closed monitoring interval (5 minutes, by default).

Last one hour
The last hour, ending at the closure of the last monitoring interval. If 60 minutes is not an exact multiple of the length of a monitoring interval, this period is the maximum multiple of the monitoring interval shorter than one hour.
Last 4 hours
The last four hours, ending at the closure of the last monitoring interval.

Last 6 hours
The last six hours, ending at the closure of the last monitoring interval.

Today
Today from midnight to the current time.

Recent

This section includes the last 10 days.

Historical

Last 7 days
The last seven days, ending at midnight last night.

Last 30 days
The last thirty days, ending at midnight last night.

Week to date
The period of time from last Monday 0:00 AM to midnight last night.

Full week
The last full week: the period starting on Monday 0:00 AM two weeks ago and ending at midnight on the following Sunday.

Month to Date
The period from 0:00 AM on the first day of the current month to midnight last night.

Full Month
The last full month: the period starting at 0:00 AM on the first day of last month and ending at midnight on the last day of last month.

Last 3 Months
The last three full calendar months, ending at midnight on the last day of last month. For example, if today were August 15, this report would show May 1 through July 31 of this year.

Last 12 Months
The last twelve full calendar months, ending at midnight on the last day of last month. For example, if today were August 15, this report would show August 1 of last year through July 31 of this year.

Quarter to Date
The period from 0:00 AM on the first day of the current quarter (January 1, April 1, July 1, or October 1) to midnight last night.

Year to Date
The period from January 1 this year to midnight last night.

At the bottom of the list is the Custom label, which is automatically selected after you define your own time range using the Calendar [p. 71] tool.

NOTE: In VantageView there are no predefined ranges such as Today or Last 7 days on the time bar. However, you can set all of these time periods using controls on the Date Time Filters tab.

Time Range Display

The selected time range is displayed as the beginning and ending dates on the right side of the screen.

Figure 10. Time Range Display

Calendar

You can select a customized time range by clicking the calendar icon next to the Time range list, and by specifying the length of the desired time period and the start date and time.
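Predefined ranges such as Quarter to Date or Full week, like the relative custom ranges, come down to simple calendar arithmetic. The following Python sketch illustrates two of the definitions above; it is an illustration of the stated semantics, not the product's code.

```python
# Illustrative calendar arithmetic for two of the predefined ranges;
# an assumption for clarity, not the report server's implementation.
from datetime import datetime, timedelta

def quarter_to_date(now: datetime):
    """Quarter to Date: 0:00 AM on the first day of the current quarter
    (Jan 1, Apr 1, Jul 1, or Oct 1) to midnight last night."""
    quarter_start_month = 3 * ((now.month - 1) // 3) + 1
    start = datetime(now.year, quarter_start_month, 1)
    end = datetime(now.year, now.month, now.day)  # midnight last night
    return start, end

def last_full_week(now: datetime):
    """Full week: Monday 0:00 AM of the previous full week through
    midnight on the following Sunday (i.e. the next Monday 0:00 AM)."""
    today = datetime(now.year, now.month, now.day)
    this_monday = today - timedelta(days=today.weekday())
    return this_monday - timedelta(days=7), this_monday

start, end = quarter_to_date(datetime(2015, 8, 15, 10, 30))
print(start, end)  # 2015-07-01 00:00:00 2015-08-15 00:00:00
```

An Absolute custom range would replace `now` with a user-chosen start date; a Relative range keeps the arithmetic anchored to the current date, as described above.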

Figure 11. Calendar Window

You can choose from two time range types: Absolute and Relative.

Use the Absolute type to specify the start date and time. You can choose the date from the calendar. To scroll through months, use the single arrows. To scroll through years, use the double arrows.

Use the Relative type to specify a time period relative to the current date.

If you define your own time range with the calendar, the Custom label appears in the Time range list to inform you that the selected range does not fit any of the default options.

Selecting and Filtering Columns to Display

Each table on a DMI report consists of columns that present various dimensions and metrics. You can modify this set of dimensions and metrics using the Customize Columns window. To specify which columns should appear in the table, choose Customize columns in the context menu for that table.

Figure 12. Context Menu for the Report Section

Displaying and Hiding Columns

If the column should appear in the table, select the corresponding Show check box. If you do not want to show a specific column in the table, clear the corresponding Show check box.

Setting Filters on Columns

You can set filters to narrow down the results displayed in the table. To do this, type the filtering condition in the Filter box for the selected column. The syntax used for column names can include the following symbols:

Question mark: ?
Matches any single character.

Asterisk: *
Matches repetition of the previous character zero or more times.

Ampersand: &
Matches both the expression before and the expression after the ampersand character.

Vertical bar: |
Matches either the expression before or the expression after the vertical bar.

Tilde: ~
Negates the expression.

Backslash: \
Escapes special characters such as: `\?', `\&', `\)'.

Figure 13. Setting a Filter on the Tier Column

Click OK to save your settings and see the results in a report.

Sorting, Filtering, and Searching Data

There are several methods of finding subject data faster than browsing through numerous pages of a tabular report. You can sort data in columns, look for a specific value, or narrow the scope of the report using data ranges.
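One way to read the column-filter syntax above is as a thin layer over regular expressions: ? maps to ., * already has the documented repeat-previous-character meaning, and &, |, and ~ combine sub-expressions. The Python sketch below is an illustrative interpretation only (it ignores backslash escaping, among other simplifications); it is not the product's filter engine.

```python
import re

# Hypothetical interpretation of the column-filter syntax as regular
# expressions; an illustrative assumption, not the product's parser.
def matches(expr: str, value: str) -> bool:
    """Evaluate one filter expression against a single cell value."""
    # Vertical bar: either side may match.
    if "|" in expr:
        return any(matches(alt, value) for alt in expr.split("|"))
    # Ampersand: both sides must match.
    if "&" in expr:
        return all(matches(part, value) for part in expr.split("&"))
    # Tilde: negate the rest of the expression.
    if expr.startswith("~"):
        return not matches(expr[1:], value)
    # '?' matches any single character; '*' repeats the previous
    # character zero or more times, as in the documented syntax.
    pattern = expr.replace("?", ".")
    return re.search(pattern, value) is not None

print(matches("web*", "webserver"))  # True
print(matches("~web?", "database"))  # True
```

A production implementation would also honor the documented backslash escapes so that literal `?`, `&`, `|`, and `~` characters can appear in filter values.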

Sorting

Click any column header to sort the report by that column. Click again to reverse the sort order. The sort order is indicated by a small triangle icon:

Ascending
Descending

Searching and Filtering

Several search options are available in tabular reports.

Filtering by name

You can specify the full name of an item, specify part of the name (beginning, end, or a middle part), or use wildcards. For example, entering jupiter, jupi, upi, ter, or j*pi*r would all find the item jupiter. From the list of columns of the report, select where the filter will be applied, then click Find.

Example 1. Applying a Filter by Name

In this example, you limit the number of displayed tiers to only those that contain web in their name. A sample report after the filter is applied:

Filtering items by numbers

You can use the exact value you want to search for, for example the number of bytes or failures, or you can widen the search criteria by using relational operators:

> X means all values greater than X.
>= X means all values greater than or equal to X.
< X means all non-negative values less than X.
<= X means all non-negative values less than or equal to X.

Specifying ranges

A range can also be specified as X-Y, meaning equal to or greater than X, but less than or equal to Y.

Using suffixes

Suffixes such as k (kilo) or M (mega) can be used for numerical values and can be combined with range specifiers or relational operators. For example, >80k is equivalent to >80000. Note, however, that you should always use absolute numeric values, with or without suffixes, and not relative percentages. Thus, for example, the filtering condition >0.85% is not valid.

Including special characters in searches

You must enter a backslash \ as an escape character before special characters to indicate that you mean that character itself. The following characters must be preceded by a backslash if you want them to be treated literally in searches:

space, if it occurs at the beginning or end of the search string
asterisk ( * )
backslash ( \ )
ampersand ( & )
vertical bar ( | )

For example, to search for the user name lab\dtwlib4m you must type lab\\dtwlib4m in the Find field. When you omit the escape character, the search does not return any records.

Drilldown Links

The drilldown links menu is associated with the icon beside the name of an entity on a report; for example, an application, server, operation, or software service, depending on the report you open. The Links menu items contain drilldown links for a selected report item, as configured in the report definition. For more information, see Defining a Drilldown Link in the Data Center Real User Monitoring Data Mining Interface (DMI) User Guide.
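The numeric filter rules described earlier (relational operators, X-Y ranges, and k/M suffixes) amount to parsing an expression into a predicate. Here is a hypothetical Python sketch of such a parser; it illustrates the documented rules and is not the product's implementation.

```python
import re

# Illustrative parser for the numeric filter syntax; an assumption
# for clarity, not the report server's code.
_SUFFIX = {"": 1, "k": 1000, "M": 1000000}

def _number(text):
    """Parse an absolute numeric value with an optional k or M suffix."""
    m = re.fullmatch(r"([0-9.]+)\s*([kM]?)", text.strip())
    if not m:
        raise ValueError("not an absolute numeric value: %r" % text)
    return float(m.group(1)) * _SUFFIX[m.group(2)]

def numeric_filter(expr):
    """Return a predicate implementing one numeric filter expression."""
    expr = expr.strip()
    # '<' and '<=' match only non-negative values, per the documented rules.
    ops = ((">=", lambda x, n: x >= n),
           ("<=", lambda x, n: 0 <= x <= n),
           (">",  lambda x, n: x > n),
           ("<",  lambda x, n: 0 <= x < n))
    for op, test in ops:
        if expr.startswith(op):
            bound = _number(expr[len(op):])
            return lambda x, t=test, n=bound: t(x, n)
    if "-" in expr:  # range X-Y: >= X and <= Y
        low, high = (_number(p) for p in expr.split("-", 1))
        return lambda x: low <= x <= high
    exact = _number(expr)
    return lambda x: x == exact

print(numeric_filter(">80k")(80001))  # True
print(numeric_filter("1k-2k")(1500))  # True
```

Note how `_number` rejects anything that is not an absolute numeric value, mirroring the rule that conditions such as >0.85% are invalid.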

Figure 15. Three Drilldown Links from the Same Column

Status Indicators

The CAS features a number of graphical indicators that can help you interpret the report data and quickly recognize problem areas. For more information, see Status Icons [p. 76] and Health Check Icons [p. 77].

Status Icons

Status icons indicate whether service performance is above or below threshold, whether there are availability problems, or whether the service is inactive. The status is calculated based on the comparison of the current value to the benchmark value and color ranges defined for the specific report column. Move the mouse pointer over a column field to open a tooltip displaying the latest values of the metrics on which the status calculation was based. The icons are as follows:

good
early warning
severe warning
bad
no status (data missing or insufficient to calculate the status)
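Conceptually, the status calculation is a comparison of the current value against configured ranges. The sketch below shows one plausible mapping in Python; the threshold values and the ascending "higher is worse" ordering are invented for illustration, since the real ranges are defined per report column in the CAS configuration.

```python
# Hypothetical threshold-based status classification; the bounds here
# are invented, not taken from the CAS configuration.
STATUSES = ("good", "early warning", "severe warning", "bad")

def status(value, thresholds):
    """Map a metric value to a status using ascending upper bounds.

    `thresholds` is a 3-tuple of upper bounds for good, early warning,
    and severe warning; anything above the last bound is bad.
    """
    if value is None:
        return "no status"  # data missing or insufficient
    for upper, name in zip(thresholds, STATUSES):
        if value <= upper:
            return name
    return "bad"

# Example: an operation-time metric in seconds with invented bounds.
print(status(1.2, (2.0, 4.0, 8.0)))   # good
print(status(9.5, (2.0, 4.0, 8.0)))   # bad
print(status(None, (2.0, 4.0, 8.0)))  # no status
```

A benchmark-based variant would derive the bounds from the benchmark value at report time rather than from fixed numbers, which matches the benchmark rendering described later in this chapter.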

Figure 16. Report with Color Rendering (Status Icons) Enabled

For more information, see Color Rendering Configuration in the Data Center Real User Monitoring Data Mining Interface (DMI) User Guide.

Health Check Icons

The health check icons are indicators of system problems that are listed on the Tools > Diagnostics > System Status screen; the problem must be marked as Shown as health check status for an icon to appear on CAS reports.

The icon appears on the navigation bar if the number of operations, servers, clients, or sites exceeds the limit preset in the server configuration files. The icon also appears during server startup to warn that data loading is in progress and that the reports may be incomplete.

The icon appears on the navigation bar to warn that no current monitoring data is available from the AMD, which may be because the AMD is down or the network connection between the AMD and the report server is broken.

Figure 17. Navigation Bar with a Warning Icon

Click the icon to access the System Status report. For more information, see System Status in the Data Center Real User Monitoring Administration Guide.

Aliasing: Customizing Dimension and Metric Names

You can refer to dimensions and metrics using customized names called aliases. You can define aliases for a particular data view or for a particular report.

Global Aliases for a Selected Data View

You can define alias names for the dimensions and metrics available in a particular data view on the Subject Data tab. Click the Global aliases link above the table to display the Aliases window.

Figure 18. Example Alias Definition Screen

Your settings will be applied to all DMI reports that are based on the selected data view. If you do not want to use aliases on the current report, select Disable aliases for current report. Click OK to save the new alias definitions. Click Reset to clear all aliases from the screen.

Aliases for a Single Report

You can define aliases for dimensions and metrics for a particular report on the Result Display tab. Click the icon above the dimension and metric names to display the Aliases window.

Figure 19. Aliases Window, Table Header Aliases Tab

The following aliasing functionality is available:

General Aliases
On the General Aliases tab, you can define new names for selected dimensions and metrics.

Table Header Aliases
On the Table Header Aliases tab, you can define table headers, that is, new names for selected dimensions and metrics in the report table. These names override the names specified on the General Aliases tab. You can also define multi-level shared headers for a number of table columns. This configuration tab is available only if tables are actually being used by the report.

Chart Aliases
On the Chart Aliases tab, you can define new names for selected dimensions and metrics as they should appear on charts. These names override the names specified on the General Aliases tab. This configuration tab is available only if charts are actually being used by the report.

Defining Multi-level Table Headers

To define a multi-level shared header for a number of table columns, select the Table Header Aliases tab, type the shared header name in each of the applicable boxes, press [Enter], and then type the dimension or metric name as appropriate. Click OK to save the new alias

definitions. If you define a header alias, an icon appears next to the dimension or metric name to notify you of the defined header.

As shown in the following figure, Client bytes, Server bytes, and Total bytes are grouped under one header: Usage.

Figure 20. Table with a Header Alias

NOTE: You can define more than one header level. For example, you can divide the metrics grouped under the Usage header into two groups: Client/Server and Total. To do this, type the consecutive headers in separate lines.

Special Considerations for Using Aliases for Charts

Chart aliases simply define a new name to display in the chart for the specified dimension or metric. This new name takes precedence over any other name, either the original name of the dimension or metric or the alias defined on the General Aliases tab. Using chart aliases is particularly useful when metric names are long. For chart aliases, if you do not want a dimension or metric name to appear at all on the chart, insert a single hyphen ( - ) instead of a new name for the dimension or metric.

Tooltips

A tooltip appears when you hover over (hold the mouse pointer over without clicking) a metric value or a status icon. When Color rendering is set to Custom, the tooltip provides information on the metric value and the threshold levels for the displayed status icon.

Figure 21. Tooltip over a Cell with Custom Rendering

When Color rendering is set to Benchmark, the tooltip shows the measured and benchmark values that were used for comparison and for calculating the color status.

Figure 22. Tooltip over a Cell with Benchmark Rendering

Use the tooltip metrics list, which you can access by clicking a link in the Tooltip metrics column, to configure the set of metrics that are displayed in the tooltip.

Figure 23. Example Tooltip Configuration Options

Chapter 7 Data Mining Interface

The Data Mining Interface (DMI) is a universal tool for report generation. It is available to selected users in your report server installation, and is accessible under Reports > DMI. Refer to the Data Mining Interface User Guide for a detailed description of this tool.

DMI reports have variable time-range settings, variable resolution settings, and dynamic sorting and filtering mechanisms. Trending and baseline data is also available for DMI reports. Trending data is used transparently when necessary, while baseline data is mixed with current data on the same screen.

Use DMI to generate tabular reports and charts and to mix multiple report sections on the same page. The reports can have a hierarchical structure with contextual drilldown, sibling, and parent reports. Report definitions are saved in the database and reports are re-run when opened. The DMI is equipped with an integrated persistent report cache that optimizes report re-run requests in the context of real-time data changes in the database.

The DMI integrates with the Central Analysis Server database, providing access restrictions based on the Central Analysis Server user identity. Predefined DMI reports are available for various types of users and include high-level scorecards for IT executives, and dedicated planning and monitoring reports for staff responsible for application service delivery. The DMI can also be integrated with VantageView and used as the custom reporting engine.

DMI uses product-specific data views. Each data view supports its own set of dimensions and metrics. CAS-specific data views are described in Central Analysis Server Data Views [p. 197].


Chapter 8 Reports Menu

CAS reports available from the Reports menu consolidate DC RUM and Enterprise Synthetic into one cohesive solution from an information delivery perspective. They enable you to see a complete view of your application performance. Each application can be divided into layers (tiers). CAS reports present this division with measurements originating from different sources: synthetic agents and passive monitoring devices.

CAS reports include:

Application Health Status Report [p. 86]
Showing performance problems and their impact on business.

EUE Overview Reports [p. 95]
Showing an overview of many types of measurements collected by the CAS from various sources.

Synthetic Overview Reports [p. 131]
Showing data from Enterprise Synthetic or Backbone.

RUM Analysis Reports [p. 146]
Providing insight into the infrastructure and application performance throughout the tiers, sites, and software services.

Network Analysis Reports [p. 175]
Showing a network view of the traffic.

The reports are based on DMI. You can customize these reports by clicking Edit report in the Actions list to display a page containing all the data views on which the reports are based. For more information on how to use DMI, refer to the Data Mining Interface (DMI) User Guide, which you can access from Help > Books > Data Mining Interface User Guide.

Hierarchy of Applications and Transactions

Applications are containers for transactions. Transactions contain operations, such as URLs, queries, or form submissions, and other reporting hierarchy levels. A unique transaction is defined by a pair of names: the application name and the transaction name. Data for a transaction can come from:

an Agentless Monitoring Device
an Enterprise Synthetic agent

Figure 24. Hierarchy of Applications and Transactions (an application contains transactions; each transaction contains operations and XML calls)

Application Health Status Report

The Application Health Status (AHS) report displays, at a glance, performance problems and their impact on business. Operations teams, service desk teams, and application owners can use it as they conduct diagnostics to find and solve performance problems in the application delivery chain. The AHS report displays data from all user experience measurement sources, such as probes (AMDs), browser instrumentation (User Experience Management), and synthetic sources (Enterprise Synthetic and Backbone). This data is combined to display a single integrated view of end-user experience for all monitored applications.

NOTE: In the AHS report, the metric Failed Operations is the same as the Failures (Total) metric used in the other CAS reports.

How to Access This Report

The AHS report is the default landing page for all CAS users. To access this report from within the CAS, select Reports > Application Health Status from the CAS main menu bar. The AHS report has two perspectives: Summary and Detail. You can choose to skip the Summary perspective in the configuration.

How to Access This Report on Your Smartphone or Tablet

You can also access this report on your smartphone or tablet with MobileAPM, the Compuware APM Mobile Client. With a couple of taps and a few swipes, you can quickly check the health of your mission-critical applications in real time, isolate the fault domain of an application problem, and hand it off to the appropriate team to isolate the root cause.

To download MobileAPM for Android, go to

To download MobileAPM for iOS, go to

If you do not have direct internet access, you must configure an HTTP proxy. To configure the app, tap Settings. You must connect remotely to the CAS.

Summary Section

The Summary perspective displays a high-level overview of performance issues as well as business impact information and alert notifications. More detailed information is available when you click the numbers in each row.

The following are best practices for using the Summary perspective:

The AHS Summary page is designed for environments that include DC RUM as their main data source. Do not monitor applications with Sequence Transactions data only. For a Sequence Transactions application, configure the appropriate operation-level rules in the Business Units Configuration screen to monitor the operation level, availability, and other problems that are not visible in the Sequence Transactions data. For more information, see Using the Business Units Screen in the Data Center Real User Monitoring Administration Guide.

To monitor synthetic (Enterprise Synthetic Backbone) data only, or DC RUM Sequence Transactions data only, monitor those applications in the CAS and reconfigure the AHS report to show the Detail perspective (Applications) by default.

Edit Application List
Click to select or clear the monitored applications that are included in the Summary perspective.

Monitoring Period
Select a time range for which data is captured. As you analyze the performance, use this to determine when issues occur. For more information, see Options on the Application Health Status Section in the Data Center Real User Monitoring Data Mining Interface (DMI) User Guide.

Options
Select a primary metric to act as the driving metric. Application Health Index is selected by default. Select Edit Section, which includes configuration options for the data source, display settings, time range settings, and report columns.
Applications
This tile shows the total number of applications monitored and gives you a breakdown by application state:

Total
The number of applications being monitored.

Severe
The number of monitored applications whose state is Severe. Click the Severe total to display a report of the applications in this state.

Warning
The number of monitored applications whose state is Warning. Click the Warning total to display a report of the applications in this state.

Good
The number of monitored applications whose state is Good. Click the Good total to display a report of the applications in this state.

The application state is categorized as severe, warning, or good based on the choice of primary metric as set in Options. Click any of these numbers to display the Applications detail perspective for that category. For more information, see Application Health Status - Applications [p. 89].

If the CAS is connected to more than one User Experience Management source, the following rules apply:

The real user monitoring source type (Probe or Browser) depends on the application configuration in the Business Units Configuration screen.

The worse of the two possible synthetic (Enterprise Synthetic or Backbone) sources is employed if the number of active real users/visits is less than the threshold (5 by default).

Primary Metric (Application Availability, Application Health Index, Application Performance, Operation Time, or Operations)
This tile displays the driving metric averaged across all predefined front-end data center type tiers (and browser metrics, if available) for an application. Click Options to change this to another metric. Click a number to open a detailed Data Center Analysis report for an analysis of data from Probe measurements.

Business Impact
This tile displays the number of total unique users and users affected by performance or availability problems. Click the number to open a detailed User Health report. For more information, see User Health Report [p. 171].

All Alert Notifications
This tile displays the alert notifications count. Alert notifications apply to metric alerts only. To open the Alerts report, click the number or a graph bar.
For each alert, the Alerts report shows the time it happened, the ID for the transaction or step, and a description. For more information, see Alerts Report [p. 94].

Network Section

Network Performance
This tile shows the total number of software services being monitored and a chart of software service throughput and nodes. The chart shows the node count, throughput (autodiscovered and user-defined), and baseline over the preceding period. Hover over any monitoring interval on the Network Performance chart to display the node count, throughput (autodiscovered and user-defined), and baseline for that interval.

Click the Network Performance icon or the Software Services total to open the Software Services Overview report showing all software services being monitored. For more information, see Software Services Overview [p. 150].

NOTE: Use the drilldown link from the AHS report to the Software Services report to verify the current software services configuration. Because the Software Services report may be slower to generate in this context, however, use Reports > RUM Analysis > Software Services when you are conducting fault domain isolation.

Click a monitoring period on the Network Performance chart to open the Software Services Overview report showing all autodiscovered software services being monitored over the selected period. For more information, see Software Services Report [p. 148].

Application Health Status - Applications

The Applications detail perspective displays detailed information about the monitored applications you select in the Application Health Status report Summary perspective. The Application Health Status summary rows appear across the top of the page for reference. Hover over the charts to see detailed information about the data points. For more information, see Application Health Status Report [p. 86].

Click Summary to return to the Summary perspective, which displays a high-level overview of performance issues as well as business impact information and alert notifications.

Edit Application List
Click to select or clear the applications that are displayed on the report.
Monitoring Period
Select a time range for which the data is displayed. Use this to set the time horizon for a performance analysis of your environment. For more information, see Options on the Application Health Status Section in the Data Center Real User Monitoring Data Mining Interface (DMI) User Guide.

Options
Select a primary metric to act as the driving metric used to determine the status of applications, transactions, and tiers on the report. Application Health is selected by default. Select Edit Section for advanced report customization, which includes configuration options for the data source, display settings, time range settings, report columns, thresholds, and filters.

Applications List Table and Overlays
The status of each application is displayed in a row containing icons and bars that represent different aspects of application health, along with trend charts. Click an icon in a metric row for more detailed data and trend charts (overlay). Click a trend chart to drill down to more detailed reports in the context of a selected time period.

Overall Health Status Bar
As on the summary report, an application state is categorized as severe, warning, or good based on the primary metric selected in Options. If your CAS is connected to more than one User Experience Management source, the following rules apply:
- The real user monitoring source type (Probe or Browser) depends on the application configuration on the Business Units Configuration screen.
- The worse of the two possible synthetic sources (Enterprise Synthetic or Backbone) is used if the number of active real users/visits is less than the threshold (5 by default).

Application Name
This is the application name as defined on the Business Units Configuration screen.

Transactions / Steps
Transactions and steps are groups of operations across various tiers that support business tasks for an application or steps for a transaction. Steps have sequence numbers so you can follow the order of operations for a task. This area of the report shows the percentage of transactions for an application, or steps for a transaction, that perform successfully or problematically. There must be at least two front-end tier transactions / steps for the icon to appear.
The color of the icon represents the proportion of problematic or successful transactions / steps:
- 100% Green: All transactions / steps are performing successfully
- 85% Green: 15% of the transactions / steps are problematic
- 75% Green: 25% of the transactions / steps are problematic

- 50% Green: 50% of the transactions / steps are problematic
- 25% Green: 75% of the transactions / steps are problematic
- 100% Red: All transactions / steps are problematic

Click the icon for a list of transactions / steps. Click a transaction / step for an overlay that includes Application Performance, Availability, Operation Time, and Usage trend charts.

Primary Metric (Health Index, Availability, Performance, Operation Time, or Operations)
Represents the overall health of the application or transaction based on the percentage of software service operations completed in a time shorter than the defined performance threshold, indicated by the (severe), (warning), and (good) icons. Click Options to choose which of these primary metrics to display here:
- Health Index: The percentage of front-end operations completed in a time shorter than the defined performance threshold compared to all operation requests.
- Performance: The percentage of front-end operations completed in a time shorter than the defined performance threshold compared to all successful operations.
- Availability: The percentage of front-end operations successfully completed compared to all operation requests.
- Operation Time: The current operation time of all front-end operations compared to the baseline.
- Operations: The current operations volume of all front-end operations compared to the baseline.

Click an icon for an overlay that includes Application Health, Availability, Operation Time, and Usage trend charts. The overlay can have between one and three tabs, depending on how many real user measurement sources (Probe, Browser, or Sequence Transactions) are used to monitor the application. Click a chart, or the links below the charts, to open the Data Center Analysis or other drilldown reports, depending on the data source. If available, the column displays the calculated Primary Reason for Slowness status in one of the categories below.
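The three ratio-based primary metrics reduce to simple percentages over operation counts. The following sketch is illustrative only; the function name and counter names are assumptions, not part of the product:

```python
def primary_metrics(total_requests, successful, within_threshold):
    """Illustrative calculation of the ratio-based primary metrics.

    total_requests   -- all front-end operation requests in the interval
    successful       -- operations that completed successfully
    within_threshold -- successful operations that completed faster than
                        the defined performance threshold
    """
    if total_requests == 0:
        return None  # no traffic in the interval
    health_index = 100.0 * within_threshold / total_requests
    availability = 100.0 * successful / total_requests
    # Performance is measured against successful operations only.
    performance = 100.0 * within_threshold / successful if successful else 0.0
    return {"health_index": health_index,
            "availability": availability,
            "performance": performance}
```

For example, with 200 requests of which 190 succeeded and 171 completed within the threshold, the Health Index is 85.5%, Availability is 95%, and Performance is 90%.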
Click the icon to see the details of slow operations.
- Application design: number of components, redirect time, response size, request size

- Client/3rd party
- Data center
- Network: latency, loss rate, other
- Multiple reasons

The Trend column represents the performance of an application over a time range. This is measured by Operation/Transaction time in the context of traffic volume, measured by the Total Number of Operations across the front-end Data Center tiers. Use this information to determine whether the volume of traffic impacts performance. The two micro charts represent Operation Time (sparkline above) versus Total Operations (microbars beneath). Observe the patterns of activity leading up to the current state; these may point to the current impact or fault domain. Hover over each Trend bar; the bars display the breakdown data for a specific time range. Click a node to open the Data Center Analysis report for this time range. For more information, see Data Center Analysis Report [p. 146].

Synthetic
Represents the overall health of the application or transaction based on the percentage of synthetic transactions completed in a time shorter than the defined performance threshold. This column combines both Enterprise Synthetic and Backbone Synthetic measurements and displays the worst data source. Click the (severe), (warning), or (good) icon for an overlay that includes Health, Availability, and Transaction Time trend charts, as well as a list of synthetic locations. Click a chart, the table entries, or the links below the charts to open contextual drilldown reports.

Business Impact
Represents the business impact expressed by the number of unique total and affected users of the application or transaction. This is based on the number of users that experience application performance or availability problems (the larger number is displayed). The total length of the bar is rendered in relation to the most active application on the list, while the dark grey part represents affected users.
Click the bar for an overlay that includes Affected Users and Application Health trend charts, as well as a list of most affected users. Click a user to see the corresponding User Health report. For more information, see User Health Report [p. 171].

Network
Represents the network health of the worst segment of the application delivery chain (client network, network, WAN optimized network, Citrix/WTS tier) based on network performance (the percentage of total traffic delivered in conditions not much worse than normal). The colors indicate status: severe, warning, or good. Click the icon for an overlay that includes Network Performance, End-to-End RTT, and Two-Way Loss Rate charts, as well as a list of the most affected client sites. Click a node or a client site to see the corresponding Location Health report. For more information, see Location Health Report [p. 171].

Data Center Tiers
Represents the health of each Data Center tier that is used by an application or transaction, as configured in the CAS. The performance is determined by the Primary Metric configured in Options. Up to ten tiers appear as full icons; beyond that number, smaller symbols appear. The tiers are represented by the following icons, in colors indicating severe, warning, or good status:
- Application tier (Generic analyzer)
- Cerner
- Database
- .NET
- EPIC
- Exchange
- FIX
- Load Balancer

- LDAP
- MQ
- Middleware (XML, Tuxedo Middleware, SOAP)
- Oracle Forms
- SAP
- Web
- WebSphere

Click an icon for an overlay that includes Application Health, Availability, Operation Time, and Usage charts. The overlay can have one to three tabs, depending on the number of real user measurement sources (Probe, Browser, or Sequence Transactions) used to monitor the application. Click a chart or the links below the charts to open the Data Center Analysis report or other drilldown reports, depending on the data source. For more information, see Data Center Analysis Report [p. 146].

Navigation
Use Find at the bottom of the page to find a specific application.

Alerts Report

The Alerts report lists the history of alerts for a selected period of time.

How to Access the Report
From the CAS top menu, choose Reports > Application Health Status and click the number or a graph bar in the Alert Notifications section.

Report Contents and Usage
The Alerts report lists metric alerts only. For each alert, you see the time it happened, the ID of the transaction or step, and a description. For more information, see Alert System [p. 191].

Drilldown Reports
Click the Alarm details link in the Description column to access a detailed report for the selected alert.

EUE Overview Reports

EUE Overview reports provide a complete view of your monitored environment and application performance. These reports show data collected by the CAS from various sources (Enterprise Synthetic and DC RUM). The EUE Overview report workflow consists of two main parts:
- Business hierarchy, which shows the monitored data organized into applications, transactions, and sites.
- Technical hierarchy, which shows the monitored data technically divided into software services, servers, and operations.

Use the business hierarchy to drill down through three levels of reports, starting from the overall view of applications. A separate path leads through applications to transactions filtered for a selected site. Each report has a high-resolution equivalent that shows data with finer granularity.

Three drilldown links are available from each CAS report in the EUE Overview report workflow:
- From the first column on the report to Metric Charts reports.
- From the Unique and affected users (performance) column to the All Users report.
- From the Errors or TCP errors column to the Errors report.

Tiers are the links between the business hierarchy and the technical hierarchy. For more information, see Multi-Tier Reporting [p. 98].

Figure 25. Overview of the Report Workflow
(The figure outlines the report workflow. Business hierarchy: Level 1: Applications, Tiers, Sites; Level 2: Tiers for Application, Transactions for Application, Sites for Application; Level 3: Tiers for Transaction, Sites for Transaction, Applications for Site, Transactions for Site. Technical hierarchy: Software Services, Servers, Operations, Sites. Multi-level reporting hierarchy: Services > Modules > Tasks > Operations, and Regions > Areas > Sites for locations.)

Monitored Traffic from a Business Perspective

When you create applications and transactions on the CAS, you organize the monitored data according to a higher level of abstraction. You apply a business view to data that is technically divided into tiers and services. After creating applications and transactions, you can use dedicated reports to supervise your strategic application performance. This step is mandatory only if you integrate CAS data into BSM. For other product configurations, your system will work without applications being defined. However, if you want

to obtain a less technical view and go beyond simple network performance monitoring, you should consider performing this part of the configuration.

Applications and transactions defined on the CAS are based on and represent your own view of the monitored subjects. The rules are based on the logic imposed by the individual who configures the CAS. These rules are meant to represent an actual software organization running on a web server and the relations between its components.

Applications, Transactions, and Tiers

Application
An application is a universal container that can accommodate transactions. Each application can contain one or more transactions; those transactions can originate from different sources. An application defined on the CAS is a cohesive container that helps you organize information about the application delivery chain. Applications organize the data traveling through your network into logical units or tasks. These tasks are performed over the network. You can distinguish each web application running on a single web server.

Transaction
A transaction consists of operations that are grouped as steps. Transactions are built out of a single step or a number of steps. For example:
- A simple, single-step transaction may consist of a single operation such as a web page load.
- An extended transaction may consist of a collection of non-sequenced operations (an unstructured transaction).
- A more complex transaction may consist of sequences of operations, each operation being a single step.

DC RUM monitors sequences of web page loads and sequences of XML calls, and it reports on these sequences (as transactions) and on individual operations within sequences. A transaction defines a logical business goal, such as registration in an online store. One or more transactions constitute an application. Note that a transaction can have only one parent application.
Data for a transaction can come from:
- Agentless Monitoring Device (AMD)
- Enterprise Synthetic agent

The same transaction can contain data from different data sources at the same time (for example, data from the AMD and from Enterprise Synthetic). However, metrics for each data source are aggregated separately.

Tier
A tier is a logical application layer. It is a representation of a fragment of your monitored environment. The front-end tier in a user-defined configuration is the layer that is closest to the end user.

How Tiers, Transactions, and Applications Are Related

Each application is defined as a set of transactions and can be divided into logical layers (tiers) such as web servers, middleware, or a database. Each transaction can be a collection of data gathered across various tiers, as shown in the following figure.

Figure 26. Transactions Spanning Three Tiers
(The figure shows an application with a front-end tier (HTTP + SSL), a middleware tier (SOAP), and a database tier (TDS), and three transactions crossing them: Login > GET(token) > SELECT username FROM Table db_type_1; Search > find_items(type2) > SELECT * FROM items_table_type_2; Submit order > save_order(type3) > INSERT order INTO orders_table_type_3.)

Each transaction can be a collection of operations occurring within one tier. Each tier is defined as a set of rules dependent on the tier type. All tiers are defined globally. The global definition applied to a specific application shows the appropriate sequence of tiers for that application.

Multi-Tier Reporting

The Tiers report provides an overview of many types of measurements collected by the CAS from various sources. N-tier network architecture is presented as tiers (layers). The layered view facilitates the combination and presentation of data that comes from different Dynatrace or third-party components. When your application data travels through the network, its path is divided into several tiers (layers) and is ordered per measurement point (where monitoring devices record the data). The tiers result from combinations of software services that form structural layers. For example:
- End user
- Application delivery
- Web server
- Application server
- Database

Tiers are either pre-defined or they are defined by the user on the CAS or in the RUM Console. Each tier represents the point where measurements are made as well as the method by which

the measurements are collected. By default, software services are assigned to a default tier configuration according to the analyzer type for DC RUM data.

NOTE: Starting from CAS Release 12.0, multi-tier reporting replaces the DataCenter View functionality.

Depending on your configuration, you can conceptualize the tier view in two ways:
- Processing: How data travels through the network in chronological order. For example, the observed route could be: client, load balancer, web server, database server. This is similar to the default CAS configuration.
- Measurement point: A specific point where you measure and monitor data. A tier may not be strictly connected with topology; for example, the front-end tier may contain measurements from several web servers at different locations.

Default Tiers

After the CAS is installed and deployed, there are several default tiers. Default tiers have assigned rules and cannot be modified by the user. The rules are based on the type of monitored traffic (analyzer type). This is the most universal solution: the data is measured at various points, classified by type, and presented as layers. With this automatic configuration, you can instantly monitor various points either within your network or at client locations. You can also create transactions and applications on the CAS and apply an additional, business perspective to the observed traffic.

The Concept of the Front-end Tier

The front-end tier is an abstract concept derived from user-defined configuration. It can be understood as the layer closest to the end user. You can base your configuration on logical application architecture or on measurement point location. A front-end tier always exists as a part of an application defined on the CAS. With front-end tiers defined, you can see data on the Applications report.
The front-end tier definition helps you set a logical pattern for the physical location of Dynatrace measuring points; you can clearly differentiate between your end users and your data center. By defining more than one front-end tier for an application, you can add a clearer perspective on the user side, because users can be classified based on the way they approach your application. For more information, see Front-End Tiers for Application [p. 107]. With the default configuration, soon after the CAS is deployed, tiers based on the HTTP, HTTPS, Oracle Forms, SAP, Cerner, and Exchange analyzers are always the front-end tiers for a given application.

Logical Front-end Tier Assignment

Front-end tiers play an important role for applications defined on the CAS. They are displayed on the Applications report; to see all tiers for a given application, navigate to the Tiers

report for that application. This enables you to quickly review all the layers closest to your end users. You can assign a logical front-end tier through the Business Units screen.

Example
Users of the same application can use different methods to access its resources. For example, two end users may be using the same application server; however, one approaches from the Internet and the data travels through a web server, while the other accesses the application server directly with a dedicated client or web service. In the first case, the web server is the front end for your application; in the second case, the application server itself is the front end.

Figure 27. Different Front-end Tiers for an Application
(The figure shows end user A reaching the application through a web server over HTTP, with SOAP and database tiers behind it, so the web server is that user's front-end tier; end user B accesses the application server directly over SOAP, so that tier is the front end for user B.)

Multi-Level Hierarchy Reporting

Business-critical applications or advanced web applications that support core business processes typically have complex multi-level structures of business-related transactions, tasks, steps, processes, and operations. DC RUM provides data organization for reports that reflect the business structure of the monitored application. The service-module-task-operation hierarchy provides technical performance information. For some analyzers, for example SOAP, the operations hierarchy is inherent. CAS reports organize data in a clear manner, making fault domain isolation easy. The hierarchy levels are either automatically reported by the AMD (for example, for HTTP, SOAP, or Cerner) or configured on the report server (for example, for SAP).

Operations performed by monitored applications are grouped and presented according to a certain pattern. Hierarchy levels above single operations are containers for the lower-level entities. You can group smaller (but numerous) items and limit the data you observe on reports.
You can use this hierarchy model, with a business perspective (applications and transactions), to create greater logical compounds that reflect part or all of your monitored environment.

The analyzer type determines whether the CAS can provide data for hierarchy levels. Some analyzers provide only one or two levels, or none at all. The CAS can report on up to four levels for the following traffic types:
- HTTP
- SAP
- Cerner
- SOAP
- Database
- SMB

Each level is reported independently or combined with the other levels. You can use DMI to create reports with entries from arbitrarily chosen hierarchy levels, for example, displaying metrics for level-one entries paired with their level-three entries.

Hierarchy Levels

In the current DC RUM release, the following hierarchy levels are supported:

Operation
For HTTP, this is the URL of the base page to which the hit belongs. For other analyzers this can be a query, an operation type, or an operation status. Operation is ascertained by the AMD based on the referrer, timing relations between hits, and per-transaction monitoring configured on the AMD. This dimension can assume values of a particular operation, if that operation is monitored. Note: The visibility of this dimension on reports depends on whether another server-related dimension (for example, server IP or server DNS) has been used when formulating the query.

The All other operations record serves as a catch-all for all the traffic that has been seen to or from a server but was not classified as belonging to a specific operation monitored by name. It accounts for statistics of:
- Operations that were not reported in per-operation records (for example, those that fall outside the top N reported operations for a specific analyzer); in this case the number of operations and slow operations, as well as operation time and other transactional statistics, are reported as an aggregate/average.
- Traffic that was not classified to any operation (for example, an idle TCP session closure or a TCP handshake without any operation); in this case only volumetric statistics (bytes, packets) are reported for this traffic.
Task
Task is the second level in the reporting hierarchy. For example, in HTTP monitoring this is the page name; in database monitoring this is the operation name (which may contain a regular expression if configured on the AMD) or the operation type prefix; and in SOAP monitoring this is the SOAP method. This entity can be broken down into smaller units such as operations or operation types.

Module
Module is the third level in the reporting hierarchy. For example, in database monitoring this is the database name, and in SOAP monitoring this is the SOAP service. This entity can be broken down into smaller units such as tasks.

Service
Service is the highest level of the multi-level reporting hierarchy. For example, in SAP GUI monitoring this is the business process. This entity can be broken down into smaller units such as modules.

Figure 28. Complete Scheme of Reporting Hierarchy in DC RUM
(The figure relates the business hierarchy, Application and Transaction, to the technology hierarchy, Service, Module, Task, and Operation, across the front-end, middleware, and back-end tiers.)

Table 4. Hierarchy Levels for Selected Analyzer Groups

Service (visibility: CAS)
  SAP: Business process
  SMB: Server name

Table 4. Hierarchy Levels for Selected Analyzer Groups (continued)

Module (visibility: CAS)
  SOAP: SOAP Service
  SAP: Process step
  Cerner: Application
  Database: Database name
  SMB: Share name

Task (visibility: CAS)
  HTTP: Page name (for DC RUM data)
  SOAP: SOAP Method
  SAP: Process step
  Cerner: Service or Transaction
  Database: Operation name (with optional regular expression) or operation type prefix
  SMB: Folder path

Operation (visibility: CAS)
  HTTP: URL
  SOAP: SOAP call
  SAP: Object (T-Code and operation status)
  Cerner: Operation
  Database: SQL command
  SMB: Read, Write, Control

Hits (visibility: ADS)
  HTTP: Hits, pages, transactions
  SOAP: Operation, transaction
  SAP: SAP operation name plus window name
  Cerner: Operation, window name, window status
  Database: Full query
  SMB: N/A

In addition to the CAS, the ADS shows non-aggregated data, exact time stamps, and one entity per record. Note that the ADS does not support the hierarchy itself; any entity other than operation is ignored.

All Other Aggregate
The All other aggregate appears on reports in places where no named entity for the given level is found. This information is either derived from the AMD configuration or reflects the specifics of the monitored protocols. For example, in HTTP monitoring all levels above task are aggregated as All other module and service, because they do not exist within the current model.

Hierarchy on Reports
The reports are available from the Reports menu. They provide a unified way of representing the hierarchy of measured entities. Section tabs represent the hierarchy levels. You can switch perspective and examine the data either up or down the hierarchy. To obtain performance information on the hierarchy levels, start from the Tiers or Software Services report, go down to the Servers report, then view the Operations report for the given server or all servers within a tier. View all or selected hierarchy levels with the following reports:
- Operations

- Tasks
- Modules
- Services

Use the Operations report to display data for selected or all software services. You can analyze how the top of the hierarchy is affected by certain operations, software services, and servers.

Hierarchy Levels for Database Monitoring

There are three levels of reporting hierarchy for database monitoring:
- Operations: Queries or procedure calls.
- Tasks: Query names, query names plus a regular expression set on the AMD, or operation type prefixes.
- Module: The database name is reported as the module.

Some hierarchy levels may not be reported because of database monitoring configuration settings. You may see a missing database name, operation name, or query. The following scenarios are possible:

Missing database name:
  (Module) All other / (Task) Operation name + regex / (Operation) SQL command
  Occurs when database name monitoring is not configured or the AMD is unable to report the database name because it cannot detect the beginning of the session.

Missing operation name (with regular expression):
  (Module) Database name / (Task) SQL command / (Operation) SQL command
  Occurs when you configure the AMD to monitor individual queries, and you choose exact queries as a configuration rule but do not set names for queries.

Missing database name and missing operation name (with regular expression):
  (Module) All other / (Task) All other / (Operation) SQL command
  Occurs when the AMD does not report the database name and operation name reporting is not configured.

Missing operation name (with regular expression) and missing query:
  (Module) Database name / (Task) Operation type prefix / (Operation) empty
  Occurs when an operation other than a query or RPC (for example, login or logout) is recorded by the AMD.
Missing operation name (with regular expression), missing query, and missing database name:
  (Module) All other / (Task) Operation type prefix / (Operation) empty
  Occurs when database name recognition is not configured or the database name is not seen in the traffic, and the operation is not a query or an RPC.

To remove these limitations, fine-tune the monitoring settings using the RUM Console. For more information, see Individual Query Monitoring in the Data Center Real User Monitoring Database Monitoring User Guide.
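The scenarios above amount to a fallback rule at each hierarchy level. This hypothetical sketch (the function and parameter names are assumptions, not CAS code) reproduces the combinations listed:

```python
def database_hierarchy(db_name=None, op_name=None, query=None,
                       op_type_prefix=None):
    """Illustrative sketch, not the actual CAS logic, of how missing
    configuration collapses the database reporting hierarchy.

    db_name        -- database name, if monitored and detected
    op_name        -- operation name (with optional regex) set on the AMD
    query          -- the SQL text, if the operation is a query or an RPC
    op_type_prefix -- fallback label such as "login" or "logout"
    """
    # Module falls back to "All other" when no database name is reported.
    module = db_name if db_name else "All other"
    if op_name:
        task = op_name                       # configured operation name
    elif query:
        task = query if db_name else "All other"
    else:
        task = op_type_prefix                # non-query operation
    operation = query                        # None for non-query operations
    return module, task, operation
```

For example, a login recorded without database name recognition yields ("All other", "login", None), matching the last scenario above.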

Hierarchy Levels for SAP Monitoring

The SAP reporting hierarchy is imported from a configuration file that is generated on SAP Solution Manager. Use this file to couple the DC RUM reporting hierarchy tightly with the SAP application business hierarchy. For more information, see Central Analysis Server Reporting Hierarchy for SAP in the Data Center Real User Monitoring SAP Application Monitoring User Guide.

Hierarchy Levels for SMB Monitoring

There are four levels of reporting hierarchy for SMB monitoring. The levels of the hierarchy are mapped as follows:
- Services: Server name
- Module: Share name
- Task: Folder path
- Operations: Limited to Read, Write, and Control

The operation name is identified by the SMB action performed, plus the service, module, and task (performed action: \\Service\Module\Task). For example: Read \\files-server-host\myshare\mypath\*.txt

EUE Overview Reports Structure

The Applications and Tiers reports represent the logic behind typical communication between users and the data center. These reports bring a business perspective to the presented data. You can see data collected on the client side, network-related data, and data collected within the data center. The strict relation between applications and tiers enables you to quickly determine where an element that is part of the monitored application is located in the delivery chain. EUE Overview reports provide a starting point for analysis from which you can instantly reach the application view, tier view, or network view.

Depending on your Dynatrace setup, you may not be able to see all available tiers on the Tiers report. You may see only selected tier types; for example, only tiers based on DC RUM data. If your report server receives data from Enterprise Synthetic Agent Manager and contains statistics provided by Enterprise Synthetic agents, you can see data collected on the client side and presented as part of the Synthetic tier statistics.
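The \\Service\Module\Task naming convention makes it straightforward to recover the hierarchy levels from an SMB operation label. The following is a hypothetical parsing sketch, not a DC RUM API:

```python
def smb_hierarchy(operation_label):
    """Split an SMB operation label of the form
    "<Action> \\\\<server>\\<share>\\<folder path>\\<file>"
    into the reporting hierarchy levels described above."""
    action, _, unc = operation_label.partition(" ")
    parts = unc.lstrip("\\").split("\\")
    service = parts[0]                               # Services: server name
    module = parts[1] if len(parts) > 1 else None    # Module: share name
    task = "\\".join(parts[2:-1]) or None            # Task: folder path
    return {"service": service, "module": module,
            "task": task, "operation": action}
```

Applied to the example above, Read \\files-server-host\myshare\mypath\*.txt maps to service files-server-host, module myshare, task mypath, and operation Read.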
Data for optimized WAN links (real user traffic and synthetic traffic) is shown as a Client optimized network tier if your AMD is configured to provide measurements for this kind of traffic. Real user traffic provided by the Thin Client Analysis Module is shown for the Citrix/WTS (presentation) tier. In other words, you can see a complete application delivery chain that is relevant to your monitoring environment. The more data collector types you have deployed, the more accurate the available data will be.

Perspective Tabs
EUE Overview reports are grouped under three tabs, allowing you to quickly change the perspective of the viewed data. Each perspective tab opens a separate report:

- Applications: Applications report. For more information, see Applications Report [p. 107].
- Tiers: Tiers report. For more information, see Tiers Report [p. 111].
- Sites: Sites report. For more information, see Sites Report [p. 123].

Note that the links to the reports are context-free, so no filters are inherited after you change perspective. In this way you can obtain a full picture at every viewpoint.

Default Configuration of the Tiers Report

The configuration of the Tiers report is based on the default configuration, which is read during the first CAS startup. The default tier definitions are based on analyzer types, transactions, or sites. The default configuration is immediately affected by the types of data collectors connected to the CAS and by the types of software services defined. This means that the Tiers report displays only those tiers that match the data detected in the monitored traffic. The complete list of tiers includes:
- Synthetic, matching all traffic coming from Enterprise Synthetic agents.
- RUM sequence transactions, matching all sequence transactions defined on the AMD.
- Epic, matching Epic transactions.
- Client network, matching all traffic for client sites, except the All Other site.
- Client optimized network, matching network-accelerated environment traffic.
- Citrix/WTS (presentation), based on the ICA analyzer.
- Network, matching all traffic for the All Other site.
- Website, based on the Web analyzer group, including the HTTP, SSL, and Oracle Applications analyzers.
- Oracle Forms, based on the Oracle Forms analyzer group.
- SAP, based on the SAP analyzer group.
- Exchange, based on the Exchange analyzer.
- Cerner, based on the Cerner analyzer group.
- Middleware, based on the Jolt (Tuxedo) analyzer group, including the Jolt, XML, SOAP, and SAP RFC analyzers.
- Message Queue, based on the IBM MQ analyzer.
- Database, based on the Database analyzer group, including the Oracle, Informix, TDS, and DRDA (DB2) analyzers.
- Data center infrastructure, based on the Datacenter analyzer group, including the DNS, Generic (with transactions), ICMP, IP, NetFlow, Non IP, SMB, SMTP, and UDP analyzers.

If the default configuration does not fit your network architecture, you can change the definition of the Client network and Network tiers using the Business Units screen available on the CAS.

Use the Client network and Network tiers to separate the client traffic from the data center traffic. To do this, you must configure the Client network tier manually. For more information, see Network Tiers in the Data Center Real User Monitoring Administration Guide.

Front-End Tiers for Application

While configuring tiers, you can indicate which tiers will be regarded as front-end. This configuration is global for all applications. If any traffic is detected on a front-end tier and the traffic matches your application definition, you will see measurements for that tier and for that application on the Applications report. Note that a single application can have more than one front-end tier.

Application with Two Front-end Tiers

For example, there may be two front-end tiers that match your application. The first tier is based on transactions originating from Enterprise Synthetic agents, referred to as Synthetic. The second tier is based on sequence transactions, referred to as RUM sequence transactions. For more information, see Configuring Tiers on CAS in the Data Center Real User Monitoring Administration Guide and Multi-Tier Reporting [p. 98].

If traffic is detected on both front-end tiers for your application in the last monitoring interval:

- In the Application column, you will see two rows with your application name, because there is data for both front-end tiers.
- In the Tier column, you will see both front-end tiers, Synthetic and RUM sequence transactions, next to your application name displayed in the Application column.

Figure 29. Application with Two Front-end Tiers

Applications Report

The Applications report presents all applications for which data was detected on front-end tiers. By default, the report shows data for the current day with a one-hour resolution.
The Applications (High Resolution) report shows this data with finer granularity (a resolution of one period).

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Applications. The report displays data if:

- You have configured at least one application on the CAS or your CAS receives information about applications from Enterprise Synthetic Agent Manager.

- The traffic is detected on a front-end tier and the tier matches your application definition.

NOTE: If the CAS is configured to receive data from Enterprise Synthetic Agent Manager, the Applications report automatically shows the applications originating from Enterprise Synthetic.

The high-resolution version of the report is accessed from the Application Performance Overview chart when you click a selected point on the chart. This report shows the same metrics as the Applications report, but the presented values reflect measurements for the selected time range with a resolution of one period.

Report Contents and Usage

This report helps you identify applications with performance or availability problems. Data is presented both on a chart and in a table. Start analyzing data by looking at the Application Performance Overview chart, which shows the performance of the top five applications (by number of unique users) over the selected period. If you notice a significant drop in application performance at a particular hour, click that point on the chart to see more details.

Figure 30. Application Performance Overview Chart

The tabular report lists the defined applications. Each row represents one application and shows metric values corresponding to the front-end tier for the application. You can see that tier name in the Tier column. The most important statistics include Performance, Unique and affected users (performance), and Errors that occurred over the analyzed period of time. In addition to numeric data, the tabular report includes status icons to indicate different levels of problem severity. If there were performance or availability problems with any of the listed applications, you will see a yellow, orange, or red status icon in the Performance, Unique and affected users, or Availability column.
In this case, click the application name to display a layered view of the application and see whether the problem lies in the network or in the data center. For more information, see Network Performance Calculations [p. 389]. The report has additional Tiers and Sites perspective tabs that enable fast switching to other global views of your monitored environment.

Drilldown Reports

You can access more detailed reports from the following columns:

Application

- Tiers report. Use this report to see the division of layers for a selected application and to locate and isolate the problem. For more information, see Tiers Report [p. 111].
- Transactions for Application report. Use this report to identify transactions that cause application performance problems. For more information, see Transactions for Application Report [p. 110].
- Sites report. Use this report to identify sites in which application performance problems occur and to see how the problems affect the users of a specific application. For more information, see Sites Report [p. 123].
- Metric Charts for Application Tiers report. For more information, see Metric Charts for Application Tiers in the Central Analysis Server Online Help.
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).
- RUM Browser - User Actions report. For more information, see RUM Browser - User Actions [p. 188].

Unique and affected users (performance)

- All Users report (for data center tiers). For more information, see All Users Report [p. 161].
- All Users for Client Tiers report (for client tiers). For more information, see All Users Report [p. 161].
- Application Performance Affected Users report (for data center tiers). For more information, see Application Performance Affected Users Report [p. 162].
- Application Performance Affected Users for Client Tiers report (for client tiers). For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report (for data center tiers). For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report (for data center tiers). For more information, see Availability Affected Users Report [p. 163].
- Availability Affected Users for Client Tiers report (for client tiers). For more information, see Availability Affected Users Report [p. 163].
Failures (total)

- Application responses report (for data center tiers). For more information, see Application Responses Report [p. 160].
- Synthetic Errors (Unavailable Transactions) for synthetic tiers. For more information, see Synthetic - Unavailable Transaction Summary [p. 141].
- Errors for Client Tiers report (for client tiers).

For more information, see Errors Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Transactions for Application Report

The Transactions for Application report lists all transactions that were monitored on front-end tiers for a selected application. By default, the report shows data for the current day with a one-hour resolution. The Transactions for Application (High Resolution) report shows this data with finer granularity (a resolution of one period).

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Applications, click an application name, and then click the Transactions tab. The high-resolution version of the report is accessed from the Transaction Performance Overview chart after you click a selected point on the chart.

Report Contents and Usage

The Transactions for Application report helps you identify transactions that cause application performance problems. It also shows the effect of poor performance on application users. Data is presented both on a chart and in a table. The Transaction Performance Overview chart shows the performance of various transactions over the selected period. If you notice a significant drop in transaction performance at a particular hour, click that point on the chart to see details for the selected period of time. The tabular report shows detailed statistics for each transaction that belongs to the selected application. In addition to numeric data, the tabular report uses cell background colors to indicate different levels of problem severity. If there were performance or availability problems with any of the listed transactions, you will see a yellow, orange, or red cell in the Performance or Availability column. The maximum set of provided statistics includes all the metrics available on the "Application, transaction, and tier data" data view.
For more information, see Application, transaction, and tier data [p. 290]. The Transactions for Application report has additional Tiers and Sites perspective tabs that enable fast switching to other global views of your monitored application.

Drilldown Reports

You can access more detailed reports from the following columns:

Transaction

- Tiers report. Use this report to see the division of layers for a selected application and to locate and isolate the problem. For more information, see Tiers Report [p. 111].
- Metric Charts for Application Tiers report.

For more information, see Metric Charts for Application Tiers in the Central Analysis Server Online Help.
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Failures (total)

- Application responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Tiers Report

The Tiers report presents a technologically oriented view of your monitored environment: the division of layers for a typical business application. By default, the report shows data for the current day with a one-hour resolution. The Tiers (High Resolution) report shows this data with finer granularity (a resolution of one period).

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Tiers. You can also display this report by clicking the Tiers tab on the Applications or Sites reports. The high-resolution version of the report is accessed from the End-User Performance Overview chart after you click a selected point on the chart.

Report Contents and Usage

Use the Tiers report if you have no applications defined. Start analyzing data by looking at the Real-User Performance Overview chart, which shows changes in the operation time for data center tiers regarded as front-end over the selected period. If you notice a significant rise in the operation time at a particular hour, click that point on the chart to see details.

Figure 31. Real-User Performance Overview Chart

The Client Tiers, Network Tiers, and Data Center Tiers tables show a logical division of tiers together with detailed statistics. If there were performance or availability problems with any of the listed tiers, you will see a yellow, orange, or red status icon in the Performance, Unique and affected users, Availability, or Connectivity column.
For more information, see Network Performance Calculations [p. 389].

The Tiers report has additional Applications and Sites perspective tabs that enable fast switching to other global views of your monitored environment.

Drilldown Reports

The following reports can be opened by clicking links in the report tables:

Client Tiers table

- Overview Application Status: for the Synthetic tier, from the Tier column. If the traffic is monitored by both Enterprise Synthetic Agents and Dynatrace Application Monitoring, you can also drill down to the Dynatrace Application Monitoring client.
- Sequence Transactions Log: for the RUM sequence transactions tier, from the Tier column. For more information, see Sequence Transactions Log Report [p. 113].
- Metric Charts for Client Tiers: from the Tier column. For more information, see Metric Charts for Client Tiers in the Central Analysis Server Online Help.
- All Users for Client Tiers, Application Performance Affected Users, and Availability Affected Users: from the Unique and affected users (performance) column. For more information, see All Users Report [p. 161].
- Errors: from the Failures (total) column. For more information, see Errors Report [p. 160].
- Synthetic Errors (Unavailable Transactions): from the Failures (total) column, for synthetic tiers. For more information, see Synthetic - Unavailable Transaction Summary [p. 141].

Network Tiers table

- Sites and Metric Charts: for the Client network and Network tiers, from the Tier column. For more information, see Sites Report [p. 123] and Metric Charts [p. 159].
- Optimized WAN Environment Performance Overview - Sites and Metric Charts for Client Optimized Network Tier: for the Client optimized network tier, from the Tier column. For more information, see Optimized WAN Environment Performance Overview - Sites Report [p. 119] and Metric Charts for Client Optimized Network Tier in the Central Analysis Server Online Help.
- Citrix Landing Page and Metric Charts: for the Citrix/WTS (presentation) tier, from the Tier column.
For more information, see Citrix Landing Page Report [p. 115] and Metric Charts [p. 159].

Data Center Tiers table

- Servers, Operations, and Metric Charts: from the Tier column.

For more information, see Servers Report [p. 151], Operations Report [p. 153], and Metric Charts [p. 159]. If the HTTP traffic is monitored both by DC RUM and Dynatrace Application Monitoring, you can drill down to the Dynatrace Application Monitoring client from the Tier and Failures (total) columns.

NOTE: Dynatrace Application Monitoring can show data for one system profile only. If a tier definition is based on software services that come from different Dynatrace Application Monitoring servers or belong to different system profiles on the same Dynatrace Application Monitoring server, Dynatrace Application Monitoring will show data for a randomly chosen system profile.

- All Users, Application Performance Affected Users, Network Performance Affected Users, and Availability Affected Users: from the Unique and affected users (performance) column. For more information, see All Users Report [p. 161].
- Application Responses: from the Failures (total) column. For more information, see Application Responses Report [p. 160].

Sequence Transactions Log Report

The Sequence Transactions Log report shows measurements for sequence transactions defined on the AMD. Use it to identify operation sequences that caused application performance or application availability problems.

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Tiers and click the RUM sequence transactions tier.

Report Contents and Usage

If any of your transactions has performance or availability problems, you will see a yellow, orange, or red status icon in the Application availability column or in the Application performance column. Move the pointer over a metric value to view the information in a tooltip. The full set of available statistics is provided on the "Sequence transaction data" data view. For more information, see Synthetic and sequence transaction data [p. 315].
Drilldown Reports

If you have set up CAS and ADS to work together, you can drill down from the Time column to the Sequence Transactions Log - Details report. Use it to see detailed information for the selected transaction. For more information, see Sequence Transactions Log - Details Report [p. 114].

Sequence Transactions Log - Details Report

The Sequence Transactions Log - Details report shows measurements for a selected transaction and time stamp. Use it to see the transaction start time, status, and time breakdown.

How to Access the Report

To access this report, click a metric value in the Time column on the Sequence Transactions Log report. For more information, see Sequence Transactions Log Report [p. 113].

Drilldown Reports

Click a time value in the Transaction begin time column to display the Transaction Load Sequence report. For more information, see Transaction Load Sequence Report [p. 114].

Transaction Load Sequence Report

Use the Transaction Load Sequence report to investigate problems with a particular transaction.

How to Access the Report

You can access the Transaction Load Sequence report by clicking the time value in the Transaction begin time column on the Sequence Transactions Log - Details report or by clicking a transaction name in the Transaction column on the User Activity: Sequence Transactions report. For more information, see Sequence Transactions Log - Details Report [p. 114] and User Activity Tabular Report [p. 164].

Report Contents and Usage

The Transaction Load Sequence report is used in conjunction with its parent report to diagnose particular problems. The first table is a summary of the previous report. It shows the most important information about the selected transaction, such as the user name, the transaction status, and the number of pages loaded. The chart presents time metrics for the selected transaction: Transaction begin time, Server time, and Network time. The table below the chart shows detailed information about operations that belong to the selected transaction.

Drilldown Reports

Click an operation in the Operation column to display the Operation Load Sequence report. For more information, see Operation Load Sequence Reports [p. 170].

Citrix/WTS (Presentation) Tier

A tier is a specific point where DC RUM collects performance data.
It is a logical application layer, a representation of a fragment of your monitored environment. If you use presentation servers (such as Citrix or Windows Terminal Services) and you have configured software services based on the ICA analyzer, the Citrix/WTS (presentation) tier will automatically be displayed in the Network Tiers section of the Tiers report.

The Citrix/WTS (presentation) tier belongs to the Network Tiers section because it is treated as an application delivery channel. For Citrix and Windows Terminal Services, the key metrics are network performance, server RTT, and server loss rate, not operations or operation time as for data center tiers. This tier presents traffic related to the ICA analyzer for real users only.

Drilldown Reports from Tier Name

Click the Citrix/WTS (presentation) tier name to drill down to the Citrix Landing Page report, which shows general, network-related metrics for ICA-based software services. For more information, see Citrix Landing Page Report [p. 115].

Citrix Landing Page Report

The Citrix Landing Page report lists all software services based on the ICA analyzer.

How to Access the Report

To access this report:

- Select Reports > RUM Analysis > Citrix Landing Page.
- Select Reports > RUM Analysis > Software Services and click the Citrix Landing Page tab.
- Select Reports > EUE Overview > Tiers and click the Citrix/WTS (presentation) tier.

NOTE: Always start any Citrix analysis from the Citrix Landing Page, not from the Software Services report. The general Application performance and Unique and affected users (performance) metrics on the Software Services report do not fit the Citrix transaction model. The Citrix Landing Page presents the KPIs that are specific to Citrix: Server realized bandwidth, Network performance, and Unique and affected users (network).

Report Contents and Usage

The Citrix Landing Page report is an access point from which you can analyze the usage and performance of the Citrix infrastructure when delivering applications to end users. Use the drilldowns available in the Citrix reports to perform fault domain isolation and answer questions about the performance of your environment.
The Citrix Landing Page report offers tabs for quick access to related reports:

- Software Services (the default tab for the Citrix Landing Page report)
- Servers. For more information, see Citrix Servers Report [p. 118].
- Published Applications

For more information, see Citrix Published Applications Report in the Data Center Real User Monitoring Citrix/Windows Terminal Services Monitoring User Guide.
- Channels. For more information, see Citrix Channels Report in the Data Center Real User Monitoring Citrix/Windows Terminal Services Monitoring User Guide.
- Sites. For more information, see Citrix Sites Report [p. 118].

Citrix measurements

The Citrix reports provide measurements in the following categories:

Performance

The AMD monitors the performance of each network session between end users in remote locations and the data center where the Citrix servers reside. These performance measurements reflect all key aspects of network connectivity required for smooth delivery of applications over Citrix.

- Network performance: network round-trip time and retransmission rate, shown in the Network performance column.
- Bandwidth: total bandwidth usage, broken down into client and server.
- Server realized bandwidth: effective network throughput experienced by the end users, shown in the Server realized bandwidth column. The realized bandwidth measurements are performed individually for each user and server. Realized bandwidth reflects the actual network throughput during delivery of screen updates to the end user and user action data to the server, and thus the end-to-end quality perceived by the users of a Citrix-delivered application. The reported realized bandwidth value triggers a state: 0-28 kbps, 28-128 kbps, or >128 kbps. These thresholds are based on the Citrix-provided rule-of-thumb network bandwidth requirements for smooth XenApp/XenDesktop application delivery: Citrix recommends that a network designed to deliver applications with ICA should offer at least 28 kbps of throughput, and 128 kbps is a safer assumption regarding ICA requirements.

Unique and affected users

The ICA decode recognizes Citrix login names, enabling presentation of the client-data center connection quality measurements in the context of the Citrix users.
The reports display the number of unique users, with a breakdown into how many were affected by network-related problems and how many were not. The user columns allow you to drill down to other user-related reports for detailed analysis.

Availability

The reports provide the number of TCP errors along with the availability ratio, which may indicate the cause of your server problems.
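The bandwidth-state thresholds and the availability ratio described above can be sketched as simple functions. This is an illustrative sketch only: the function names, state labels, and exact boundary handling are assumptions for clarity, not part of the DC RUM product or its API.

```python
def realized_bandwidth_state(kbps):
    """Classify realized bandwidth against the Citrix rule-of-thumb
    thresholds cited above (28 kbps minimum, 128 kbps as a safer value).
    State labels are illustrative, not product terminology."""
    if kbps <= 28:
        return "poor"       # below the 28 kbps minimum
    elif kbps <= 128:
        return "marginal"   # meets the minimum, below the safer 128 kbps
    else:
        return "good"       # above the safer 128 kbps assumption

def availability_ratio(sessions_total, tcp_errors):
    """Hypothetical availability ratio: the share of sessions that
    completed without TCP errors, as a percentage."""
    if sessions_total == 0:
        return None  # no traffic observed in the interval
    return 100.0 * (sessions_total - tcp_errors) / sessions_total

print(realized_bandwidth_state(64))   # marginal
print(availability_ratio(200, 5))     # 97.5
```

A real deployment would read these values from the report's data view rather than compute them; the sketch only makes the threshold and ratio logic concrete.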

Citrix Statistics

Additional information on the resource utilization statistics of a Citrix/WTS server is collected by the TCAM agent. The information includes the CPU and memory load of the server and the number of active terminal sessions.

Channels and published applications

Performance of the network link between the end user's display and the XenApp/XenDesktop server is analyzed individually for each published application and ICA channel. DC RUM reports on the traffic breakdown between interactive screen updates, audio/video media, print, and USB access, enabling fault domain isolation for performance problems perceived by the end users as "application slowness". Published Citrix application names are reported together with channel names, adding further precision to the fault domain isolation input data. For more information, see Citrix Channels Report in the Data Center Real User Monitoring Citrix/Windows Terminal Services Monitoring User Guide and Citrix Published Applications Report in the Data Center Real User Monitoring Citrix/Windows Terminal Services Monitoring User Guide.

Commands and command delivery time

For ICA channels, the AMD measures commands and, where possible, operations. The difference between a command and an operation is that a command is a single-direction transmission of data, while an operation has a request-response nature. The number of commands represents the number of ICA data transfers issued by the server or client. A command might reflect a request to update the client's screen (sent by the server to the client), a keyboard event (sent by the client to the server), or a piece of a document sent to print. However, commands do not have a server response time: they are sent by the ICA client or server, and the other party receives and processes them asynchronously. Therefore, the command delivery time for ICA channels represents the time taken to transmit a command to the other side of the network link, not the actual time to execute it there.
Request-response actions, such as screen launch or logon, can be measured as operations. In this case, the Citrix reports provide the operation-related measurements with a breakdown into slow and fast operations. For more information, see Citrix Channels Report in the Data Center Real User Monitoring Citrix/Windows Terminal Services Monitoring User Guide and Citrix Published Applications Report in the Data Center Real User Monitoring Citrix/Windows Terminal Services Monitoring User Guide.

Autodiscovery and Citrix

Until an autodiscovered software service is manually configured, it uses a generic decode, which for certain types of traffic (such as FTP) may make Application Performance values less precise. For Citrix traffic, you should refer to the Citrix Landing Page report.

Drilldown reports

Click a software service name to drill down to a report listing all servers for this software service. For more information, see Citrix Servers Report [p. 118]. You can also access the Metric Charts reports from the software service name column.

You can also drill down to user-related reports from the Unique and affected users (network) column. For more information, see All Users Report [p. 161], Network Performance Affected Users Report [p. 162], and Availability Affected Users Report [p. 163].

Citrix Servers Report

The Citrix Servers report shows either all presentation servers that belong to the Citrix/WTS (presentation) tier or all servers on which a specific software service is offered.

How to Access the Report

To access this report, click a software service name on the Citrix Landing Page report.

Report Contents and Usage

This report shows detailed measurements for each presentation server, so you can see the number of bytes sent by a specific server and its availability. The set of default metrics is similar to that of the Citrix Landing Page report [p. 115], but it is supplemented with the Command breakdown and Command delivery time.

Drilldown Reports

Click the server IP address to drill down to a report showing metric charts for this server. For more information, see Metric Charts [p. 159].

Citrix Sites Report

The Citrix Sites report lists all client sites in which ICA-based traffic was detected during the selected period of time.

How to Access the Report

To access this report, click the Citrix/WTS (presentation) tier on the Tiers report, and then click the Sites tab.

Report Contents and Usage

This report helps you identify sites in which performance problems occur and analyze how they affect the users. The set of default metrics in the Usage, Performance, and Availability sections of the table is the same as on the Citrix Landing Page report. For more information, see Citrix Landing Page Report [p. 115].

Drilldown Reports

You can access more detailed reports from the following columns:

Client site

- Metric Charts report. For more information, see Metric Charts [p. 159].

Unique and affected users (network)

- All Users report. For more information, see All Users Report [p. 161].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].

Client Optimized Network Tier

The Client optimized network tier allows for more precise analysis of the defined sites, links, software services, and individual operations within an optimized WAN environment. A typical user viewing the Tiers report can quickly assess the performance of the WAN optimization service. Drilling down in the optimized tier reveals the report displaying the defined sites. At this point, you can quickly ascertain whether the problem occurs with a given client site, potentially identifying an issue with a specific application, load distribution, or a misconfigured WAN Optimization Controller (WOC) at the branch end. You can also determine whether the problem affects all sites, which could indicate a problem with the local WOC or with an application common to all sites. Examining the Sites report, you can drill down to the best or the worst performing link within the client site and review the breakdown of the applications relating to that link. Once again, you can assess whether the problem relates to one specific application or to the entire link. A single application failure or performance downgrade could suggest a failed optimization policy for that service or indicate that the particular application is not suitable for optimization.

Optimized WAN Environment Performance Overview - Sites Report

The Optimized WAN Environment Performance Overview - Sites report is a summary of the monitored user-defined links with an emphasis on optimized WAN traffic.

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Tiers and click the Client optimized network tier name in the Network Tiers table.
Report Contents and Usage

This performance overview report enables you to quickly assess the performance of your WAN accelerated environment based on the user-defined links within that environment and, from a high-level perspective, to determine the health of your optimization service for each of the defined links. The report lists the user-defined links observed within the network tier that forms your WAN accelerated environment. It is presented in two tables:

Tier statistics

This table represents the overall health of your WAN accelerated environment, with specific metrics grouped into Usage, Performance, and Availability.

Each of the groups provides specific metrics:

- Usage is represented by the number of unique users detected in your WAN accelerated environment (Unique users), the number of all transmitted bits per second (Total bandwidth usage), and the percentage of bytes compared with a baseline (Load).
- Performance consists of the actual transfer rate of server data (Realized bandwidth). Reduction is expressed as a percentage, where a lower byte count on the WAN side means a higher reduction (Performance).
- Availability indicates the percentage of successfully sent packets (Connectivity) and the number of users that experienced connectivity problems (Affected users).

Sites

This table lists all user-defined links observed within your WAN accelerated environment. For this report to be a coherent list of only optimized links, at least one link must be defined for each site that is being optimized. For more information, see Adding Sites Manually in the Data Center Real User Monitoring Administration Guide. By default, the User Defined Link filter is applied to this report. The filter can be adjusted by refining the report and editing the Client site UDL dimension and condition within the DMI, but it is recommended that you retain the default settings.

NOTE: Because the default Client site UDL filter is enabled and configured as hidden, it will not appear in the filter list at the top of the Sites table.

For each of the defined sites, there are metrics such as Operations, which provides the number of operations; Operation time breakdown, which indicates how much time particular operations took; Application Delivery Channel Delay, which indicates the quality of the optimization for the particular site; and compression and optimized bytes, which are compared to the average of all user-defined sites (UDL).
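The reduction and connectivity percentages described above can be made concrete with a short sketch. The formulas and names below are assumptions inferred from the descriptions in this section ("a lower byte count on the WAN side means a higher reduction"; "the percentage of successfully sent packets"), not the product's exact calculations.

```python
def reduction_percent(lan_bytes, wan_bytes):
    """Illustrative WAN optimization reduction: how much smaller the
    optimized (WAN-side) byte count is than the unoptimized (LAN-side)
    count, as a percentage. Fewer WAN bytes means higher reduction."""
    if lan_bytes == 0:
        return 0.0  # nothing transferred, nothing to reduce
    return 100.0 * (lan_bytes - wan_bytes) / lan_bytes

def connectivity_percent(packets_sent, packets_lost):
    """Illustrative connectivity metric: percentage of packets that
    were sent successfully (i.e. not lost)."""
    if packets_sent == 0:
        return 100.0  # no packets sent, report no loss
    return 100.0 * (packets_sent - packets_lost) / packets_sent

print(reduction_percent(1_000_000, 250_000))  # 75.0
print(connectivity_percent(10_000, 40))       # 99.6
```

In the report itself these values are precomputed per link; the sketch only shows why a smaller WAN-side byte count yields a larger reduction figure.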
Drilldown Reports

Click a client site in the Sites table to drill down to a report listing all of the observed software services for this site. For more information, see Optimized WAN Environment Performance Overview - Software Services Report [p. 120].

Optimized WAN Environment Performance Overview - Software Services Report

The Optimized WAN Environment Performance Overview - Software Services report lists all of the software services that have been observed for the specific site.

How to Access the Report

You can access this report by drilling down from the Sites report. You can also access it without the Site filter, listing all of the software services, by clicking the All Software Services tab.

Report Contents and Usage

This report lists software services observed within your WAN accelerated environment. The report comprises two tables:

Optimized Software Services
For each of the observed software services, there are metrics such as Operations, which provides the number of operations; Operation time breakdown, which indicates how much time a particular operation took; Application Delivery Channel Delay, which indicates the quality of the optimization for the particular software service; and compression and optimized bytes, which are compared to the average of all listed software services.

Non-optimized Software Services
This table lists all of the software services that were observed within your WAN accelerated environment but were not optimized. Certain software services might be purposely omitted from the optimization service because of policy or an inability to optimize that particular service. Other omissions may indicate poor optimization or a problem with the WAN Optimization Controller (WOC) configuration. The list contains all cases of unoptimized software services.

Drilldown Reports

Click a software service to drill down to a report listing all observed links for this software service. For more information, see Optimized WAN Environment Performance Overview - Links Report [p. 121].

Optimized WAN Environment Performance Overview - Links Report

The Optimized WAN Environment Performance Overview - Links report is a summary of the monitored WAN links with an emphasis on optimized WAN traffic.

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Tiers, click the Client optimized network tier name in the Network Tiers table, and then click the All Links tab.
Report Contents and Usage

This performance overview report enables you to quickly assess the performance of your WAN accelerated environment based on links within that environment and, from a high-level perspective, to determine the health of your optimization service. This report lists observed links within the network tier that forms your WAN accelerated environment. The links are grouped into three tables:

WAN Accelerated Environment Performance
For each link, there are metrics such as Operations, which provides the number of operations; Operation time breakdown, which indicates how much time a particular operation took; Application Delivery Channel Delay, which indicates the quality of the optimization for the particular link; and compression and optimized bytes, which are compared to the average of all detected links.

Optimized links breakdown
This table presents observed client/server statistics for optimized traffic on each link. The breakdown displays the client and server bandwidth usage and the Total bytes transmitted by the client and server for the LAN and WAN.

Non-optimized links
This table lists the links on which unoptimized traffic was detected (links where Percent of optimized bytes is below 100%). Links for which Percent of optimized bytes in the WAN Accelerated Environment Performance table is below 100% are also listed in the Non-optimized links table, because only a certain percentage of the traffic observed on those links has been optimized. The Total bytes for such a link in the Optimized links breakdown table, compared to the Total bytes for the same link in the Non-optimized links table, gives the Percent of optimized bytes for that link listed in the WAN Accelerated Environment Performance table.

Drilldown Reports

Click a client site name to drill down to a report listing all of the observed software services for this link. For more information, see Optimized WAN Environment Performance Overview - Software Services Report [p. 120].

WAE - LAN and WAN Comparison Report

The Optimized WAN Environment Performance Overview - LAN and WAN Comparison report indicates how optimized software services on the LAN side compare to those on the WAN side.

How to Access the Report

This report can be accessed by clicking the LAN / WAN Comparison tab from any WAN Accelerated Environment Performance Overview report.

Report Contents and Usage

The side-by-side comparison of the software services provides a quick picture of how your WAN accelerated environment is optimizing each of the software services.
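The relationship between the three tables can be sketched numerically. The formula below (optimized bytes divided by the sum of optimized and non-optimized bytes on the link) is an inference from the description above, not a documented product formula, and the function name is hypothetical.

```python
def percent_optimized_bytes(optimized_total: int, non_optimized_total: int) -> float:
    """Combine Total bytes from the Optimized links breakdown table and
    the Non-optimized links table into the Percent of optimized bytes
    value shown in the WAN Accelerated Environment Performance table
    (assumed relationship)."""
    all_bytes = optimized_total + non_optimized_total
    if all_bytes == 0:
        return 0.0  # no traffic observed on the link
    return optimized_total / all_bytes * 100

# A link carrying 6 GB of optimized and 2 GB of non-optimized traffic
print(percent_optimized_bytes(6_000_000_000, 2_000_000_000))  # 75.0
```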
The table presents the total bytes on your WAN side, combining the bytes that have been passed through and the bytes that have been optimized; the total bytes on your LAN side (all of your network traffic that is destined for optimization but has not yet been optimized); and the compression and optimized bytes.

The compression percentage indicates how many bytes for the particular service have been compressed. If Total bytes compression equals 0%, all of the traffic for this software service has been recognized as pass-through, which indicates that the WOC is not configured properly for compressing this particular software service, that it does not support compression for this software service, or that this software service purposely has not been configured for optimization.

Percent of optimized bytes indicates how much of your network traffic has been optimized. For example, if Percent of optimized bytes is 25%, only a quarter of the network traffic generated by this software service and destined for optimization is being optimized. This can indicate that the optimization for this software service is poorly configured or that the WOC is overloaded.
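The two interpretation rules above (0% compression means pure pass-through; a low Percent of optimized bytes suggests misconfiguration or WOC overload) can be expressed as a small triage helper. This is only an illustrative sketch: the function name and the 50% threshold for "partially optimized" are assumptions, not product values.

```python
def triage_service(total_compression_pct: float, optimized_bytes_pct: float) -> str:
    """Rough triage of one software service on the LAN / WAN comparison
    report, following the interpretation rules described in the text."""
    if total_compression_pct == 0:
        # All traffic for the service was recognized as pass-through.
        return ("pass-through: WOC not configured to compress this service, "
                "compression unsupported, or service excluded on purpose")
    if optimized_bytes_pct < 50:  # illustrative threshold, not a product value
        return ("partially optimized: check the WOC configuration or "
                "look for WOC overload")
    return "optimized"

print(triage_service(0, 0))
print(triage_service(35.0, 25.0))  # only a quarter of the traffic optimized
print(triage_service(40.0, 95.0))
```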

The two graphs graphically represent the WAN / LAN table: visual comparisons of WAN bytes to LAN bytes and of bytes compression to optimized bytes for each of the software services listed in the table.

Drilldown Reports

Click a particular software service to drill down to a software services performance report that lists all of the recognized operations for this software service. For more information, see WAE - Software Services Performance Report [p. 123].

WAE - Software Services Performance Report

The Optimized WAN Environment Performance Details - Software Services Performance report breaks down the software service into individual operations, enabling you to determine which operations may be causing a service reduction or bottleneck within your software service. The report includes a list of all of the operations within the filtered software service and a chart reflecting the data in the table. The table displays:
- The operation URL (Operation)
- The sum of bytes for the given operation observed before the traffic is optimized by the WOC (Total bytes on LAN side)
- The time it took to complete the operation (Operation time)
- A breakdown of operation time into server, network, and redirection time (Operation time breakdown)
- The percentage of total traffic that did not experience problems (Network Performance)

Client Network and Network Tiers

If you have configured sites on the CAS, or if sites have been configured automatically, the Client network and Network tiers are automatically displayed on the Tiers report. By default, the Client network tier shows traffic for all sites except the All other site, and the Network tier shows traffic for the All other site only. If you suspect that the All other site may include client traffic, modify the default configuration of the Client network and Network tiers.
For more information, see Modifying a Network Tier in the Data Center Real User Monitoring Administration Guide.

Drilldown Reports from Tier Names

Click the Client network or Network tier name to drill down to the Sites report filtered for the selected tier. For more information, see Sites Report [p. 123].

Sites Report

The Sites report lists all client sites in which data was detected during the selected period of time. Use it to analyze performance problems starting from the client side. By default, the report shows data for the current day with a one-hour resolution. The Sites (High Resolution) report shows this data with finer granularity (a resolution of one period).

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Sites. The high-resolution version of the report is accessed from the Usage over Time chart after you click a selected bar on the chart.

Report Contents and Usage

This report helps you identify sites in which performance problems occur and analyze how the problems affect users. Depending on how you access this report, you will see either all client sites or only the sites that belong to a given area or tier. Data is presented both on a chart and in a table.

The Usage over Time chart shows the number of all transmitted bytes for defined sites over the selected period of time. By default, the resolution is set to one hour.

The tabular report shows detailed statistics for each site in which traffic was detected. Start analyzing data by looking at the Unique and affected users (performance) and Affected users (network) columns. If you see any users affected by performance problems in a specific site, click its name to view the list of applications.

In addition to providing numeric data, the tabular report uses status icons to indicate different levels of problem severity. If there were application or network performance problems in any of the listed sites, you will see a yellow, orange, or red status icon in the Application performance or Network performance column. For more information, see Network Performance Calculations [p. 389].

The Sites report has additional Applications and Tiers perspective tabs that enable fast switching to other global views of your monitored environment.

Drilldown Reports

You can access more detailed reports from the following columns:

Client site
- Applications for Site report. For more information, see Applications for Site Report [p. 125].
- Network Status - Software Services View report.
For more information, see Network Status - Software Services View Report in the Data Center Real User Monitoring Network Performance Monitoring User Guide.
- Metric Charts report. For more information, see Metric Charts [p. 159].

Unique and affected users (performance)
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report.

For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].
- Clients report

Affected users (network)
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].

Failures (total)
- Application responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Applications for Site Report

The Applications for Site report presents all applications for which data was detected on front-end tiers for a selected site. By default, the report shows data for the current day with a one-hour resolution. The Applications for Site (High Resolution) report shows this data with finer granularity (a resolution of one period).

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Sites and click a client site name. The high-resolution version of the report is accessed from the Application Performance Overview chart when you click a selected point on the chart. This report shows the same metrics as the Applications report, but the presented values reflect measurements for the selected time range with a resolution of one period.

Report Contents and Usage

The Applications for Site report helps you identify which applications cause performance or availability problems for users from a particular site. Data is presented both on a chart and in a table. Start analyzing data by looking at the Application Performance Overview chart, which shows the performance of the top five applications (according to the number of unique users) over the selected period.
If you notice a significant drop in application performance at a particular hour, click that point on the chart to see more details.

The tabular report lists applications for a selected site. Each row represents one application and shows metric values corresponding to the front-end tier for the application. You can see that tier name in the Tier column. The most important statistics include Performance, Unique and affected users (performance), and TCP errors that occurred over the analyzed period of time.

In addition to numeric data, the tabular report includes status icons to indicate different levels of problem severity. If there were performance or availability problems with any of the listed applications, you will see a yellow, orange, or red status icon in the Performance, Unique and affected users, or Availability column. In this case, click the application name to display a layered view of the application and see whether the problem is in the network or the data center. For more information, see Network Performance Calculations [p. 389].

Drilldown Reports

You can access more detailed reports from the following columns:

Application
- Transactions for Site report. For more information, see Transactions for Site Report [p. 126].
- Metric Charts for Applications or Transactions for Site report. For more information, see Metric Charts [p. 159].

Unique and affected users (performance)
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report. For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].
- Clients report

TCP errors
- Application responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).
Transactions for Site Report

The Transactions for Site report presents all transactions for which data was detected on front-end tiers for a selected site. By default, the report shows data for the current day with a one-hour resolution. The Transactions for Site (High Resolution) report shows this data with finer granularity (a resolution of one period).

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Sites, click a client site name, and then click an application name. The high-resolution version of the report is accessed from the Transaction Performance Overview chart after you click a selected point on the chart.

Report Contents and Usage

The Transactions for Site report helps you identify transactions that cause application performance problems and analyze how they affect users. Data is presented both on a chart and in a table. The Transaction Performance Overview chart shows the performance of various transactions over the selected period. If you notice a significant drop in transaction performance at a particular hour, click that point on the chart to see details for the selected period of time.

The tabular report shows detailed statistics for each transaction that belongs to the selected application. In addition to numeric data, the tabular report uses cell background colors to indicate different levels of problem severity. If there were performance or availability problems with any of the listed transactions, you will see a yellow, orange, or red cell in the Performance or Availability column. The maximum set of provided statistics includes all the metrics available in the Software service, operation, and site data data view. For more information, see Software service, operation, and site data [p. 198].

Drilldown Reports

You can access more detailed reports from the following columns:

Unique and affected users (performance)
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report.
For more information, see Application Performance Affected Users Report [p. 162]. Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162]. Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163]. Clients report Failures (total) 127

- Application responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Areas Report

The Areas report shows all areas for which data was detected on data center tiers. By default, the report shows data for the current day with a one-hour resolution. The Areas (High Resolution) report shows this data with finer granularity (a resolution of one period).

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Sites and click the Areas tab. The high-resolution version of the report is accessed from the Usage over Time chart after you click a selected bar on the chart.

Report Contents and Usage

The Areas report helps you identify areas in which performance problems occur and analyze how they affect users. Depending on how you access this report, you will see either all client areas or only the client areas that belong to a given region. Data is presented both on a chart and in a table.

The Usage over Time chart shows the number of all transmitted bytes for defined areas over the selected period of time. By default, the resolution is set to one hour. The tabular report shows detailed statistics for each area in which traffic was detected. In addition to providing numeric data, the tabular report uses status icons to indicate different levels of problem severity. If there were application or network performance problems in any of the listed areas, you will see a yellow, orange, or red status icon in the Application performance or Network performance column. For more information, see Network Performance Calculations [p. 389].

Drilldown Reports

You can access more detailed reports from the following columns:

Client area
- Sites report. For more information, see Sites Report [p. 123].
- Network Status - Software Services View report.
For more information, see Network Status - Software Services View Report in the Data Center Real User Monitoring Network Performance Monitoring User Guide.
- Metric Charts report. For more information, see Metric Charts [p. 159].

Unique and affected users (performance)

Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report. For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].
- Clients report

Affected users (network)
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].

Failures (total)
- Application responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Regions Report

The Regions report shows all regions for which data was detected on data center tiers. By default, the report shows data for the current day with a one-hour resolution. The Regions (High Resolution) report shows this data with finer granularity (a resolution of one period).

How to Access the Report

From the CAS top menu, choose Reports > EUE Overview > Sites and click the Regions tab. The high-resolution version of the report is accessed from the Usage over Time chart after you click a selected bar on the chart.

Report Contents and Usage

The Regions report helps you identify regions in which performance problems occur and analyze how they affect users. Data is presented both on a chart and in a table. The Usage over Time chart shows the number of all transmitted bytes for defined regions over the selected period of time. By default, the resolution is set to one hour.
The tabular report shows detailed statistics for each region in which traffic was detected. In addition to providing numeric data, the tabular report uses status icons to indicate different levels of problem severity. If there were application or network performance problems in any of the listed regions, you will see a yellow, orange, or red status icon in the Application performance or Network performance column. For more information, see Network Performance Calculations [p. 389].

Drilldown Reports

You can access more detailed reports from the following columns:

Client region
- Areas report. For more information, see Areas Report [p. 128].
- Network Status - Software Services View report. For more information, see Network Status - Software Services View Report in the Data Center Real User Monitoring Network Performance Monitoring User Guide.
- Metric Charts report. For more information, see Metric Charts [p. 159].

Unique and affected users (performance)
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report. For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].
- Clients report

Affected users (network)
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].

Failures (total)
- Application responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Synthetic Overview Reports

Synthetic Overview reports present data from Synthetic Monitoring Backbone or Enterprise Synthetic. They enable you to monitor the performance of applications, transactions, and sites.

To access the Synthetic Overview reports, choose Reports > Synthetic Overview from the CAS top menu. The menu item is visible only if you have configured at least one Enterprise Synthetic Agent Manager or Dynatrace Performance Network account as a data source for the CAS. For more information, see Managing Devices and Configuring the DPN Connection in RUM Console in the Data Center Real User Monitoring Administration Guide. For more information on Enterprise Synthetic or Dynatrace Performance Network configuration, refer to the Enterprise Synthetic and Dynatrace Performance Network product documentation.

If you configured both Enterprise Synthetic and the DPN as data sources for the CAS, the overview for both is presented in one window. Click the Enterprise Synthetic tab to access the reports based on the data retrieved from the Enterprise Synthetic Agent Manager, and the Synthetic Backbone tab to access the reports based on the data retrieved from the DPN account. If you configured just one synthetic data source, clicking the Synthetic Overview menu item places you directly in the dedicated report.

Enterprise Synthetic Reports

Enterprise Synthetic reports present data retrieved from the Enterprise Synthetic Agent Manager. To enable and populate these reports with data, an Enterprise Synthetic Agent Manager must be configured as a CAS data source. The Enterprise Synthetic matrix is composed of the following reports. Accessing the reports using links and drilldowns in the report columns changes the context of the presented data. Refer to the individual report topics for details.
Synthetic - Applications

The Synthetic - Applications report provides an overview of an application and a site perspective of the transactions executed by the Enterprise Synthetic agents. Using the links and drilldowns in the report columns, you can investigate application performance, single site status, and transaction performance in the context of the selected site or application. For more information, see Synthetic - Applications [p. 134]. Available drilldowns:
- Synthetic - Application Performance
- Synthetic - Site Performance for Application
- Synthetic - Transaction List
- Synthetic - Single Site Status
- Synthetic - Slow Transactions
- Synthetic - Unavailable Transactions
- Synthetic - Unavailable Transaction Summary
- Synthetic - Metric Charts for Application
- dynatrace Client

Synthetic - Transactions

The Synthetic - Transactions report provides an overview of the performance of transactions executed by the Enterprise Synthetic agents. For more information, see Synthetic - Transactions [p. 135]. Available drilldowns:
- Synthetic - Transaction List
- Synthetic - Slow Transactions
- Synthetic - Unavailable Transactions
- Synthetic - Metric Charts for Transaction
- dynatrace Client

Synthetic - Application Performance

The Synthetic - Application Performance report shows the performance details for a particular application, including the average transaction time, the status of the application at each site, and the status of each transaction. For more information, see Synthetic - Application Performance [p. 136]. Available drilldowns:
- Synthetic - Transaction List
- Synthetic - Slow Transactions
- Synthetic - Unavailable Transactions
- Synthetic - Metric Charts for Application
- Synthetic - Metric Charts for Transaction
- dynatrace Client

Synthetic - Site Performance for Application

The Synthetic - Site Performance for Application report shows the performance details for an application-site pair, including the average transaction time, the status of the application at the site, and the status of each transaction. For more information, see Synthetic - Site Performance for Application [p. 137]. Available drilldowns:
- Synthetic - Transaction List
- Synthetic - Slow Transactions
- Synthetic - Unavailable Transactions
- Synthetic - Metric Charts for Transaction
- Synthetic - Metric Charts for Application
- dynatrace Client

Synthetic - Single Site Status

The Synthetic - Single Site Status report shows the performance details for a specific site, including the average transaction time, the status of the application at the site, the transactions summary, and the status of each transaction. Available drilldowns:
- Synthetic - Application Performance
- Synthetic - Transaction List
- Synthetic - Slow Transactions

- Synthetic - Unavailable Transactions
- Synthetic - Metric Charts for Transaction
- Synthetic - Metric Charts for Application
- dynatrace Client

Synthetic - Transaction List

The Synthetic - Transaction List report enables you to track the performance of transactions in the context of applications and sites, depending on the way you access the report. For more information, see Synthetic - Transaction List [p. 139]. Available drilldowns:
- Synthetic - Application Performance
- Synthetic - Metric Charts for Transaction
- dynatrace Client

Synthetic - Slow Transactions

The Synthetic - Slow Transactions report enables you to track the performance of slow transactions in the context of applications and sites, depending on the way you access the report. For more information, see Synthetic - Slow Transactions [p. 140]. Available drilldowns:
- Synthetic - Application Performance
- Synthetic - Metric Charts for Transaction
- dynatrace Client

Synthetic - Unavailable Transaction Summary

The Synthetic - Unavailable Transaction Summary report presents the summary of failed transactions in the context of a particular application. For more information, see Synthetic - Unavailable Transaction Summary [p. 141]. Available drilldowns:
- Synthetic - Unavailable Transactions

Synthetic - Unavailable Transactions

The Synthetic - Unavailable Transactions report enables you to track the performance of failed transactions in the context of applications and sites, depending on the way you access the report. For more information, see Synthetic - Unavailable Transactions [p. 140]. Available drilldowns:
- Synthetic - Application Performance
- Synthetic - Metric Charts for Transaction
- dynatrace Client

Synthetic - Metric Charts for Application

The Synthetic - Metric Charts for Application report shows key Enterprise Synthetic measurements related to application and transaction performance for a selected application. The report presents metric values in charts, which facilitates quick comparison and analysis of the selected data.
For more information, see Synthetic - Metric Charts for Application [p. 141].

Synthetic - Metric Charts for Transaction

The Synthetic - Metric Charts for Transaction report shows key Enterprise Synthetic measurements related to application and transaction performance for a selected transaction.

The report presents metric values in charts, which facilitates quick comparison and analysis of the selected data. For more information, see Synthetic - Metric Charts for Transaction [p. 141].

Synthetic - Applications

The Synthetic - Applications report provides an overview of an application and a site perspective of the transactions executed by the Enterprise Synthetic agents. Using the links and drilldowns in the report columns, you can investigate application performance, single site status, and transaction performance in the context of the selected site or application.

How to Access the Report

From the CAS top menu, choose Reports > Synthetic Overview, or click the Synthetic tier on the Tiers report. From the top of this report, you can access the Synthetic - Transactions report. For more information, see Synthetic - Transactions [p. 135].

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].

Applications

On the left side of the window, you can see two charts and a report that present the transaction performance and availability as seen from the application perspective. The performance and availability charts present the summary of changes in the performance and availability of the five weakest applications in the reported period. Click the points on the chart to access the Synthetic - Application Performance report and analyze the performance of the selected application.

The Applications report gives you insight into the performance and availability of all the applications, supplemented by the average transaction time and a breakdown into slow, fast, and failed transactions. Use the links in the Application column to investigate the performance of an individual application from the site and transaction perspective.
If the transaction monitoring data contains the Dynatrace Application Monitoring PurePath identifier, you can drill down to the Dynatrace Application Monitoring client. Click the transaction columns to access the Transaction List, Slow Transactions, and Unavailable Transactions reports to see the details of all transactions executed for a particular application in the reported period.

The following drilldown reports are available from the Applications table:
- Synthetic - Application Performance, from the Application column and the performance and availability charts.
- Synthetic - Site Performance for Application, from the Application column.
- Synthetic - Transaction List, from the Application and Transaction Requests - Total columns.
- Synthetic - Single Site Status, from the Application column.
- Synthetic - Slow Transactions, from the Transaction Requests - Slow column.

- Synthetic - Unavailable Transactions, from the Transaction Requests - Failed column.
- Synthetic - Unavailable Transaction Summary, from the Application column.
- Dynatrace Application Monitoring client, from the Application column.

Sites

On the right side of the window, two charts and a table present the transaction performance and availability as seen from the site perspective. The performance and availability charts summarize the changes in the performance and availability of the five weakest sites in the reported period. Click the points on the chart to access the Synthetic - Single Site Status report and analyze the performance of the selected site. The Sites table gives you insight into the performance and availability of all the sites, supplemented by the average transaction time and a breakdown into slow, fast, and failed transactions. Use the links in the Sites column to investigate the performance of the individual sites from the transaction perspective. If the transaction monitoring data contains the Dynatrace Application Monitoring PurePath identifier, you can drill down to the Dynatrace Application Monitoring client. Click the transaction columns to access the Transaction List, Slow Transactions, and Unavailable Transactions reports to see the details of all transactions executed from a particular site in the reported period.

The following drilldown reports are available from the Sites table:
- Synthetic - Single Site Status, from the Client Site column and the performance and availability charts.
- Synthetic - Transaction List, from the Client Site and Transaction Requests - Total columns.
- Synthetic - Slow Transactions, from the Client Site and Transaction Requests - Slow columns.
- Synthetic - Unavailable Transactions, from the Client Site and Transaction Requests - Failed columns.
- Dynatrace Application Monitoring client, from the Client Site column.
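The Applications and Sites tables above repeatedly break transaction requests down into fast, slow, and failed, and derive availability and performance percentages from those counts. As a rough illustration of that bookkeeping only (the field names, the 8-second threshold, and the exact formulas below are hypothetical examples, not the product's actual logic), the breakdown can be sketched as:

```python
from dataclasses import dataclass

@dataclass
class Execution:
    """One synthetic transaction execution (illustrative fields)."""
    duration_s: float   # measured transaction time
    failed: bool        # True if the transaction did not complete

def breakdown(executions, slow_threshold_s=8.0):
    """Bucket executions into fast/slow/failed and derive availability and
    performance percentages. The threshold is an arbitrary example value."""
    failed = sum(1 for e in executions if e.failed)
    slow = sum(1 for e in executions
               if not e.failed and e.duration_s > slow_threshold_s)
    total = len(executions)
    fast = total - failed - slow
    return {
        "fast": fast,
        "slow": slow,
        "failed": failed,
        # Availability: share of executions that completed at all.
        "availability_pct": 100.0 * (total - failed) / total if total else 0.0,
        # Performance: share of all executions that were fast.
        "performance_pct": 100.0 * fast / total if total else 0.0,
    }

runs = [Execution(2.1, False), Execution(9.5, False), Execution(3.0, True)]
stats = breakdown(runs)
```

In this sample, one execution is fast, one is slow, and one failed, so availability works out to roughly 66.7% and performance to roughly 33.3%.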
Synthetic - Transactions

The Synthetic - Transactions report provides an overview of the performance of transactions executed by the Enterprise Synthetic agents.

How to Access the Report

From the CAS top menu, choose Reports Synthetic Overview and click the Transactions tab. From the top of this report, you can access the Synthetic - Applications report. For more information, see Synthetic - Applications [p. 134].

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].

Report Contents and Usage

The performance and availability charts summarize the changes in the performance and availability of the five weakest transactions in the reported period. Click the points on the chart to access the Synthetic - Metric Charts report and analyze the performance of the selected transaction. The Transactions table gives you insight into the performance and availability of all the transactions, supplemented by the average transaction time and a breakdown into slow, fast, and failed transactions, as well as a transaction time breakdown into client time, client response time, server time, network time, and application processing time for each transaction. Use the links in the Transaction column to investigate the performance of individual transactions. If the transaction monitoring data contains the Dynatrace Application Monitoring PurePath identifier, you can drill down to the Dynatrace Application Monitoring client. Click the Transaction Requests columns to access the Transaction List, Slow Transactions, and Unavailable Transactions reports to see the details of all the executions of a particular transaction in the reported period.

Drilldown Reports

The following drilldowns are available from the Transactions report:
- Synthetic - Transaction List, from the Transaction and Transaction Requests - Total columns.
- Synthetic - Slow Transactions, from the Transaction and Transaction Requests - Slow columns.
- Synthetic - Unavailable Transactions, from the Transaction and Transaction Requests - Failed columns.
- Synthetic - Metric Charts for Transaction, from the Transaction column and the performance and availability charts.
- Dynatrace Application Monitoring client, from the Transaction column.
Synthetic - Application Performance

The Synthetic - Application Performance report shows the performance details for a particular application, including the average transaction time, the status of the application at each site, and the status of each transaction. Access this report from the Application column of the Synthetic - Applications, Synthetic - Single Site Status, Synthetic - Transaction List, Synthetic - Slow Transactions, and Synthetic - Unavailable Transactions reports, as well as from the performance and availability charts in the Synthetic - Applications report.

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].

The Transaction Time chart shows the average transaction time for the application over the reported period.

The Sites table provides detailed information about the sites where the application is running. Using the links in the Client Site column and the drilldowns in the Transaction Requests columns, you can investigate individual transactions performed from a particular site and application.

The Transactions table gives you insight into the performance and availability of all the transactions executed for the application, supplemented by the average transaction time and a breakdown into slow, fast, and failed transactions, as well as a transaction time breakdown into client time, client response time, server time, network time, and application processing time for each transaction. Use the links in the Transaction column and the drilldowns in the Transaction Requests columns to access the Transaction List, Slow Transactions, and Unavailable Transactions reports to see the details of all the executions of a particular transaction in the reported period. If the transaction monitoring data contains the Dynatrace Application Monitoring PurePath identifier, you can drill down to the Dynatrace Application Monitoring client from the Transaction column.

The Hourly Transaction Summary table gives you an hourly overview of the transaction request breakdown into fast, slow, and unavailable for all transactions in the reported period. Use the links in the Transaction column to access the Synthetic - Transaction List report to analyze all the executions of a particular transaction.

The following drilldown reports are available from the Synthetic - Application Performance report:
- Synthetic - Transaction List, from the Client Site, Transaction and Transaction Requests - Total columns.
- Synthetic - Slow Transactions, from the Transaction and Transaction Requests - Slow columns.
- Synthetic - Unavailable Transactions, from the Transaction and Transaction Requests - Failed columns.
- Synthetic - Metric Charts for Application, from the Client Site column.
- Synthetic - Metric Charts for Transaction, from the Transaction column.
- Dynatrace Application Monitoring client, from the Transaction column.

Synthetic - Site Performance for Application

The Synthetic - Site Performance for Application report shows the performance details for an application-site pair, including the average transaction time, the status of the application at the site, and the status of each transaction. Access this report from the Application column of the Synthetic - Applications report.

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].

The Transaction Time chart shows the average transaction time for the application-site pair over the reported period. The Sites table provides detailed information about the performance of the application on the selected site. Use the links and drilldowns in the Client Site columns to access the Transaction List, Slow Transactions, and Unavailable Transactions reports to see the details of the transactions executed for the application-site pair. If the transaction monitoring data contains

the Dynatrace Application Monitoring PurePath identifier, you can drill down to the Dynatrace Application Monitoring client from the Transaction column.

The Transactions table gives you insight into the performance and availability of all the transactions executed for the application-site pair, supplemented by the average transaction time and a breakdown into slow, fast, and failed transactions, as well as a transaction time breakdown into client time, client response time, server time, network time, and application processing time for each transaction. Use the links in the Transaction column and the drilldowns in the Transaction Requests columns to access the Transaction List, Slow Transactions, and Unavailable Transactions reports to see the details of all the executions of a particular transaction for the application-site pair in the reported period. If the transaction monitoring data contains the Dynatrace Application Monitoring PurePath identifier, you can drill down to the Dynatrace Application Monitoring client from the Transaction column.

The Hourly Transaction Summary tables give you an hourly overview of the transaction request breakdown into fast, slow, and unavailable for all transactions executed for the selected application-site pair in the reported period. Click the Client Site column to access the Synthetic - Transaction List report to analyze all the transactions executed on a particular site for the selected application.

The following drilldown reports are available from the Synthetic - Site Performance for Application report:
- Synthetic - Transaction List, from the Client Site, Transaction and Transaction Requests - Total columns.
- Synthetic - Slow Transactions, from the Client Site, Transaction and Transaction Requests - Slow columns.
- Synthetic - Unavailable Transactions, from the Client Site, Transaction and Transaction Requests - Failed columns.
- Synthetic - Metric Charts for Application, from the Client Site column.
- Synthetic - Metric Charts for Transaction, from the Transaction column.
- Dynatrace Application Monitoring client, from the Transaction column.

Synthetic - Single Site Status

The Synthetic - Single Site Status report shows the performance details for a specific site, including the average transaction time, the status of the application at the site, the transactions summary, and the status of each transaction. Access this report from the Client Site column of the Synthetic - Applications report.

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].

The Transaction Time chart shows the average transaction time for the site over the reported period. The Transaction Summary table shows the total number of transactions executed on the site, as well as a breakdown into slow, fast, and unavailable transactions. Click the numbers in the

table to access the Synthetic - Transaction List, Synthetic - Unavailable Transactions, and Synthetic - Slow Transactions reports filtered for the selected site.

The Applications table provides detailed information about the application performance on the selected site. Use the links in the Application column and the drilldowns in the Transaction Requests columns to access the Transaction List, Slow Transactions, and Unavailable Transactions reports to see the details of the transactions executed for a particular application on the selected site. Use the links in the Application column to access the Synthetic - Application Performance report for a particular application. If the transaction monitoring data contains the Dynatrace Application Monitoring PurePath identifier, you can drill down to the Dynatrace Application Monitoring client.

The Transactions table gives you insight into the performance and availability of all the transactions executed on the selected site, supplemented by the average transaction time and a breakdown into slow, fast, and failed transactions, as well as a transaction time breakdown into client time, client response time, server time, network time, and application processing time for each transaction. Use the links in the Transaction column and the drilldowns in the Transaction Requests columns to access the Transaction List, Slow Transactions, and Unavailable Transactions reports to see the details of all the executions of a particular transaction on the selected site in the reported period. If the transaction monitoring data contains the Dynatrace Application Monitoring PurePath identifier, you can drill down to the Dynatrace Application Monitoring client from the Transaction column.

The following drilldown reports are available from the Synthetic - Single Site Status report:
- Synthetic - Transaction List, from the Application, Transaction and Transaction Requests columns.
- Synthetic - Slow Transactions, from the Application, Transaction and Slow Transactions columns.
- Synthetic - Unavailable Transactions, from the Application, Transaction and Failed Transactions columns.
- Synthetic - Metric Charts for Application, from the Application column.
- Synthetic - Metric Charts for Transaction, from the Transaction column.
- Dynatrace Application Monitoring client, from the Transaction column.

Synthetic - Transaction List

The Synthetic - Transaction List report enables you to track the performance of transactions in the context of applications and sites, depending on the way you access the report. You can access the Synthetic - Transaction List report from the Transaction, Application, and Client Site columns of most Enterprise Synthetic reports.

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].

Depending on the way you access the report, it gives you a list of all transactions executed in the reported period in a particular context. For example, if you access the Synthetic - Transaction

List report from the Application column, the report will list all the transactions executed for a particular application. Click the number in the Application column of the Transaction Summary table to access the Synthetic - Application Performance report. Use the tabs at the top of the report window to access the corresponding Synthetic - Slow Transactions and Synthetic - Unavailable Transactions reports.

Synthetic - Unavailable Transactions

The Synthetic - Unavailable Transactions report enables you to track the performance of failed transactions in the context of applications and sites, depending on the way you access the report. You can access the Synthetic - Unavailable Transactions report from the Transaction, Application, and Client Site columns of most Enterprise Synthetic reports.

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].

Depending on the way you access the report, it gives you a list of all failed transactions executed in the reported period in a particular context. For example, if you access the Synthetic - Unavailable Transactions report from the Application column, the report lists all the failed transactions executed for a particular application. Click the links in the Application column of the Transaction Summary table to access the Synthetic - Application Performance report. Use the tabs at the top of the report window to access the corresponding Synthetic - Transaction List and Synthetic - Slow Transactions reports.

Synthetic - Slow Transactions

The Synthetic - Slow Transactions report enables you to track the performance of slow transactions in the context of applications and sites, depending on the way you access the report.
You can access the Synthetic - Slow Transactions report from the Transaction, Application, and Client Site columns of most Enterprise Synthetic reports.

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].

Depending on the way you access the report, it gives you a list of all transactions regarded as slow that were executed in the reported period in a particular context. For example, if you access the Synthetic - Slow Transactions report from the Application column, the report lists all the slow transactions executed for a particular application. Click the links in the Application column of the Transaction Summary table to access the Synthetic - Application Performance report. Use the tabs at the top of the report window to access the corresponding Synthetic - Transaction List and Synthetic - Unavailable Transactions reports.

Synthetic - Unavailable Transaction Summary

The Synthetic - Unavailable Transaction Summary report presents the summary of failed transactions in the context of a particular application. You can access the Synthetic - Unavailable Transaction Summary report from the Application column of the Synthetic - Applications report.

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].

The report shows a summary of all the unavailable transactions for a particular application. You can observe the number of failed transactions over time, as well as see the list of all unavailable transactions, and unavailable transactions filtered by agent and site. Click the links in the Unavailable Transactions columns to access the Synthetic - Unavailable Transactions report in the context of your choice.

Synthetic - Metric Charts for Transaction

The Synthetic - Metric Charts for Transaction report shows key Enterprise Synthetic measurements related to application and transaction performance for a selected transaction. The report presents metric values in charts, which facilitates quick comparison and analysis of the selected data. You can access the report from the Transaction column of the Synthetic - Transactions, Synthetic - Site Performance for Application, Synthetic - Single Site Status, and Synthetic - Application Performance reports, as well as from the performance and availability charts of the Synthetic - Transactions report.

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].
The report contains graphical representations of the following Enterprise Synthetic measurements, calculated in the context of the selected transaction and application: Transaction Time, Transaction Requests, Transaction Availability by Site, Transaction Performance by Site, Transaction Time by Site, and Transaction Requests by Site, as well as breakdowns into client, network, and server time and a breakdown into fast, slow, and failed transactions.

Synthetic - Metric Charts for Application

The Synthetic - Metric Charts for Application report shows key Enterprise Synthetic measurements related to application and transaction performance for a selected application. The report presents metric values in charts, which facilitates quick comparison and analysis of the selected data. You can access the report from the Application column of the Synthetic - Applications, Synthetic - Single Site Status, and Synthetic - Application Performance reports and the Client Site column of the Synthetic - Site Performance for Application report, as well as from the performance and availability charts of the Synthetic - Applications report.

Enterprise Synthetic reports are based on DMI. These reports are based on metrics and dimensions available in the Synthetic and sequence transaction data data view. For more information, see Synthetic and sequence transaction data [p. 315].

The report contains graphical representations of the following Enterprise Synthetic measurements, calculated in the context of the selected application: Application Availability, Application Performance, Transaction Requests, Transaction Time, Transaction Availability, and Transaction Performance, as well as breakdowns into client, network, and server time and a transaction request breakdown into fast, slow, and failed transactions.

Synthetic Backbone Reports

Synthetic Backbone Reports present data retrieved from Dynatrace Performance Network; at least one DPN account must be configured as the CAS data source. The Synthetic Backbone Reports matrix is composed of the following reports. Accessing the reports using links and drilldowns in the report columns changes the context of the presented data. Refer to the individual report topics for details.

Synthetic Backbone Overview
The Synthetic Backbone Overview report provides a general overview of the performance of tests and backbone nodes available for the selected DPN account. For more information, see Synthetic Backbone Overview [p. 142].

Synthetic Backbone Tests
The report presents the details of the performance of tests executed for the selected DPN account. For more information, see Synthetic Backbone Tests [p. 143].

Synthetic Backbone Pages
The Synthetic Backbone Pages report presents the details of the performance of pages that were accessed as a result of executing tests for the selected DPN account. For more information, see Synthetic Backbone Pages [p. 145].

Synthetic Backbone Nodes
The Synthetic Backbone Nodes report presents the details of the performance of nodes used to execute tests for the selected DPN account.
For more information, see Synthetic Backbone Nodes [p. 146].

Synthetic Backbone Tests with Errors
The Synthetic Backbone Tests with Errors report presents the list of tests, executed for a selected DPN account, in which errors occurred. For more information, see Synthetic Backbone Tests with Errors [p. 144].

Synthetic Backbone Overview

The Synthetic Backbone Overview report provides a general overview of the performance of tests and backbone nodes available for the selected DPN account.

How to Access the Report

From the CAS top menu, choose Reports Synthetic Overview.

Report Contents and Usage

The report is an entry point for the analysis of tests and backbone nodes available for the selected DPN account. To change the account, use the Account selection table at the top of the report. Data is presented on both charts and tables. The basic metrics help you analyze the general performance of tests and nodes: Test availability, the breakdown of successful and failing pages, and average response time. The tests and backbone charts and tables give you direct access to deeper analysis of individual tests and backbones, as well as the most problematic pages in the tests. However, if you are particularly interested in a general overview of performance presented from the perspective of tests, pages, or backbones, you can use the tabs at the top of the report to see the full data scope for a selected entity without narrowing the view to selected points.

Drilldown Reports

You can access more detailed reports from the following columns:

Test name
- Synthetic Backbone Tests - Test overview, to analyze the performance of a selected test.
- Synthetic Backbone Pages - Pages for test, to analyze the performance of pages tested within a selected test.
- Synthetic Backbone Nodes - Backbone nodes for test, to analyze the performance of nodes available for a selected test.

Pages breakdown
- Synthetic Backbone Tests with errors, to see the list of errored tests.

Backbone node
- Synthetic Backbone Nodes - Nodes overview, to analyze the performance of a selected node.

Changing the Default Report Layout

To customize this report, click Edit report in the Actions list. The maximum set of provided statistics includes all the dimensions and metrics available on the Synthetic Monitoring test data and Synthetic Monitoring page data views. For more information, see Synthetic Monitoring test data [p. 361] and Synthetic Monitoring page data [p. 356].
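All of these reports draw on DMI data views, which are essentially tabular: dimensions (node, test, page) plus metrics per row. As a hedged sketch of working with such data outside the report UI (the CSV layout and column names below are invented for illustration and do not necessarily match the actual data-view schema or any export format of the product), aggregating exported test data by backbone node might look like:

```python
import csv
import io
from collections import defaultdict

# Hypothetical CSV export of the "Synthetic Monitoring test data" view;
# a real export would have product-defined column names.
EXPORT = """node,test,availability_pct,response_time_s
Chicago,Login,100.0,1.2
Chicago,Search,50.0,4.8
Dublin,Login,100.0,0.9
"""

def availability_by_node(csv_text):
    """Average the availability metric per backbone node."""
    sums = defaultdict(lambda: [0.0, 0])  # node -> [total, count]
    for row in csv.DictReader(io.StringIO(csv_text)):
        acc = sums[row["node"]]
        acc[0] += float(row["availability_pct"])
        acc[1] += 1
    return {node: total / count for node, (total, count) in sums.items()}

by_node = availability_by_node(EXPORT)
```

With the sample rows above, Chicago averages 75% availability across its two tests and Dublin 100%, the same kind of per-node summary the Synthetic Backbone Overview report charts.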
Synthetic Backbone Tests

The report presents the details of the performance of tests executed for the selected DPN account.

How to Access the Report

To access the Synthetic Backbone Tests report in the full scope available for the selected account, click the Synthetic Backbone tests tab in the Synthetic Backbone Overview report. You can also access the report by drilling down from the Test column of any Synthetic Backbone report or from the Backbone node column of the Synthetic Backbone Nodes report, to analyze the page and node

performance in the context of the selected test. You can drill down from the Backbone node column of the Synthetic Backbone Nodes report to analyze the performance of tests filtered for a selected node. Access the high-resolution version of the report from the Tests Response Time with Availability chart by clicking a selected point on the chart. This report shows the same metrics as the Synthetic Backbone Tests report, but it presents values that reflect measurements for the selected time range with a resolution of one period.

Report Contents and Usage

The report is an entry point for analysis from the perspective of tests. Use the drilldowns in the report columns to analyze the performance of an individual test or the pages tested within a single test execution.

Drilldown Reports

You can access more detailed reports from the following columns:

Test
- Synthetic Backbone Tests - Test Overview, to see the detailed performance of all executions of the selected test. Depending on the level, the report shows data for all available nodes or a selected one.
- Synthetic Backbone Pages - Pages for Test, to see the detailed performance of pages accessed in a selected test.

Changing the Default Report Layout

To customize this report, click Edit report in the Actions list. The maximum set of provided statistics includes all the dimensions and metrics available on the Synthetic Monitoring test data view. For more information, see Synthetic Monitoring test data [p. 361].

Synthetic Backbone Tests with Errors

The Synthetic Backbone Tests with Errors report presents the list of tests, executed for a selected DPN account, in which errors occurred.

How to Access the Report

To access the Synthetic Backbone Tests with Errors report, click the number of pages in the Pages breakdown column of the Synthetic Backbone Overview report.

Report Contents and Usage

The report lists all executions of a selected test.
From the report, you can drill down deeper into the page level to see the performance of individual pages tested within the test.

Drilldown Reports

You can access more detailed reports from the following columns:

Test time
- Synthetic Backbone Pages - Pages for This Test Execution, to see which pages caused errors in a particular test execution, identified by its time stamp.

Changing the Default Report Layout

To customize this report, click Edit report in the Actions list. The maximum set of provided statistics includes all the dimensions and metrics available on the Synthetic Monitoring test data view. For more information, see Synthetic Monitoring test data [p. 361].

Synthetic Backbone Pages

The Synthetic Backbone Pages report presents the details of the performance of pages that were accessed as a result of executing tests for the selected DPN account.

How to Access the Report

To access the Synthetic Backbone Pages report in the full scope available for the selected account, click the Synthetic Backbone Pages tab in the Synthetic Backbone Overview report. You can also access the report by drilling down from the Test column of any Synthetic Backbone report or the Backbone node column of the Synthetic Backbone Nodes report, to analyze the performance of pages in the context of the selected test. You can drill down from the Backbone node column of the Synthetic Backbone Nodes report to analyze the performance of pages filtered for a selected node.

Report Contents and Usage

The report is an entry point for analysis from the perspective of pages. Use the drilldowns in the report columns to analyze the performance of pages tested within a test execution, or individual pages using the Pages log table at the bottom of the report.

Drilldown Reports

You can access more detailed reports from the following columns:

Test
- Synthetic Backbone pages - Page List for Test, to see the detailed performance of pages accessed as a result of executions of the selected test. Depending on the level, the report shows data for all available nodes or a selected node.
Page
- Synthetic Backbone Pages - Page List for Selected Page, to see the detailed performance of the selected page within a selected test.

Changing the Default Report Layout

To customize this report, click Edit report in the Actions list. The maximum set of provided statistics includes all the dimensions and metrics available on the Synthetic Monitoring page data view. For more information, see Synthetic Monitoring page data [p. 356].

Synthetic Backbone Nodes

The Synthetic Backbone Nodes report presents the details of the performance of nodes used to execute tests for the selected DPN account.

How to Access the Report

To access the Synthetic Backbone Nodes report in the full scope available for the selected account, click the Synthetic Backbone nodes tab in the Synthetic Backbone Overview report. You can also access the report by drilling down from the Test and Backbone node columns of the Synthetic Backbone Overview report to filter the analysis for the selected test or node.

Report Contents and Usage

The report is an entry point for analysis from the perspective of backbone nodes. Use the drilldowns in the Backbone node column to analyze the performance of tests or pages in the context of the selected node.

Drilldown Reports

You can access more detailed reports from the following columns:

Backbone node
- Synthetic Backbone Tests - Tests for Node, to see the detailed performance of all test executions on a selected node.
- Synthetic Backbone Pages - Pages for Node, to see the detailed performance of pages accessed from a selected node.

Changing the Default Report Layout

To customize this report, click Edit report in the Actions list. The maximum set of provided statistics includes all the dimensions and metrics available on the Synthetic Monitoring test data view. For more information, see Synthetic Monitoring test data [p. 361].

RUM Analysis Reports

RUM Analysis Reports provide insight into the infrastructure and application performance throughout the tiers, sites, and software services. These reports show data collected by CAS from AMDs.

Data Center Analysis Report

The Data Center Analysis (DCA) report gives you insight into the performance of the hardware and software infrastructure that supports the application or its transactions throughout the data center tiers.
How to Access the Report

You should access the Data Center Analysis report via an overlay on the Application Health Status report or another contextual link on the Location Health or User Health reports. You can

also access this report by clicking a chart node within an overlay. In both of these scenarios, the DCA report is filtered by the application from which you entered. You can also load the DCA report directly from the AHS report. Alternatively, choose Reports > RUM Analysis > Data Center Analysis from the CAS top menu; in large environments, however, this can take a significant time to load because of the amount of data (the entire monitored data center) that is requested.

Infrastructure Performance

Use the Infrastructure Performance table to determine the following:

- The front-end problem domain (via four micro-chart tabs: Application Health, Operation Time, Availability, or Usage).
- The responsible tier (via statistics in the tiers table). You can also drill down to software services or servers.
- The timing aspect as it relates to the detected problem between the front-end tiers and other tiers. You can use multi-tier charts to observe how the problem propagates through your infrastructure.
- The time period when the problem is most visible. Use the time range and interval, and click the time column to focus on this interval.

Select an application from the Application list to change the troubleshooting perspective. Choose All Applications to evaluate the entire data center (keep in mind the performance consideration). You can also select a transaction and its steps (a group of operations across various tiers) from the Transaction/Step list. They support business tasks for an application. Steps have sequence numbers, so you can follow the order of operations for a task. They are sorted by the Business Impact metric, from worst to best. If you select an application and/or a transaction, it acts as a filter for all data.

Select one of the perspectives (Application Health, Operation Time, Availability, or Usage) in the Infrastructure Performance section to evaluate the performance aspect that is most interesting in the current context.
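The worst-first ordering of transaction steps described above can be illustrated with a short sketch. This is illustrative Python with hypothetical step names and field names, not DC RUM code: steps keep their sequence numbers so the task order can be followed, while the selection list is sorted by Business Impact.

```python
# Illustrative sketch (hypothetical data, not DC RUM code): transaction steps
# keep their sequence numbers, but the Transaction/Step list sorts them by the
# Business Impact metric, from worst to best.
steps = [
    {"seq": 1, "name": "Open cart", "business_impact": 12.0},
    {"seq": 2, "name": "Check out", "business_impact": 87.5},
    {"seq": 3, "name": "Confirm",   "business_impact": 40.2},
]

# Worst first: highest Business Impact at the top of the list.
worst_first = sorted(steps, key=lambda s: s["business_impact"], reverse=True)

print([s["name"] for s in worst_first])  # ['Check out', 'Confirm', 'Open cart']
```

The sequence numbers (`seq`) remain attached to each step, so the original order of operations is still recoverable after sorting.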
This changes the metrics displayed on the charts. When you click a row in the tiers table, it pre-selects the corresponding chart and filters the information in the Software Module Performance section. Hover over the data points on a chart for additional information on supporting metrics. Click a data point to focus the analysis on a specific time frame. The time information is published to the Tiers table (on the left) and to the Software Module Performance and User Performance tables (at the bottom of the page). The selected single tier is highlighted.

The chart content is based on the selection of entities in the tiers table. Up to 20 of the worst-performing tiers are selected by default. Each checked tier creates a separate chart. Each chart displays metrics based on the selected perspective, with a micro chart in the top-left corner of the screen. Use these to isolate faults by comparing and analyzing metrics over time. Click Expand to show more metrics for the tiers table and fewer charts. This is useful if you want to focus on a particular tier.

Click Options > Equalize Access to use common scaling on all charts for consistent and accurate analysis of comparable entities (for example, servers in a server farm). This scaling is based on the scale containing the worst data. To return to the original chart settings, click Options > Unequalize Access. Server names appear with this adjusted scaling by default.

Double-click in the tiers table to drill down to the dimensions configured for the tiers hierarchy. By default, you drill down to Software Services, then Servers. Click Hierarchy for a dimension list. These dimensions establish the drilldown order and starting point. Choose at least one.

Software Module Performance

The Software Module Performance table displays the software-based parts of the infrastructure, such as services, modules, tasks, and operations supporting an application on a given tier. The table typically displays information filtered by the infrastructure component (tier, software service, server) and time period, if they are selected on the chart. By default, the hierarchy of this table starts from the Operations level, but it can be configured otherwise. Use this table to find the most offending piece of software infrastructure that is responsible for performance, availability, or response problems isolated in the tiers and charts. From here you can drill down to further reports analyzing the root cause of the problem, such as charts with more metrics, response analysis, slow operation analysis, or the Dynatrace Application Monitoring client for server-side problems.

User Performance

The Software Module Performance table publishes to the User Performance table. The User Performance table displays the applications that contribute to operation issues and their impact on users. By default, the hierarchy starts from User name. For more information, see User Health Report [p. 171].
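The Equalize Access behavior described above can be sketched to show why common scaling matters. This is illustrative Python with hypothetical response-time data, not DC RUM code: one shared chart range is derived from the series containing the worst data and applied to every chart, so comparable entities can be read against the same axis.

```python
# Illustrative sketch (hypothetical data, not DC RUM code): "equalized" scaling
# picks one common Y-axis range based on the series containing the worst
# (largest) value, so servers in a farm can be compared accurately.

def equalize_scale(series_by_entity):
    """Return a common (min, max) Y-axis range covering every entity's data."""
    all_values = [v for series in series_by_entity.values() for v in series]
    return (min(all_values), max(all_values))

def per_entity_scales(series_by_entity):
    """Return the 'unequalized' per-chart ranges, one per entity."""
    return {name: (min(s), max(s)) for name, s in series_by_entity.items()}

# Hypothetical response-time series (seconds) for three servers in a farm:
servers = {
    "web-01": [0.2, 0.3, 0.4],
    "web-02": [0.2, 0.5, 0.3],
    "web-03": [1.1, 4.8, 2.0],  # the worst-performing server sets the scale
}

print(equalize_scale(servers))                # (0.2, 4.8), driven by web-03
print(per_entity_scales(servers)["web-01"])   # (0.2, 0.4)
```

With per-chart ranges, web-01 and web-03 would look similar; with the equalized range, web-03 clearly stands out as the problem server.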
Software Services Report

The Software Services report shows, in one table, all software services that are relevant in a given context. For a given time interval, data collected from AMDs or Enterprise Synthetic is presented here.

Accessing the Report

From the CAS top menu, choose Reports > RUM Analysis > Software Services. From the Application Health Status report, in the Network Performance tile, click the total number of software services or click a monitoring interval in the chart.

NOTE
Use the drilldown link from the AHS report to the Software Services report to verify the current software services configuration. Because the Software Services report may be slower to generate in this context, however, use Reports > RUM Analysis > Software Services when you are conducting fault domain isolation.

Report Contents and Usage

Use this report to identify services that have performance and availability problems. In addition to numeric data, the report uses status icons (red, orange, yellow, and green) to indicate different levels of problem severity for metrics such as Application performance and Unique and affected users (performance). This report displays traffic from real users only. Some metrics, such as Slow operations and Application performance, have tooltips that display threshold values and the percentage or number.

If there is a green check mark in the A (for Autodiscovered) column of the Software Services report, the Configure User-Defined Software Service option is available from the Software service column. The green check mark indicates that there was at least one server with autodiscovered traffic present for this software service. For more information, see Configuring a User-Defined Software Service on CAS [p. 53].

NOTE
Until an autodiscovered software service is manually configured, it uses a generic decode, which for certain types of traffic (such as FTP) may make Application performance values less precise. For autodiscovered traffic, you should configure a software service using the correct analyzer to improve Application performance values. For Citrix traffic, refer to the Citrix Landing Page report.

Use the additional tabs on the report for the following tasks:

- Switch to other reports.
- See all servers, operations, regions, areas, and sites when you access the report from the main CAS menu.
- See servers, operations, regions, areas, and sites for a given software service and tier when you access this report as a perspective from other reports with applied filters.

By default, the time range for the report is Today (the last calendar day for which data is available). To change the time range, select a different setting in the Time range list at the top of the report.
For more information, see Selecting Time Range and Resolution [p. 69].

Drilldown Reports

You can access more detailed reports from the following columns:

Software service
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- Servers report. For more information, see Servers Report [p. 151].
- Metric Charts report. For more information, see Metric Charts [p. 159].

- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).
- Application Responses report. For more information, see Application Responses Report [p. 160].
- Configure User-Defined Software Service. Available for autodiscovered software services only (a green check mark in the A column). For more information, see Configuring a User-Defined Software Service on CAS [p. 53].

Slow operations
- Slow Operation Cause Breakdown report (only if you have set up CAS and ADS to work together). For more information, see Slow Operation Cause Breakdown Report [p. 166].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Unique and affected users (performance)
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report. For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].
- Clients report

Failures (total)
- Application Responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Software Services Overview

Use the Software Services Overview report to get a quick overview of all software services, track trends, and quickly determine which autodiscovered software services need attention.

How to Access the Report

To display the report for one autodiscovered software service:

1. On the CAS, open the Application Health Status report. From the CAS top menu, choose Reports > Application Health Status.
2. In the Application Health Status report, in the Network Performance tile, click the total number of software services or click a monitoring interval in the chart.

Report Contents and Usage

The table lists all software services on this CAS and charts the Total bandwidth usage for the top 5 software services.

Drilldown Reports

You can access more detailed reports from the Software service column.

Chart

The graph shows statistics on the autodiscovered software services. Hover over the chart to see details for the selected period.

Servers Report

Depending on the way you access the Servers report, it shows detailed server information either non-filtered or filtered by one or a combination of entities such as an application, a tier, or a transaction.

How to Access the Report

Ways to open this report include:

- Open the Tiers report (Reports > EUE Overview > Tiers) and select Servers for Tier from the Tier column.
- Open the Software Services report (Reports > RUM Analysis > Software Services) and click the Servers tab.

Report Contents and Usage

The Servers report shows the infrastructure that supports your applications. It lists all servers belonging to a particular tier, together with their software service associations and performance metrics. From here, you can go further to see detailed charts or reports. Note that tooltips for chosen metric values show additional information, for example, the number of affected users or the percentage of aborts.

Drilldown Reports

You can access more detailed reports from the following columns:

Server name

- Operations report. For more information, see Operations Report [p. 153].
- Metric Charts report. For more information, see Metric Charts [p. 159].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

If there is a green check mark in the A (for Autodiscovered) column of the Servers report, the Configure User-Defined Software Service option is available from the Server name column. The green check mark indicates that there was autodiscovered traffic present for this server. If there is a check mark in the Configured column of the Servers report, this server/port combination has been configured for the given software service.

Slow operations
- Slow Operation Cause Breakdown report (only if you have set up CAS and ADS to work together). For more information, see Slow Operation Cause Breakdown Report [p. 166].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Unique and affected users (performance)
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report. For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].
- Clients report

Failures (total)
- Application Responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Operations Report

Depending on the way you access the Operations report, it shows detailed operation information either non-filtered or filtered by one or a combination of entities such as an application, a tier, or a server. The data is shown for operations provided by the AMD.

How to Access the Report

You can access this report as a perspective tab on the Servers, Software Services, Regions, Areas, and Sites reports. You can also access the report by clicking a server name on the Servers report or by clicking a tier name on a Tiers report filtered for a selected transaction.

Report Contents and Usage

The Operations report displays all operations for a given tier, either in the context of an application if you drilled down from the Applications report, or in the context of the tier only if you drilled down from the Tiers report. Depending on the analyzer, an operation can mean a URL, query, form (as in Oracle Forms), XML operation, and so on. The following is the full list of analyzers that provide data for this report:

- Cerner
- Cerner over MQ
- DRDA (DB2)
- Exchange over HTTP
- Exchange over HTTPS
- Exchange over RPC
- HTTP
- HTTP Express
- HTTPS
- IBM MQ
- Informix
- Jolt (Tuxedo)
- MySQL Database [TCP]
- Oracle
- Oracle Forms over HTTP
- Oracle Forms over HTTPS
- Oracle Forms over TCP
- SAP GUI
- SAP GUI over HTTP
- SAP GUI over HTTPS
- SAP RFC
- SOAP over HTTP
- SOAP over HTTPS
- TDS
- XML over HTTP
- XML over HTTPS
- XML over MQ
- XML over SSL
- XML

By clicking the Tasks, Modules, or Services tab, you can view the data from a broader perspective on additional reports that show the upper levels of the reporting hierarchy. A multi-level reporting hierarchy related to the monitored business processes enables problem identification in the context of tasks that end users are performing. For more information, see Multi-Level Hierarchy Reporting [p. 100].

Drilldown Reports

You can access more detailed reports from the following columns:

Operation
- Operation Breakdown by Server report.

For more information, see Breakdown by Server Report [p. 158].
- Metric Charts report. For more information, see Metric Charts [p. 159].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Slow operations
- Slow Operation Cause Breakdown report (only if you have set up CAS and ADS to work together). For more information, see Slow Operation Cause Breakdown Report [p. 166].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Unique and affected users (performance)
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report. For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].
- Clients report

Failures (total)
- Application Responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Tasks Report

For a given software service or all software services (depending on the context in which you access this report), the Tasks report shows you the list of all tasks. Aggregated data for operations, user count, and other metrics enables you to analyze the chosen aspect of the monitored application or environment. For example, in the case of web monitoring, you see all page names; in the case of SOAP monitoring, you see SOAP methods.

How to Access the Report

The report can be accessed after you click the Tasks tab on the Operations, Modules, or Services report. You can also get to this report after you drill up for each task from the Operations report, or after you drill down for each service from the Modules or Services report.

Report Contents and Usage

You can use the report to arrive at an interim view of all services, modules, or operations for a chosen segment of your monitored application or environment. From here you can go down or up the hierarchy. You can perform deeper analysis to the level of a single operation, or even a single hit, if your monitoring environment settings allow for that. You can ascend one level in the hierarchy to arrive at the top and investigate overall statistics.

Drilldown Reports

You can access more detailed reports from the following columns:

Task
- Operations report. For more information, see Operations Report [p. 153].
- Task Breakdown by Server report. For more information, see Breakdown by Server Report [p. 158].
- Metric Charts report. For more information, see Metric Charts [p. 159].

Slow operations
- Slow Operation Cause Breakdown report (only if you have set up CAS and ADS to work together). For more information, see Slow Operation Cause Breakdown Report [p. 166].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Unique and affected users (performance)
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report. For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].

- Clients report

Failures (total)
- Application Responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Modules Report

For a given software service or for all software services (depending on the context in which you access this report), the Modules report shows you the list of all modules. Aggregated data for operations, user count, and other metrics enables you to analyze the chosen aspect of the monitored application or environment. For example, in the case of SOAP monitoring, you will see all detected services.

How to Access the Report

The Modules report can be accessed after you click the Modules tab on the Operations or Services report. You can also get to this report after you drill down for each service from the Services report.

Report Contents and Usage

You can use this report to arrive at an interim view of all services or operations for a chosen segment of your monitored application or environment. From here you can go down or up the hierarchy. You can perform deeper analysis to the level of a single operation, or even a single hit, if your monitoring environment settings allow for that. You can ascend one level in the hierarchy to arrive at the top and investigate overall statistics.

Drilldown Reports

You can access more detailed reports from the following columns:

Module
- Tasks report. For more information, see Tasks Report [p. 154].
- Module Breakdown by Server report. For more information, see Breakdown by Server Report [p. 158].
- Metric Charts report. For more information, see Metric Charts [p. 159].

Slow operations
- Slow Operation Cause Breakdown report (only if you have set up CAS and ADS to work together). For more information, see Slow Operation Cause Breakdown Report [p. 166].

- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Unique and affected users (performance)
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report. For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].
- Clients report

Failures (total)
- Application Responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Services Report

The Services report presents the highest level of the three-level reporting hierarchy. It lists services for all software services or for a given software service. Aggregated data for operations, user count, and other metrics enables you to analyze the chosen aspect of the monitored application or environment.

How to Access the Report

The Services report can be accessed after you click the Services tab on the Modules, Tasks, or Operations report. To narrow down the results displayed on the Services report, you can access it by clicking the service name for a particular operation or module on the Operations or Modules report.

Report Contents and Usage

You can use the report to arrive at a global view of all services to compare and verify the number of operations (including slow operations), the number of users, or performance statistics for a whole segment of your monitored application or environment.
From here you can go down the hierarchy to the level of a single operation, or even a single hit, if your monitoring environment settings allow for that.
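The up-and-down movement through the reporting hierarchy described in these sections follows a fixed level ordering, which the drilldown columns confirm: Service down to Module, Module down to Task, Task down to Operation. A minimal sketch, illustrative only and not a product API:

```python
# Illustrative sketch (not a DC RUM API): the multi-level reporting hierarchy,
# from the most aggregated level down to the most detailed, with drill-down
# (one level deeper) and drill-up (one level back toward the top).
HIERARCHY = ["Service", "Module", "Task", "Operation"]

def drill_down(level):
    """Return the next, more detailed level, or None at the bottom."""
    i = HIERARCHY.index(level)
    return HIERARCHY[i + 1] if i + 1 < len(HIERARCHY) else None

def drill_up(level):
    """Return the previous, more aggregated level, or None at the top."""
    i = HIERARCHY.index(level)
    return HIERARCHY[i - 1] if i > 0 else None

print(drill_down("Service"))   # Module
print(drill_up("Operation"))   # Task
print(drill_up("Service"))     # None: Services is the top of the hierarchy
```

The `None` results at either end mirror the reports themselves: from the Services report you can only descend, and from a single operation (or hit) you can only ascend.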

Drilldown Reports

You can access more detailed reports from the following columns:

Service
- Modules report. For more information, see Modules Report [p. 156].
- Service Breakdown by Server report. For more information, see Breakdown by Server Report [p. 158].
- Metric Charts report. For more information, see Metric Charts [p. 159].

Slow operations
- Slow Operation Cause Breakdown report (only if you have set up CAS and ADS to work together). For more information, see Slow Operation Cause Breakdown Report [p. 166].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Unique and affected users (performance)
Click the icon for any row in this column to open a menu of available drilldowns from the selected row. Some options may not be available for all rows.
- All Users report. For more information, see All Users Report [p. 161].
- Application Performance Affected Users report. For more information, see Application Performance Affected Users Report [p. 162].
- Network Performance Affected Users report. For more information, see Network Performance Affected Users Report [p. 162].
- Availability Affected Users report. For more information, see Availability Affected Users Report [p. 163].
- Clients report

Failures (total)
- Application Responses report. For more information, see Application Responses Report [p. 160].
- Dynatrace Application Monitoring Client (only for HTTP traffic that is monitored both by DC RUM and Dynatrace Application Monitoring).

Breakdown by Server Report

The Breakdown by Server report lists all servers and software services for a selected operation, task, module, or service.

How to Access the Report

You can access the Breakdown by Server report by drilling down from the following columns:

- The Operation column of the Operations report
- The Task column of the Tasks report
- The Module column of the Modules report
- The Service column of the Services report

Report Contents and Usage

This report provides additional information to its parent report. While the parent report lists monitored operations, tasks, modules, or services with the most significant metrics, the Breakdown by Server report also shows the server IP addresses and software services to which the selected entity (for example, an operation) belongs.

Drilldown Reports

Depending on the report, different drilldown reports are available:

- Click an operation name in the Operation column on the Operation Breakdown by Server report to display the Metric Charts - Performance report. For more information, see Metric Charts [p. 159]. If operations are monitored by both DC RUM and Dynatrace Application Monitoring, you can drill down to the Dynatrace Application Monitoring Client from the Operation and Slow operations columns.
- Click a task name in the Task column on the Task Breakdown by Server report to display the Operations report. For more information, see Operations Report [p. 153].
- Click a module name in the Module column on the Module Breakdown by Server report to display the Tasks report. For more information, see Tasks Report [p. 154].
- Click a service name in the Service column on the Service Breakdown by Server report to display the Modules report. For more information, see Modules Report [p. 156].

Metric Charts

The Metric Charts reports show measurements for a selected entity, such as a software service or a tier. These reports present metric values on charts, which facilitates quick comparison and analysis of the selected data.
Depending on the context and analyzer type, the Metric Charts reports can consist of several sections: Performance, Network, Availability, Users, Custom Metrics, Browsers, Distributions, Usage (for VoIP traffic), and Citrix. Some of these sections are additionally divided into charts that present:

- The most significant metrics and their values for the current day (Metric Charts report)

- The most significant metric baselines and their values for the current day (Metric Baselines report)
- The most significant metrics with their baseline values for the current day (Metrics with Baselines report)

Note that a single chart may display more than one metric. For example, the Operations chart shows Slow operations, Fast operations, and Discarded operations.

Errors Report

The Errors report presents a complex view of the detected errors, categorized by type, with top error counts grouped by user, operation, or site.

How to Access the Report

The Errors report can be accessed by drilling down on the TCP errors or Failures (total) columns of the EUE Overview and Software Services reports.

Report Contents and Usage

Use this report to verify the number of errors in particular error categories (for example, TCP errors or HTTP errors). The graph helps you visually compare the occurrences of particular error types. The Top 10 sections enable you to identify the users, operations, or sites that experience the most errors. Hover over the metric counts in the Errors by Type section to see additional counts for individual error categories.

Drilldown Reports

For a given user, operation, or site, you can perform deeper analysis on the Error Details report by clicking metric values in the Application failures columns in all Top 10 sections. For more information, see Error Details Report [p. 160].

Error Details Report

The Error Details report shows the number of errors detected for a specific user, operation, or site. Use this report to verify which users experienced the largest number of errors and where the errors occurred, and to inspect the error type.

How to Access the Report

You can access this report by clicking the metric values in the Application failures columns in all Top 10 sections of the Errors report.
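The Top 10 grouping used by the Errors report amounts to counting errors per user, operation, or site and keeping the largest counts, alongside per-type totals for the Errors by Type section. A minimal sketch with hypothetical records and field names (not DC RUM code):

```python
from collections import Counter

# Illustrative sketch (hypothetical records, not DC RUM code): errors carry a
# type plus user/operation/site dimensions; the report shows per-type totals
# and "Top 10" counts grouped by a chosen dimension.
errors = [
    {"user": "jsmith", "operation": "/login",  "site": "Detroit", "type": "HTTP error"},
    {"user": "jsmith", "operation": "/login",  "site": "Detroit", "type": "TCP error"},
    {"user": "akowal", "operation": "/search", "site": "Gdansk",  "type": "HTTP error"},
]

def top_n(records, dimension, n=10):
    """Count errors per value of the chosen dimension; keep the n largest."""
    return Counter(r[dimension] for r in records).most_common(n)

def by_type(records):
    """Error counts per category, as in the Errors by Type section."""
    return Counter(r["type"] for r in records)

print(top_n(errors, "user"))   # [('jsmith', 2), ('akowal', 1)]
print(dict(by_type(errors)))   # {'HTTP error': 2, 'TCP error': 1}
```

The same `top_n` call with `"operation"` or `"site"` produces the other Top 10 groupings described above.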
Application Responses Report

The Application Responses report presents a complex view of the detected failures, categorized by type, with top failure counts grouped by user, operation, or site.

How to Access the Report

- On the EUE Overview report, drill down on the Failures (total) column.

- On the Software Services report, drill down on the Failures (total) column, or select Links > Application Responses from the Software service column. (Not available for autodiscovered software services.)

Report Contents and Usage

Use this report to verify the number of failures in particular failure categories (for example, failures (TCP) or failures (transport)). The graph helps you visually compare the occurrences of particular failure types. The Top 10 sections enable you to identify the users, operations, or sites that experience the most failures. Hover over the metric counts in the Failures (total) columns to see additional counts for individual failure categories.

Drilldown Reports

For a given user, operation, or site, you can perform deeper analysis on the Application Responses - Details report by clicking the values of the User Name, Operation, and Client site columns in the Top 10 sections. For more information, see Application Responses - Details [p. 161].

Application Responses - Details

The Application Responses - Details report shows the number of failures detected for a specific user, operation, or site. Use this report to verify which users experienced the largest number of failures and where the failures occurred, and to inspect the failure type.

How to Access the Report

You can access this report by clicking the values of the User Name, Operation, and Client site columns in the Top 10 sections of the Application Responses report.

All Users Report

The All Users report shows how metric values affect particular users. Use it to identify problems related to application or network performance.

How to Access the Report

You can access this report by drilling down from the Unique and affected users (performance) columns of CAS reports.

Report Contents and Usage

This report lists all users detected for the conditions imposed by the drilldown filters, with site and performance data.
To change the perspective for user statistics, click the tabs available on the report. You can switch to:

Application Performance Affected Users
For more information, see Application Performance Affected Users Report [p. 162].

Network Performance Affected Users
For more information, see Network Performance Affected Users Report [p. 162].

Availability Affected Users
For more information, see Availability Affected Users Report [p. 163].

Drilldown Reports

Click a user name in the User name column to display the User Activity report. For more information, see User Activity Tabular Report [p. 164]. If users are monitored by both DC RUM and Dynatrace Application Monitoring, you can also drill down to the Dynatrace Application Monitoring Client from the User name column. The drilldown link will not be available for user aggregates (Client from users).

In User activity details and server statistics on demand mode, the user name in the All Users report is an aggregate name for all the users from a particular site. Click the aggregate name to display the List of users (on demand) report for the site, with actual user names that can be used to drill down to the User Activity report. For more information, see List of Users On Demand Report [p. 166].

Application Performance Affected Users Report

The Application Performance Affected Users report displays detailed measurements for users affected by application performance problems. For all of the listed users, you can see a set of availability-related metrics. Click a user name to view a list of user operations and transactions (User Activity report), which may help you locate the source of application performance problems.

How to Access the Report

You can access this report by drilling down from the Unique and affected users (performance) columns of CAS reports. This report is available as a perspective tab on the All Users report.

Drilldown Reports

Click a user name in the User name column to display the User Activity report. For more information, see User Activity Tabular Report [p. 164].
If users are monitored by both DC RUM and Dynatrace Application Monitoring, you can also drill down to the Dynatrace Application Monitoring Client from the User name column. The drilldown link will not be available for user aggregates (Client from users).

In User activity details and server statistics on demand mode, the user name in the All Users report is an aggregate name for all the users from a particular site. Click the aggregate name to display the List of users (on demand) report for the site, with actual user names that can be used to drill down to the User Activity report. For more information, see List of Users On Demand Report [p. 166].

Network Performance Affected Users Report

The Network Performance Affected Users report displays detailed measurements for users affected by network performance problems. For all of the listed users, you can see a set of

performance-related metrics. Click a user name to view a list of user operations and transactions (User Activity report), which may help you locate the source of network performance problems.

How to Access the Report

You can access this report by drilling down from the Unique and affected users (performance) columns of CAS reports. This report is available as a perspective tab on the All Users report.

Drilldown Reports

Click a user name in the User name column to display the User Activity report. For more information, see User Activity Tabular Report [p. 164]. If users are monitored by both DC RUM and Dynatrace Application Monitoring, you can also drill down to the Dynatrace Application Monitoring Client from the User name column. The drilldown link will not be available for user aggregates (Client from users).

In User activity details and server statistics on demand mode, the user name in the All Users report is an aggregate name for all the users from a particular site. Click the aggregate name to display the List of users (on demand) report for the site, with actual user names that can be used to drill down to the User Activity report. For more information, see List of Users On Demand Report [p. 166].

Availability Affected Users Report

The Availability Affected Users report displays detailed measurements for users affected by availability problems. For all of the listed users, you can see a set of performance-related metrics. Click a user name to view a list of user operations and transactions (User Activity report), which may help you locate the source of availability problems.

How to Access the Report

You can access this report by drilling down from the Unique and affected users (performance) columns of CAS reports. This report is available as a perspective tab on the All Users report.

Drilldown Reports

Click a user name in the User name column to display the User Activity report.
For more information, see User Activity Tabular Report [p. 164]. If users are monitored by both DC RUM and Dynatrace Application Monitoring, you can also drill down to the Dynatrace Application Monitoring Client from the User name column. The drilldown link will not be available for user aggregates (Client from users).

In User activity details and server statistics on demand mode, the user name in the All Users report is an aggregate name for all the users from a particular site. Click the aggregate name to display the List of users (on demand) report for the site, with actual user names that can be used to drill down to the User Activity report. For more information, see List of Users On Demand Report [p. 166].

User Activity Report

The User Activity report presents statistics on user activity for a single user or client for a selected period of time. It shows operations and sequence transactions together with detailed measurements presented by different metrics.

How to Access the Report

User Activity reports are accessed by drilling down from the following reports:

EUE Overview reports
Click the number of users in the Unique and affected users (performance) column and click a user name. Note that the User Activity report is not available for the Synthetic front-end tier.

Top N View reports
Click the number of users in the Unique users column and click a user name.

Software Services reports
Click the number of users in the Unique and affected users (performance) column and click a user name.

Network Analysis reports
Click the number of users in the Unique users column and click a user name.

In User activity details and server statistics on demand mode, the user name in the All Users report is an aggregate name for all the users from a particular site. Click the aggregate name to display the List of users (on demand) report for the site, with actual user names that can be used to drill down to the User Activity report. For more information, see List of Users On Demand Report [p. 166].

User Activity Tabular Report

You can switch between two views of the User Activity tabular report: Operations and Sequence transactions. By default, if you drill down from the Software Services reports, the User Activity report shows operations for a selected user.

How to Access the Report

This report can be accessed as a drilldown report from other CAS reports. For more information, see User Activity Report [p. 164].

Report Contents and Usage

The User Activity report enables you to analyze single user operations and transactions. For example, you can display software services for which the number of slow operations is the highest.
You can also drill down to more detailed reports for a particular operation or for a particular time stamp. Depending on the selected view of the User Activity report, the maximum set of provided statistics includes either all the metrics available on the Software service, operation, and site data data view or all the metrics available on the Synthetic and sequence transaction data data view. For more information, see Software service, operation, and site data [p. 198] and Synthetic and sequence transaction data [p. 315].

Drilldown Reports from the User Activity Operations View

Drilldown reports show detailed statistics for a single operation. You can access the following reports by clicking links in the report table:

User Activity - Details
From the Time and Operation columns (only if you have set up CAS and ADS to work together). For more information, see User Activity - Details Report [p. 165].

User Activity for Client IP Address
From the Client IP address column. The User Activity for Client IP Address report shows the same statistics as the User Activity report, but filtered for a selected client IP address.

Slow Operation Cause Breakdown
From the Slow operations column (only if you have set up the CAS and ADS to work together). For more information, see Slow Operation Cause Breakdown Report [p. 166].

User Activity - User Diagnostics
From the Network time column. For more information, see User Activity - User Diagnostics Report [p. 165].

If users are monitored by both DC RUM and Dynatrace Application Monitoring, you can also drill down to the Dynatrace Application Monitoring client from the Client IP address and Slow operations columns.

Drilldown Reports from the User Activity Sequence Transactions View

To drill down to a detailed report for a selected transaction, click the transaction name in the Transaction column to open the Transaction Load Sequence report. For more information, see Transaction Load Sequence Report [p. 114].

User Activity - Details Report

The User Activity - Details report shows additional information about the selected time stamp or operation. Use it to view operation statistics, such as the operation begin time or operation time breakdown.

How to Access the Report

Access this report by clicking metric values in the Time and Operation columns on the User Activity report.
User Activity - User Diagnostics Report

The User Activity - User Diagnostics report lists software services for a selected user and for a particular period of time. Use it to analyze which software services are used at the same time and which of them cause any problems.

How to Access This Report

To access this report, click a metric value in the Network time column on the User Activity report.

Report Contents

The maximum set of provided statistics includes all the metrics available on the Software service, operation, and site data data view. For more information, see Software service, operation, and site data [p. 198].

Drilldown Reports

Click a software service name to drill down to the User Activity - User Diagnostics - Details Tabular report, filtered for the whole day for the selected software service. For more information, see User Activity Tabular Report [p. 164].

List of Users On Demand Report

The List of users (on demand) report is an intermediary step to access the User Activity report when the User activity details and server statistics on demand feature is enabled in the CAS.

How to Access the Report

The report is accessible only when CAS is working in User activity details and server statistics on demand mode. It can be accessed as a drilldown report from other CAS reports. For more information, see User Activity Report [p. 164].

In User activity details and server statistics on demand mode, the user name in the All Users report is an aggregate name for all the users from a particular site. Click the aggregate name to display the List of users (on demand) report for the site, with actual user names that can be used to drill down to the User Activity report.

User activity details and server statistics on demand mode imposes a limitation on filtering: when you drill down from the All Users report to the List of users (on demand) report, you lose the contextual application and tier filters.

These reports are also accessed by drilling down from the following reports:

EUE Overview reports
Click the number of users in the Unique and affected users (performance) column and then click a user name.

Software Services reports
Click the number of users in the Unique and affected users (performance) column and then click a user name.
Report Contents and Usage

Besides acting as an intermediary step to access the User Activity report, the report displays basic performance metrics for the aggregate users.

Slow Operation Cause Breakdown Report

The Slow Operation Cause Breakdown report shows slow operations for a software service or a group of software services, classified according to various reasons and plotted against time. It also shows the total number of slow operations and the percentage breakdown for various causes of slowness.
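The percentage breakdown shown on the report's pie chart is straightforward arithmetic over the per-cause slow-operation counts. As a hedged sketch (the cause names are taken from the report, but the counts below are hypothetical, not DC RUM output):

```python
# Hypothetical per-cause slow-operation counts, as shown in the report table.
slow_by_cause = {
    "Network": 120,
    "Data center": 45,
    "Client/3rd party": 20,
    "Application design": 10,
    "Multiple reasons": 5,
}

total_slow = sum(slow_by_cause.values())

# Percentage distribution, as rendered on the report's pie chart.
breakdown = {
    cause: round(100.0 * count / total_slow, 1)
    for cause, count in slow_by_cause.items()
}

print(total_slow)            # 200
print(breakdown["Network"])  # 60.0
```

Because the cause categories are non-intersecting sets, the percentages always sum to (approximately, after rounding) 100.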

How to Access the Report

To access this report, click a metric value in the Slow operations column on the Software Services report. For more information, see Software Services Report [p. 148].

Report Contents and Usage

The report shows performance and quality of service, expressed as the number of operations that were slow for various reasons. For more information, see Operation data in the Data Center Real User Monitoring Advanced Diagnostics Server User Guide.

The report is a starting point for investigating performance problems of software services. It enables you to narrow down areas that need to be investigated, such as network, servers, and application design. The information is presented in the following ways:

As a graph of slow operations against time.
As a pie chart giving the percentage distribution of slow operations for various reasons.
As a table giving the total number of operations and the total number of slow operations due to various reasons, all at the bottom of the report page.

Categories

The set of available categories for slowness depends on the analyzer. The analyzers supporting the Primary Reason for Slowness calculation have a wider range of metrics for Application Design. Refer to the table for details:

Analyzer                     CAS  ADS
HTTP                          x    x
HTTPS (SSL decrypted)         x    x
DRDA (DB2)                    x    x
Corba                         x    x
TDS                           x    x
Informix                      x    x
Oracle                        x    x
Oracle Forms over TCP         x    x
Oracle Forms over HTTP        x    x
Oracle Forms over HTTPS       x    x
SAP GUI                       x    x
MySQL Database [TCP]          x    x
SAP GUI over HTTP             x    x
SAP GUI over HTTPS            x    x

Analyzer SAP RFC Simple parser JBoss RMI Sun RMI Weblogic T3 RMI RPC XML over HTTP/HTTPS/TCP Cerner Cerner (RTMS) SOAP LDAP
CAS x x x x x
ADS x x x x x x x x x x x

Drilldown Reports

The following reports can be opened by clicking links in the report table:

Slow Operation Loads
From the Slow operations column.

Slow Operation Loads - Network (Request Time)
From the Network - Request time column.

Slow Operation Loads - Network (Network-Loss Rate)
From the Network - Loss rate column.

Slow Operation Loads - Network (Latency)
From the Network - latency column.

Slow Operation Loads - Network (Details Unknown)
From the Network - other reasons column.

Slow Operation Loads - Client Delays
From the Client/3rd party column.

Slow Operation Loads - Data Center
From the Data center column.

Slow Operation Loads - Application Design (Operation size)
From the Application design - Operation size column.

Slow Operation Loads - Multiple Reasons
From the Multiple reasons column.

For more information, see Slow Operation Loads Reports [p. 168].

Slow Operation Loads Reports

The Slow Operation Loads reports are intended to be used by IT personnel to research software service, server, or URL availability problems caused by a particular reason or experienced by a particular user. Some of these reports will be of interest to a number of different IT support groups. For example, slow operations apparently caused by data center problems may be caused by server overload or application design, such as an excessive number of SQL queries. Both server maintenance

staff and application designers may need to make use of the same Slow Operation Loads reports, though they may arrive at them from different points in the report hierarchy. Similarly, slow operations caused by client delays are problems related to delays on the client side (desktop delays). Such delays may or may not be caused by the way operations are designed, thus requiring extensive processing on the client side or causing client-side errors.

How to Access the Reports

You can access a Slow Operation Loads report by clicking a metric value in the Slow operations column on the Slow Operation Cause Breakdown report. For more information, see Slow Operation Cause Breakdown Report [p. 166]. Clicking a slow operation caused by a specific reason opens the Slow Operation Loads report for that reason, such as application design, data center, client delays, network, or multiple reasons.

What You Can Learn from These Reports

Depending on the context from which the reports were invoked, the reports show details about individual slow operations caused by different reasons or for different users. Statistics are provided for the following causes:

Application design
Network
Network (Latency)
Network (Request Time)
Network (Retransmissions)
Network (Details Unknown)
Data center delays
Client delays
Multiple reasons

NOTE: Slow operations caused by multiple reasons are slow operations related to undetermined reasons. Note that the various categories of errors form non-intersecting sets; that is, slow operations listed as being for multiple reasons do not appear in the statistics for individual reasons.

Report Contents and Usage

By viewing higher-level reports first (while performing a drilldown, on the way to a Slow Operation Loads report), you can narrow down the problem to a particular number of URLs and a particular reason or a particular user.
The Slow Operation Loads report provides details about particular instances of the slow operations, enabling you to take the problem diagnosis further. For example, having obtained details about particular slow operation instances due to data center delays, you could view application server logs to determine server-related causes of the problem.

All of the Slow Operation Loads reports contain a relevant list of URLs and a column giving operation begin times. Other information varies depending on the selected reason or context. For example, the Slow Operation Loads - Application Design report also shows Root cause details, Operation time, Operation size, and number of Hits. The Slow Operation Loads - Customer Care report provides the Number of operation errors, End-to-end RTT, Loss rate, and Response throughput.

Drilldown Reports

Click the time value in the Operation begin time column to display the Operation Load Sequence report. For more information, see Operation Load Sequence Reports [p. 170]. If DC RUM is integrated with Dynatrace Application Monitoring, you can click a link in the Operation begin time column to display the Dynatrace Application Monitoring Transaction Flow for the selected operation load.

Operation Load Sequence Reports

Use the Operation Load Sequence reports to investigate problems with particular operations.

How to Access the Reports

You can access the Operation Load Sequence reports by clicking an operation in the Operation begin time column on the Slow Operation Loads report. For more information, see Slow Operation Loads Reports [p. 168].

Report Contents and Usage

The Operation Load Sequence reports are used in conjunction with the Slow Operation Loads reports to diagnose particular problems. The first table is a summary of the previous report. It shows the most important information about the selected operation, such as the operation time and the number of operations returned with HTTP errors. The table below the chart shows detailed information about actual URLs that belong to a selected operation.

For XML and SOAP, Operation Elements data is identical to Operation Analysis data, so, to avoid unnecessarily keeping duplicates in the database, the VDATA_FILTER_XMLSOAP filter is set to true by default.
Keeping this filter set to true saves disk space, but because the XML and SOAP entries are filtered out, it makes reporting on the Operation Elements level (elements or headers) impossible. To change the value of the VDATA_FILTER_XMLSOAP property, open the userpropertiesadmin screen by typing its address in the Web browser's Address bar and pressing [Enter], change the filter's property value, and click Set value to accept the change. To access this screen, you need administrative privileges for the report server.

Drilldown Reports

Click a component URL in the Component URL column to display the Hit Details report. For more information, see Hit Details [p. 171].

Hit Details

The Hit Details report shows an operation hit, broken down into specific HTTP elements. If DC RUM is integrated with Dynatrace Application Monitoring, you can click a component URL in the Load Sequence table or an Operation in the Operation Details table to display the Dynatrace Application Monitoring Transaction Flow for the selected operation.

How to Access the Report

To access this report, click a component URL on the Slow Operation Load Sequence report.

By default, the table is presented in a transposed manner. This makes the table more legible when displaying elements like Request header and Response header. For more information, see Operations element data in the Data Center Real User Monitoring Advanced Diagnostics Server User Guide.

Location Health Report

Use the Location Health report to determine performance problems from a user location's perspective. The report can be launched directly from the CAS DC RUM analysis menu or in the context of a specific application or location from the Application Health Status network overlay.

The top Locations Performance table contains regions (default view), areas, and sites from which activity is observed in the report context. You can determine the location performance from an operation and a network perspective, and the impact on users. When you select a location in the top table, you filter the bottom tables to reveal active users in the location and applications that are used by this location. This helps you to understand whether the observed problem affects only some users or only some applications.

Do the following to continue analysis:

Display more metrics with the Metric Charts link.
Display network traffic generated by the location with the Software Services for Site link.
Look at the individual user or application by drilling down to the User Health or DCA reports.
Learn more about the responses of the application via the Application Responses link.

Click Hierarchy for a dimension list for each table. These dimensions establish the drilldown order and starting point. Choose at least one. For more information, see Options on the Interactive Table and Chart Report Section in the Data Center Real User Monitoring Data Mining Interface (DMI) User Guide.

User Health Report

Use the User Health report to determine the location of a performance problem in the data center infrastructure, and the impact of the problem on users. Isolate the problem to a specific operation on a specific server.

The top User Performance table contains the users (default view) observed as active in the report context. You can determine the performance and availability each user experiences. By selecting a user row in the top table, you filter the bottom tables to reveal applications/transactions and their operations that are used by this user. This helps to narrow down the analysis to a particular application and operation.

From here you can continue analysis by doing the following:

Display more metrics with the Metric Charts link.
Drill down to a user's activity with a specific application, or detailed activity for a chosen operation.
Understand the reasons for slow performance of a selected operation with the Slow Operation Cause Drilldown link or the Dynatrace Application Monitoring PurePath link.
Observe the performance of all users of a selected application by using the Data Center Analysis report link.
Learn more about the responses of the application via the Application Responses link.

Click Hierarchy for a dimension list for each table. These dimensions establish the drilldown order and starting point. Choose at least one. The Operations section measures activity within a particular application protocol compared to the baseline. For more information, see Options on the Interactive Table and Chart Report Section in the Data Center Real User Monitoring Data Mining Interface (DMI) User Guide.

Top N View Report

The Top N View report shows the most problematic software services, operations, and sites on one screen. Additionally, it provides the context of the analyzer, which is listed next to the software service or operation name. With this report, you can identify the top ten operations, software services, and sites that generate the longest response time, have the largest number of affected users, and cause the largest number of errors. Each table provides the context of the analyzer.
How to Access the Report

From the CAS top menu, choose Reports ➤ RUM Analysis ➤ Top N View.

Top N By Response Time

The Top N By Response Time tables show the top ten software services, operations, and sites with the slowest response time. In the Top Operations By Response Time table, you can identify the most problematic operations. In the Top Software Services By Response Time and Top Sites By Response Time tables, you can see how many operations were completed in the slowest of the software services and sites. Hover over the Count, Operations, and Time columns to see how many users are affected by the slow response conditions, the number of unique users, the number of application failures, the number of slow and fast operations, and server and network times.
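Conceptually, each Top N table is an aggregate-sort-truncate over the measurement data. The sketch below illustrates the idea with hypothetical per-service response-time samples; it is not the CAS query engine, just a minimal model of a "top ten by response time" table:

```python
# Hypothetical (software service, response time in seconds) measurements.
measurements = [
    ("CRM", 4.2), ("Mail", 0.3), ("ERP", 7.9),
    ("Portal", 2.1), ("Mail", 0.5), ("CRM", 3.8),
]

def top_n_by_response_time(rows, n=10):
    """Average response time per software service, slowest first."""
    totals = {}
    for service, seconds in rows:
        count, total = totals.get(service, (0, 0.0))
        totals[service] = (count + 1, total + seconds)
    averages = {s: total / count for s, (count, total) in totals.items()}
    # Sort descending so the worst performers appear first, keep the top n.
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_n_by_response_time(measurements))
```

The same shape applies to the other Top N tables; only the aggregated metric (affected users, error counts) changes.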

Top N By Affected Users

The Top N By Affected Users tables show the top ten software services, operations, and sites with the largest number of affected users. In these tables, you can see the overall number of users for the worst software services, operations, and sites, as well as the number of affected users. Hover over the Users and Affected columns to see the number of application failures, the number of slow and fast operations, and server and network times.

Top N By Error

The Top N By Error tables show the top ten software services, operations, and sites with the greatest number of errors. The Application failures column shows the number of all transaction errors for the top ten software services, operations, and sites. The following errors are taken into account:

DNS errors
The number of DNS errors.

HTTP client errors (4xx)
The sum of all HTTP client errors (4xx). This includes four categories of errors: by default, HTTP Unauthorized (401, 407) errors, HTTP Not Found (404) errors, custom client (4xx) errors, and Other HTTP (4xx) errors. The contents of the first three categories can be configured by users. However, two types of 4xx errors are of particular importance: 401 errors, related to server-level authentication, and 404 errors, indicating requests for non-existent content. These two error types are reported separately, by specific metrics.

401 Unauthorized - The server reports this error when the user's credentials supplied with the request do not satisfy page access restrictions. The HTTP server layer, not the application layer, reports 401 errors. The AMD will report "Unauthorized" errors only if server-level authentication has been configured. This is common practice for sites that are comfortable with very basic user access policies. Most commercial-grade applications (for example, most online banking or online shopping applications) do not rely on server-level authentication, but rather authenticate users at the application layer. In such a case, even if authentication fails, the server will typically send 200 OK responses, and the authentication error will be explained in the page content. So this kind of error is not very common on commercial sites.

404 Not Found - The server reports "Not Found" errors when it cannot fulfill a client request for a resource. Usually this happens due to a malformed URL, which points to a non-existent page or image. Such a URL request may come from a user who misspelled the URL, who is trying to access a URL stored in a "Favorites" folder long ago, or who made some other mistake. Malformed URLs may also exist in invalid or incorrectly designed Web pages, so the error will be reported by browsers trying to load such a page. A significant and constant number of these errors usually indicates that some pages on the server have design-related or link validation issues. In some cases, 404 errors result from server overload. It is good practice to check whether the percentage of errors is load-related.

MQ errors
The total number of IBM WebSphere Message Queue errors, including client errors, server errors, protocol errors, and security errors.

HTTP server errors (5xx)
The number of observed HTTP server errors (5xx). The 5xx response status codes indicate cases in which the Web server is aware that a server error occurred or that it is incapable of performing the request. The presence of such errors usually means that the Web server does not function as intended. The following 5xx errors are defined by the HTTP protocol standards:

500 Internal Server Error - The server encountered an unexpected condition, which prevented it from fulfilling the request.
501 Not Implemented - The server does not support the functionality required to fulfill the request.
502 Bad Gateway - The server received an invalid response from a back-end application server.
503 Service Unavailable - The server is currently unable to handle the request due to temporary overloading or maintenance of the server.
504 Gateway Timeout - The server did not receive a response from a back-end application server.
505 HTTP Version Not Supported - The server does not support the HTTP protocol version that was used in the request message.

SAP GUI errors
The number of errors detected at the protocol level in communication between the SAP application server and the SAP GUI client, as well as between the SAP application server and third-party clients using Remote Function Calls (RFC).

SAP RFC errors
The number of errors detected at the protocol level in communication between a SAP application server and a SAP client, plus the number of attributes that are defined as error indicators in the monitoring configuration.

SMTP errors
The total number of SMTP errors.

SSL errors
The number of all SSL alerts. This metric is the sum of SSL Session Fatal Errors, SSL Handshake Errors, and SSL Warnings. Hover over the column to see the numbers of errors in each category.
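The 4xx/404/401 and 5xx groupings described above can be sketched as a simple status-code classifier. This is an illustrative approximation of the report's categories, not DC RUM's actual metric logic, and it ignores the user-configurable boundaries of the custom 4xx category:

```python
def classify_http_status(code):
    """Bucket an HTTP response status roughly the way the report groups failures."""
    if code in (401, 407):
        return "HTTP Unauthorized (401, 407) errors"
    if code == 404:
        return "HTTP Not Found (404) errors"
    if 400 <= code <= 499:
        return "Other HTTP (4xx) errors"   # incl. user-configured custom 4xx
    if 500 <= code <= 599:
        return "HTTP server errors (5xx)"
    return "Not a failure"                 # 1xx/2xx/3xx responses

for status in (200, 401, 404, 418, 503):
    print(status, "->", classify_http_status(status))
```

Note how 401/407 and 404 are pulled out ahead of the generic 4xx range, mirroring the report's separate metrics for those two error types.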
The TCP Errors column shows the total number of TCP errors in the top ten software services, operations, and sites. The following TCP errors are taken into account:

TCP Errors
The total number of TCP errors. These errors may indicate server or application problems, so measuring them is critical to understanding the issues that may affect end-user experience. AMDs measure and report on the following types of TCP errors:

Connection Refused Errors - The client attempts to open a TCP session with a server, which rejects the request: the SYN packet from the client is followed by a RESET packet from the server, with matching TCP sequence numbers. This error is typically caused by resource exhaustion on the server, which is unable to accept more concurrent TCP sessions. This may be either a configuration issue (too few resources allocated in the kernel) or lack of memory. SYN flood attacks typically result in servers being unable to accept new connections.

Server session termination error - The server unexpectedly terminates a connection that was successfully opened: the server sends a RESET packet to the client. Such an error originates at an application using the monitored TCP session. It does not necessarily mean application failure; usually it means that the application encountered a condition in which it decided to immediately terminate the session with the client, for example, because of an application security policy violation by the client.

Session Abort - The client unexpectedly terminates a connection that was successfully opened: the client sends a RESET packet to the server. These errors are inspected in the context of the client application and may or may not be reported. For example, a browser running HTTP may terminate the load of a GIF file if it is older than the one previously cached, and this is normal behavior. However, if all connections to the server are terminated because the user hits the STOP button, then this is abnormal session termination and is reported as "Aborted operation" or "Stopped Page".

Client not responding errors (server timeout errors) - The server networking stack assumes that the network connection to the client exists, but the client remains idle and does not respond. In such a case, the server closes the TCP session with a RESET packet.
Such a condition may occur when the client has been silently disconnected from the network, for example due to a link failure, or when the client has crashed. Note that this error does not occur if the client ends the session gracefully, for example by closing the client application.
- Server not responding errors (client timeout errors) - The client networking stack assumes that the network connection to the server exists, but the server remains idle and does not respond. In such a case, the client closes the TCP session with a RESET packet. This may occur either during the session setup phase (no response to the SYN packet) or during a normal data exchange. It may result from intermittent network problems between the client and the server. When traffic is routed through asymmetric paths across the Internet, which is often the case, the path from the server to the client may be broken.

Network Analysis Reports

The reports grouped in the Reports > Network Analysis menu provide a network view of the traffic. They show a picture of the monitored network operation and highlight potential problems in the network, including excessive RTT or loss rate.

The Network Analysis reports show metrics for all the detected or defined regions, areas, and sites and for all analyzed software services. They help you quickly troubleshoot software service or network performance problems by answering these questions:
- Who (which clients and servers) is sending what data?
- Which software services are competing for network resources?
- When was the software service used?
- Where is the troublesome software service traffic flowing through the network?
- Why is the software service performing poorly?
- How much data (bytes) is being sent and received?

A healthy network is a necessary platform for the IT applications that support key business services. The Network Analysis reports deliver metrics on the performance levels of various software services across the network. They help you find and eliminate performance problems before they affect end users, identify the business impact of a performance problem, and plan for the growth of network infrastructure by showing network performance trends for critical software services with respect to traffic load and loss rate baselines.

Example Network Analysis Reports Usage

The following example shows how the Network Analysis reports can help you find the cause of network-related problems. It illustrates the connections between particular reports and briefly explains the aim of each.

Example: Network Anomaly for a Specific Day and Time

Start from the Total Traffic by Hour - Today report.
1. Click Reports > Network Analysis > Traffic Summary and then click the Total Traffic by Hour - Today link. The report shows today's network traffic. Notice that the loss rate was acceptable at 08:00 but changed to red (warning) at 09:00. Click the 6/2/14 09:00 time stamp.

2. On the Traffic Fault Domain Isolation - Last Hour report, note the usage-related metrics for software services, sites, servers, and clients. One client generated much more traffic than the others, so click the client IP address in the Top Client Traffic table to see which software services this client was using.
3. The selected client was using five different software services and was talking mostly to one server: the server that had the highest value of all transmitted bytes on the Traffic Fault

Domain Isolation - Last Hour report. Now you are aware of the client activity and you can act on it to avoid such situations in the future. You can also display clients for a specific software service, site, or server to investigate why the total bytes value is outside the normal range.

Voice and Video Reports

Central Analysis Server offers a set of reports providing information on VoIP call quality and video streaming-related statistics. Click Reports > Network Analysis > VoIP Overview to access the tree of reports dedicated to voice and video stream analysis. VoIP Overview is the top-level report for VoIP and video RTP streams monitored by Network Monitoring Probes or AMDs. For more information, see VoIP Overview Report [p. 178]. Voice and Video Status - Signaling is the top-level report for VoIP and video signaling (H.245, H.323, and SIP) streams monitored by AMDs. Signaling protocols are used to identify the state of the connection between the conversation endpoints, so they do not provide data to calculate call quality, availability, and performance metrics. For more information, see Voice and Video Status - Signaling Report [p. 184].

VoIP Overview Report

The VoIP Overview report is the top-level report for the voice and video monitoring reports family. Use this report to recognize anomalies or verify the status of your voice and video system.

How to Access the Report

From the CAS top menu, choose Reports > Network Analysis > VoIP Overview.

Report Contents and Usage

The report shows the number of calls with average MOS for local sites and for particular time stamps. Click a row in the Local site column to filter charts for a particular site.

Drilldown Reports

You can access more detailed reports from the following table columns and charts:

- VoIP Activity - Site, from the Local site column. For more information, see VoIP Activity - Site Report [p. 179].
- VoIP Activity - Interval, from the Time column. For more information, see VoIP Activity - Interval Report [p. 179].
- Voice and Video - Availability Charts, from all charts. For more information, see Voice and Video Graphical Reports [p. 188].

VoIP Activity - Site Report

How to Access the Report

To access this report, choose Reports > Network Analysis > VoIP Overview and click a site name.

Report Contents and Usage

The VoIP Activity - Site report shows VoIP traffic for a selected site. Use it to identify call initiators, along with the number of calls and the time spent on those calls. Because this report is not a top-level report, it always presents filtered results.

Drilldown Reports

Click the links in the report table to access the following reports:
- Voice and Video Status - Activity, from the Call initiator column. For more information, see Voice and Video Status - Activity Report [p. 182].
- Capture packets dialog box, from the Server IP address column. For more information, see Starting a Packet Capture [p. 36].
- Voice and Video - Network Charts, from the Calls column. For more information, see Voice and Video Graphical Reports [p. 188].
- Voice and Video - Availability Charts, from the VoIP MOS, VoIP delay, VoIP loss rate, and VoIP RTCP Jitter columns. For more information, see Voice and Video Graphical Reports [p. 188].

VoIP Activity - Interval Report

How to Access the Report

To access this report, choose Reports > Network Analysis > VoIP Overview and click a time stamp.

Report Contents and Usage

The VoIP Activity - Interval report shows VoIP traffic for a selected time interval. It lists call initiators and the sites to which they are assigned, with the number of calls and the time spent on those calls. Because this report is not a top-level report, it always presents filtered results.

Drilldown Reports

You can access more detailed reports from the following columns:
- Call initiator: the Voice and Video Status - Activity report (for more information, see Voice and Video Status - Activity Report [p. 182]) and the Voice and Video - Usage Charts, Voice and Video - Network Charts, and Voice and Video - Availability Charts reports (for more information, see Voice and Video Graphical Reports [p. 188]).
- Client site: the VoIP Activity - Site report. For more information, see VoIP Activity - Site Report [p. 179].

Voice and Video Status - Software Services (Codecs) Report

The Voice and Video Status - Software Services (Codecs) report is the top-level report for the voice and video RTP monitoring reports family. Use this report to recognize anomalies or verify the status of your voice and video system.

How to Access the Report

To access this report, drill down from the Voice and Video Status - Activity report and click the Software Services - RTP tab. For more information, see Voice and Video Status - Activity Report [p. 182].

Report Contents and Usage

The data is grouped in four sections: Usage, MOS, Network, and Availability. An additional Network View perspective enables you to switch to network-related metric values. The key metrics are color-coded for quick identification of abnormally performing codecs. Note that, to maintain consistency with other CAS reports, codecs are categorized as software services. For each codec, you can see:
- How much volume it consumes on the monitored network. Use the total bytes and total bandwidth usage metrics to verify this.
- How extensively it is used. Verify the number of detected calls using a given codec and the number of endpoints involved.
- A wide spectrum of performance and availability metrics that enable you to immediately see how the voice and video services are performing.

This screen, with auto-refresh enabled, can be a candidate for a network operations center (NOC) dashboard.
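The MOS scores shown in these reports are derived from network impairments such as delay, jitter, and loss. The guide does not document the exact scoring formula, but the widely used ITU-T G.107 E-model gives a feel for how such scores behave: an R-factor is computed from impairments and then mapped to a MOS value. The sketch below is a hedged illustration, not DC RUM's actual implementation; the delay and loss penalty terms in simple_r_factor are simplified and codec-independent.

```python
def r_to_mos(r: float) -> float:
    """Map an E-model R-factor (0-100) to a MOS estimate (1.0-4.5).

    Standard ITU-T G.107 conversion; values outside [0, 100] are clamped.
    """
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

def simple_r_factor(one_way_delay_ms: float, loss_pct: float) -> float:
    """Very rough R-factor sketch: start from the default 93.2 and subtract
    illustrative penalties for delay and loss (the real E-model terms are
    more involved and depend on the codec)."""
    r = 93.2
    r -= 0.024 * one_way_delay_ms              # delay impairment (simplified)
    if one_way_delay_ms > 177.3:
        r -= 0.11 * (one_way_delay_ms - 177.3)
    r -= 2.5 * loss_pct                        # loss impairment (illustrative)
    return max(0.0, min(100.0, r))

mos = r_to_mos(simple_r_factor(one_way_delay_ms=80.0, loss_pct=1.0))
print(round(mos, 2))  # a call with low delay and loss scores above 4.0
```

Note how the mapping compresses: a call with 80 ms of one-way delay and 1% loss still rates above 4.0, while heavy delay and loss quickly push the score toward the bottom of the scale.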

Drilldown Reports

The drilldowns from this report enable easy root cause analysis and fault domain isolation. All performance and usage metrics lead to detailed graph reports that enable you to see the time trends and metric value changes. The Endpoints metric is linked to the Voice and Video Status - Conversations report so you can easily see details of how the selected software service was used and which stations were affected. You can also drill down to network topology oriented reports filtered by a codec. The following reports can be opened by clicking links in the report table:
- Voice and Video Status - Sites, from the Software service (codec) column. For more information, see Voice and Video Status - Network Topology Oriented Reports [p. 183].
- Voice and Video Usage Charts, from all non-user and non-client related columns in the Usage section. For more information, see Voice and Video Graphical Reports [p. 188].
- Voice and Video Status - Conversations, from the Endpoints A and Endpoints B columns. For more information, see Voice and Video Status - Conversations Report [p. 181].
- Voice and Video - Network Charts, from columns in the MOS and Network sections. For more information, see Voice and Video Graphical Reports [p. 188].
- Voice and Video - Availability Charts, from columns in the Availability section. For more information, see Voice and Video Graphical Reports [p. 188].

Voice and Video Status - Conversations Report

The Voice and Video Status - Conversations report is a lower-level report accessed from other, more general reports. For a selected entity, a software service (codec) or a location, it shows more detailed information.

How to Access the Report

The report is accessed by drilling down from the following contexts:
- From the Endpoints column on the Voice and Video Status - Software Services (Codecs) report
- From the Local hosts and Remote hosts columns on network topology oriented reports
Report Contents and Usage

The Conversations report shows the full spectrum of metrics in time-aggregated format (per day by default). It enables you to instantly identify which user, from which location, performed what activity and how many resources they used. Because this report is not a top-level report, it always presents filtered results.

Drilldown Reports

You can access detailed charts for the metrics by drilling down to the respective graphical reports. You can also drill down to location-oriented reports by selecting Region, Area, or Site for

endpoints taking part in the conversations. You can access the detailed Activity report by selecting an endpoint whose activity is of interest. The following reports can be opened by clicking links in the report table:
- Voice and Video Status - Activity, from the IP Address column. For more information, see Voice and Video Status - Activity Report [p. 182].
- Voice and Video Status - Regions, from the Region column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Network Topology Oriented Reports [p. 183].
- Voice and Video Status - Areas, from the Area column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Network Topology Oriented Reports [p. 183].
- Voice and Video Status - Sites, from the Site column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Network Topology Oriented Reports [p. 183].
- Voice and Video - Usage Charts, from all columns in the Usage section. For more information, see Voice and Video Graphical Reports [p. 188].
- Voice and Video - Network Charts, from all columns in the Performance section. For more information, see Voice and Video Graphical Reports [p. 188].
- Voice and Video - Availability Charts, from all columns in the Availability section. For more information, see Voice and Video Graphical Reports [p. 188].

Voice and Video Status - Activity Report

The Voice and Video Status - Activity report is the most detailed report you can access from the voice and video status reports.

How to Access the Report

The Voice and Video Status - Activity report is accessed as a drilldown from the Voice and Video Status - Conversations report for each IP address of a given endpoint (Endpoint-A or Endpoint-B). Accessed in this way, it shows the activity of a given station. For more information, see Voice and Video Status - Conversations Report [p. 181]. To apply additional filters to this report while viewing it, click values in the Time column.
Accessed in this way, the report shows the total activity for a given moment in time. Click the IP address of a given endpoint to limit the set of data to a particular host at a given moment.

Report Contents and Usage

The Activity report adds a temporal dimension to the information presented by the Voice and Video Status - Conversations report. Using it, you can recognize when a given user from a certain location performed a certain activity.

Drilldown Reports

The following reports can be opened by clicking links in the report table:
- Voice and Video Status - Regions, from the Region column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Network Topology Oriented Reports [p. 183].
- Voice and Video Status - Areas, from the Area column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Network Topology Oriented Reports [p. 183].
- Voice and Video Status - Sites, from the Site column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Network Topology Oriented Reports [p. 183].
- Voice and Video - Usage Charts, from all columns in the Usage section. For more information, see Voice and Video Graphical Reports [p. 188].
- Voice and Video - Network Charts, from all columns in the Performance section. For more information, see Voice and Video Graphical Reports [p. 188].
- Voice and Video - Availability Charts, from all columns in the Availability section. For more information, see Voice and Video Graphical Reports [p. 188].

Voice and Video Status - Network Topology Oriented Reports

The set of network topology oriented reports for voice and video status presents data at three different location-aggregation levels. Use these reports to identify where measurements pass thresholds.

How to Access the Reports

To access voice and video status reports for sites, areas, and regions, drill down from the Voice and Video Status - Activity report and click the Network View - RTP tab. The Network View - RTP perspective contains child links for each location type.

Report Contents and Usage

These reports enable you to assess the fingerprint, intensity of usage, performance, and availability from the perspective of locations. The most important metrics are color-coded against thresholds for easier identification of problems.
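The color-coding mentioned here follows the common pattern of comparing each metric to warning and critical thresholds. As a hedged illustration (the threshold values and function below are invented for this sketch, not DC RUM defaults), such a mapping might look like:

```python
def color_code(value: float, warning: float, critical: float,
               higher_is_worse: bool = True) -> str:
    """Classify a metric value as 'green', 'yellow', or 'red' against
    warning/critical thresholds. For metrics like MOS, where lower values
    are worse, set higher_is_worse=False."""
    if not higher_is_worse:
        # Negating all three values flips the comparison direction.
        value, warning, critical = -value, -warning, -critical
    if value >= critical:
        return "red"
    if value >= warning:
        return "yellow"
    return "green"

# Illustrative thresholds only (not product defaults):
print(color_code(3.0, warning=2.0, critical=5.0))  # loss rate of 3% -> yellow
print(color_code(3.9, warning=3.6, critical=3.1, higher_is_worse=False))  # MOS 3.9 -> green
```

The same helper covers both "higher is worse" metrics (loss rate, RTT) and "lower is worse" metrics (MOS) by negating the comparison, which keeps a single threshold rule for every column.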
Drilldown Reports

Within this set, the reports are interlinked so you can perform fault domain analysis by drilling down from higher aggregation levels (regions) to lower levels (areas or sites). All performance and usage metrics lead to detailed graph reports, allowing you to see the time trends and metric value changes. The following reports can be opened by clicking links in the report table:

- Voice and Video Status - Areas, from region-related columns (for example, Local region or Remote region) on the Voice and Video Status - Regions report.
- Voice and Video Status - Sites, from area-related columns (for example, Local area or Remote area) on the Voice and Video Status - Areas report.
- Voice and Video Status - Conversations, from the host-related columns (for example, Local hosts or Remote hosts) in the Usage section. For more information, see Voice and Video Status - Conversations Report [p. 181].
- Voice and Video - Intranetwork Usage Charts, from all non-host related columns in the Usage section. For more information, see Voice and Video Graphical Reports [p. 188].
- Voice and Video - Intranetwork Network Charts, from columns in the MOS and Network sections. For more information, see Voice and Video Graphical Reports [p. 188].
- Voice and Video - Intranetwork Availability Charts, from columns in the Availability section. For more information, see Voice and Video Graphical Reports [p. 188].

Voice and Video Status - Signaling Report

The Voice and Video Status - Signaling report is the top-level report for the voice and video H.245, H.323, and SIP signaling protocol monitoring reports family. Use this report to recognize anomalies or verify the status of your voice and video system.

How to Access the Report

To access this report, drill down from the Voice and Video Status - Activity report and click the Software Services - Signaling tab. For more information, see Voice and Video Status - Activity Report [p. 182].

Report Contents and Usage

The data is grouped in two sections: Usage and call-related counters. Note that, to maintain consistency with other CAS reports, codecs are categorized as software services. For each codec, you can see:
- How much volume it consumes on the monitored network. Use the total bytes and total bandwidth usage metrics to verify this.
- How extensively it is used.
Verify the number of detected calls using a given codec and the number of endpoints involved.
- Counters showing the overall number of call attempts, accompanied by counters showing the numbers of calls that failed due to errors during the begin phase, calls not started because of the remote conversation peer, and calls that finished with a termination error.
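The call-related counters described above amount to tallying each monitored call by its outcome. A minimal sketch of that aggregation, assuming invented outcome labels rather than DC RUM's internal names:

```python
from collections import Counter

def tally_calls(call_outcomes):
    """Aggregate per-call outcomes into the counters the signaling report
    shows: total attempts, setup (begin-phase) failures, calls rejected by
    the remote peer, and calls that ended with a termination error.
    The outcome labels here are illustrative."""
    counters = Counter(attempts=0, setup_failures=0,
                       rejected_by_peer=0, termination_errors=0)
    for outcome in call_outcomes:
        counters["attempts"] += 1          # every call counts as an attempt
        if outcome == "setup_failure":
            counters["setup_failures"] += 1
        elif outcome == "rejected":
            counters["rejected_by_peer"] += 1
        elif outcome == "termination_error":
            counters["termination_errors"] += 1
    return dict(counters)

outcomes = ["ok", "setup_failure", "ok", "rejected", "termination_error", "ok"]
print(tally_calls(outcomes))
```

Successful calls still increment the attempts counter, which is why the report pairs the overall attempt count with the three failure counters rather than listing successes separately.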

Drilldown Reports

The drilldowns from this report enable easy root cause analysis and fault domain isolation. All performance and usage metrics lead to detailed graph reports that enable you to see the time trends and metric value changes. The Endpoints metric is linked to the Voice and Video Status - Conversations report so you can easily see details of how the selected software service was used and which stations were affected. You can also drill down to network topology oriented reports filtered by a codec. The following reports can be opened by clicking links in the report table:
- Voice and Video Status - Signaling - Sites, from the Software service (codec) column. For more information, see Voice and Video Status - Signaling - Network Topology Oriented Reports [p. 187].
- Voice and Video Usage Charts, from all non-user and non-client related columns in the Usage section. For more information, see Voice and Video Graphical Reports [p. 188].
- Voice and Video Status - Signaling - Conversations, from the user-related columns, that is, Endpoints A and B. For more information, see Voice and Video Status - Signaling - Conversations Report [p. 186].

Voice and Video Status - Signaling - Activity Report

The Voice and Video Status - Signaling - Activity report is the most detailed report you can access from the voice and video signaling status reports.

How to Access the Report

The Voice and Video Status - Signaling - Activity report is accessed as a drilldown from the Voice and Video Status - Signaling - Conversations report for each IP address of a given endpoint (Endpoint-A or Endpoint-B). Accessed in this way, it shows the activity of a given station. For more information, see Voice and Video Status - Signaling - Conversations Report [p. 186]. To apply additional filters to this report while viewing it, click values in the Time column. Accessed in this way, the report shows the total activity for a given moment in time.
Click the IP address of a given endpoint to limit the set of data to a particular host at a given moment.

Report Contents and Usage

The Activity report adds a temporal dimension to the information presented by the Voice and Video Status - Signaling - Conversations report. Using it, you can recognize when a given user from a certain location performed a certain activity.

Drilldown Reports

The following reports can be opened by clicking links in the report table:
- Voice and Video Status - Signaling - Activity, from the IP Address column. For more information, see Voice and Video Status - Signaling - Activity Report [p. 185].

- Voice and Video Status - Regions, from the Region column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Signaling - Network Topology Oriented Reports [p. 187].
- Voice and Video Status - Areas, from the Area column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Signaling - Network Topology Oriented Reports [p. 187].
- Voice and Video Status - Sites, from the Site column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Signaling - Network Topology Oriented Reports [p. 187].
- Voice and Video - Usage Charts, from all columns in the Usage section. For more information, see Voice and Video Graphical Reports [p. 188].

Voice and Video Status - Signaling - Conversations Report

The Voice and Video Status - Signaling - Conversations report is a lower-level report accessed from other, more general reports. For a selected entity, a software service (codec) or a location, it shows more detailed information.

How to Access the Report

The report is accessed by drilling down from the following contexts:
- From the Endpoints column on the Voice and Video Status - Signaling report
- From the Local hosts and Remote hosts columns on network topology oriented signaling reports

Report Contents and Usage

The Conversations report shows the full spectrum of metrics in time-aggregated format (per day by default). It enables you to instantly identify which user, from which location, performed what activity and how many resources they used. Because this report is not a top-level report, it always presents filtered results.

Drilldown Reports

You can access detailed charts for the metrics by drilling down to the respective graphical reports. You can also drill down to location-oriented reports by selecting Region, Area, or Site for endpoints taking part in the conversations.
You can access the detailed Activity report by selecting the IP address of an endpoint whose activity is of interest. The following reports can be opened by clicking links in the report table:
- Voice and Video Status - Signaling - Activity, from the IP Address column. For more information, see Voice and Video Status - Signaling - Activity Report [p. 185].
- Voice and Video Status - Regions, from the Region column for Endpoint-A and Endpoint-B.

For more information, see Voice and Video Status - Signaling - Network Topology Oriented Reports [p. 187].
- Voice and Video Status - Areas, from the Area column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Signaling - Network Topology Oriented Reports [p. 187].
- Voice and Video Status - Sites, from the Site column for Endpoint-A and Endpoint-B. For more information, see Voice and Video Status - Signaling - Network Topology Oriented Reports [p. 187].
- Voice and Video - Usage Charts, from all columns in the Usage section. For more information, see Voice and Video Graphical Reports [p. 188].

Voice and Video Status - Signaling - Network Topology Oriented Reports

The set of network topology oriented reports for voice and video signaling status presents data at three different location-aggregation levels. Use these reports to identify where measurements pass thresholds.

How to Access the Reports

Click the Network View - Signaling tab on the Voice and Video Status - Signaling report to access voice and video status reports for sites, areas, and regions. The Network View perspective contains child links for each location type.

Report Contents and Usage

These reports enable you to assess the fingerprint, intensity of usage, performance, and availability from the perspective of locations. The most important metrics are color-coded against thresholds for easier identification of problems.

Drilldown Reports

Within this set, the reports are interlinked so you can perform fault domain analysis by drilling down from higher aggregation levels (regions) to lower levels (areas or sites). All performance and usage metrics lead to detailed graph reports, allowing you to see the time trends and metric value changes.
The following reports can be opened by clicking links in the report table:
- Voice and Video Status - Signaling - Areas, from region-related columns (for example, Local region or Remote region) on the Voice and Video Status - Signaling - Regions report.
- Voice and Video Status - Signaling - Sites, from area-related columns (for example, Local area or Remote area) on the Voice and Video Status - Signaling - Areas report.
- Voice and Video Status - Signaling - Conversations, from the host-related columns (for example, Local hosts or Remote hosts) in the Usage section. For more information, see Voice and Video Status - Signaling - Conversations Report [p. 186].

- Voice and Video - Intranetwork Usage Charts, from all non-host related columns in the Usage section. For more information, see Voice and Video Graphical Reports [p. 188].

Voice and Video Graphical Reports

Voice and video graphical reports show information for a selected entity in an easy-to-read format. The default time range is the last day.

How to Access the Reports

The graphical reports for voice and video status are accessed as drilldowns from a number of metrics on all tabular voice and video related reports.

Report Contents and Usage

To preserve clarity, voice and video reports are divided into three sections: Usage, Network, and Availability. Depending on the context of your drilldown, you receive an optional bi-directional resolution. The reports are:
- Voice and Video - Usage Charts
- Voice and Video - Network Charts
- Voice and Video - Availability Charts
- Voice and Video - Internetwork Usage Charts
- Voice and Video - Internetwork Network Charts
- Voice and Video - Internetwork Availability Charts

Note that these reports are available only as drilldowns. The content is always filtered by conditions inherited from the parent report. Charts also add a temporal perspective to the filtered data.

RUM Browser Reports

The RUM Browser reports present UEM data calculated by the Dynatrace Application Monitoring server and fed to the CAS as a result of the DC RUM - Dynatrace Application Monitoring integration.

RUM Browser - User Actions

The report presents the details of the performance of RUM Browser user actions, also in the context of browsers, operating systems, and the impact of third-party content and content delivery networks.

How to Access the Report

To access the RUM Browser - User Actions report in the scope available for the selected application, drill down from the Primary Metric (Health Index, Availability, Performance, Operation Time, or Operations) overlay in the Application Health Status - Applications report.
For more information, see Application Health Status - Applications [p. 89].

To access the report narrowed to the scope of a selected client, use the Client name column of the RUM Browser - Visits report. You can also access the report narrowed to the scope of a selected user action through the Application column of the EUE Overview Applications report. For more information, see Applications Report [p. 107].

Report Contents and Usage

The report is an entry point for analysis from the perspective of user actions. The top worst browser and OS charts let you analyze the application performance in the selected time range in the context of the browsers and operating systems used to perform the user actions. You can also analyze the user action time spent on downloading third-party content and content from external domains marked as CDN (Content Delivery Network) in the Dynatrace Application Monitoring server configuration. The worst performing user actions table lets you drill down to the root of the observed problems by accessing the report served by Dynatrace Application Monitoring and the User Action Metric Charts report narrowed to the scope of the selected user action. For more information, see RUM Browser - Metric Charts [p. 190].

Changing the Default Report Layout

To customize this report, click Edit report in the Actions list. The maximum set of provided statistics includes all the dimensions and metrics available in the RUM Browser data view. For more information, see RUM Browser data [p. 364].

RUM Browser - Visits

The report presents the details of the performance of RUM Browser visits.

How to Access the Report

To access the RUM Browser - Visits report in the scope available for the selected application, drill down from the Business Impact overlay in the Application Health Status - Applications report. You can also click the client name in the affected users column to narrow the scope of the report to the selected client. For more information, see Application Health Status - Applications [p. 89].
Note that the links from the Business Impact overlay are active only if Browser is selected as the primary real-user measurement source for the application. For more information, see Defining an Application Rule in the Data Center Real User Monitoring Administration Guide.

Report Contents and Usage

The report is an entry point for analysis from the perspective of visits. It lets you analyze the application performance in the selected time range and see the share of visits affected by performance and availability problems among all unique visits. Use the drilldowns in the Client name column to see the RUM Browser - User Actions report in the scope of the selected client and to access the report in the Dynatrace Application Monitoring client.

Changing the Default Report Layout

To customize this report, click Edit report in the Actions list. The maximum set of provided statistics includes all the dimensions and metrics available in the RUM Browser data view. For more information, see RUM Browser data [p. 364].

RUM Browser - Metric Charts

These reports present the RUM Browser metric values on charts, which facilitates quick comparison and analysis of the selected data. All the metrics presented on the RUM Browser - Metric Charts reports are grouped in the RUM Browser data view. For more information, see RUM Browser data [p. 364].

RUM Browser - Metric Charts - Performance

The RUM Browser - Metric Charts - Performance report visualizes the most important performance metrics calculated for RUM Browser data, presenting them on the following charts:
- User Action time contribution and performance: shows the user action time breakdown into network and server time, accompanied by the performance.
- User Action requests: shows the user action requests broken down into fast, slow, and failed.
- CDN and 3rd party impact: shows the CDN and third-party times.
- User Action connect performance: shows the breakdown into DNS lookup, TCP connection, and SSL setup times.
- User Action W3C timing: shows the times for the W3C-specified navigation timing landmarks, charting the total time from a user clicking a link or button until the browser reaches a specific point in the resulting page loading process. These include Request Start, Document Fetch Done, and Document Interactive, accompanied by the total document download time.

To access the report, use the drilldown link from the User action column of the RUM Browser - User Actions report. For more information, see RUM Browser - User Actions [p. 188].
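The landmarks on the W3C timing chart come from the W3C Navigation Timing model, where each familiar breakdown is a simple difference between two timestamps. The sketch below uses attribute names from the W3C specification (not DC RUM's internal field names) and invented millisecond values for illustration:

```python
# Millisecond timestamps following W3C Navigation Timing attribute names
# (values are invented for illustration).
t = {
    "navigationStart": 0, "domainLookupStart": 5, "domainLookupEnd": 35,
    "connectStart": 35, "secureConnectionStart": 60, "connectEnd": 110,
    "requestStart": 110, "responseStart": 180, "responseEnd": 260,
    "domInteractive": 420, "loadEventEnd": 900,
}

breakdown = {
    # The connect-performance chart's components:
    "dns_lookup": t["domainLookupEnd"] - t["domainLookupStart"],
    "tcp_connect": t["secureConnectionStart"] - t["connectStart"],
    "ssl_setup": t["connectEnd"] - t["secureConnectionStart"],
    # Landmarks measured from the start of navigation:
    "document_fetch": t["responseEnd"] - t["requestStart"],
    "time_to_interactive": t["domInteractive"] - t["navigationStart"],
    "total_load": t["loadEventEnd"] - t["navigationStart"],
}
print(breakdown)
```

With these sample values, DNS lookup is 30 ms, TCP connect 25 ms, SSL setup 50 ms, document fetch 150 ms, and the page becomes interactive 420 ms after the click, well before the 900 ms total load time.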
RUM Browser - Metric Charts - Visits

The RUM Browser - Metric Charts - Visits report visualizes the RUM Browser visits performance, presenting it on the following charts:

User Action visits and affected visits Shows the breakdown into the numbers of performance-affected and availability-affected visits, accompanied by the total number of unique visits.
User Action response size Shows the size of user actions.

CHAPTER 9
Alerts

The report server's problem detection and alert system features a four-layer architecture with advanced filtering options available at each layer. Alerts are sent to recipients based on subscriptions: you select which alerts you want to receive (which alert notifications you want to subscribe to) and thus become a subscriber to those alerts. You can also apply filtering criteria and select the delivery mechanism. Alerts can be sent to a specified e-mail address or via an SNMP trap. Alerts are generated even if they have no subscribers assigned, and all generated alerts are recorded in the alert logs. You can modify the existing alert definitions (those owned by System) or define new alerts.

NOTE
Because alert parameter names are automatically parsed, alert parameters must retain their default names, even if the user interface language is changed. Refer to the Data Center Real User Monitoring Alert System Administration Guide.

Alert System

The alert mechanism enables you to be proactive when dealing with problems and to remove them before they start affecting users. In the reactive model of dealing with problems, you react to problems reported by your users (for example, website users). In such a scenario, the CAS monitors a given website while the AMD continuously measures operation time for every operation, transaction, and user. Using the gathered data, the report server displays all details on charts and makes it possible to measure performance and troubleshoot problems. When problems are reported by users, you look at the reports and find that, for example, the problem is with HTTP response time from a certain server. You then fix the problem: reboot the server, restart the process, or take other corrective action. In other words, you react to a problem that has already affected your users. In the proactive model, you detect problems before your users can notice them.
For this, you need two things: the knowledge of how the problems manifest themselves in your particular

environment, and the means of detecting such situations. For example, if long HTTP response time is the best early indicator of developing problems, you could display a chart showing the HTTP response time metric and take action when the value of the metric rises above a certain value. It is even better to automate the process and let the system inform you when the metric exceeds the threshold. This is exactly what the alert mechanism was designed to do. Ideally, the system could inform a designated operator about the problem and feed data into an alert management engine, which could then perform a corrective action such as restarting the offending server or process. Thus, the alert mechanism enables you to move some of the responsibility and intelligence from a human operator (watching the charts) to the machine (acting on alerts).

Defining and Modifying Alerts

For an alert to be raised, you need to specify the alert triggering conditions, which requires careful observation and knowledge of the system. You need to ensure that:

You understand what you are trying to achieve.
You have gathered your requirements.
You know how problems in the monitored system manifest themselves.
You can translate your intentions into alert configuration.

You must ensure that alerts detect error situations and nothing but error situations. In other words, you must ensure that failure notifications are sent and corrective actions are performed whenever needed, but only in those situations. When configuring alerts, first consider what the system would be showing if you were troubleshooting a failure in a reactive mode. This could be, for example, slow operations, HTTP response time, SSL handshake errors, stopped pages, 5xx HTTP errors on the login URL, or some textual information that needs to be captured with application error recognition.
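As an illustration of what a triggering condition amounts to in practice, the following sketch checks whether a metric (here, a hypothetical per-minute server time series) has stayed above a threshold for a sustained period before an alert should fire. This is illustrative only; in DC RUM such conditions are configured in the alert definition, not coded by hand.

```python
def sustained_breach(samples, threshold, min_consecutive):
    """Return True if `samples` ends with at least `min_consecutive`
    consecutive values above `threshold`."""
    run = 0
    for value in samples:
        # Count the length of the current run of above-threshold values;
        # any value at or below the threshold resets the run.
        run = run + 1 if value > threshold else 0
    return run >= min_consecutive

# A brief 5-minute spike alone does not trigger; 15 sustained minutes would.
spike = [0.2] * 10 + [3.0] * 5       # high for only the last 5 minutes
sustained = [0.2] * 5 + [3.0] * 15   # high for the last 15 minutes
print(sustained_breach(spike, 1.0, 15))      # False
print(sustained_breach(sustained, 1.0, 15))  # True
```

The threshold (1.0) and duration (15 samples) are arbitrary illustration values; the point is that a single breach is not the same as a sustained one.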
Then you need to ask yourself what values, sustained for what duration, are still acceptable and what values mean a real problem. For example, 5 minutes of high server time might not signify a problem, but if it stays high for more than 15 minutes it might be, particularly if after 30 minutes you also see 5xx HTTP errors. Then you have to react. With this type of information, you can start looking for the right alerts to configure. It is not enough to detect alert conditions and then trigger and send alert notifications. You need a business process that ensures the situation will be fixed as soon as possible. In other words, it is not enough to generate many alerts from monitoring tools if you still react to problems only when users call to complain.

Usage Scenarios for Alerts

The alert system can satisfy various user requirements and operational scenarios, such as:

Notifying the recipient of both the beginning and the end of the alert condition. The user is notified when an alert condition is raised and also when the situation returns to normal.

Notifying the recipient only if a given condition lasts for a certain period of time, or if a given event is repeated several times. This enables the user to focus on real issues and not on insignificant or intermittent glitches.
Notifying the recipient several times at regular intervals throughout the duration of the problem.

Types of Alerts

Alerts can be divided into categories based on the underlying detector mechanism and on their function. Alert detectors are the actual mechanisms responsible for analyzing the monitored traffic and recognizing alert triggering events. The detector mechanism determines such things as the types and number of parameters a given alert takes (or can be modified to take), the speed of processing, and user access to the actual detector code. In most cases, you will be working with user-defined metric alerts, which provide a simple and fast mechanism for performing complex queries on a set of predefined metrics or on expressions composed of such metrics. They are easy to create, modify, and use, and they execute quickly: up to 1,000 alert definitions can be processed in one reporting cycle. It is recommended that metric alerts be used whenever possible because of their speed of execution and ease of modification. The following types of metric alerts can be configured:

Real user performance (probe) These alerts monitor traffic between a client and a server. They are based on traffic monitored by the AMD, including the elements configured on the CAS: applications, transactions, reporting groups, tiers, regions, areas, and sites.
Application user experience These alerts are based on the data provided in the Application, transaction, and tier data view.
Enterprise Synthetic and sequence These alerts monitor transactions and track the HTTP-based software service activity of synthetic agents and standard users. They are based on traffic monitored by DC RUM or Enterprise Synthetic.
Citrix/WTS hardware These alerts monitor the performance of Citrix servers or Windows Terminal Services (for example, the number of active or open sessions).
Network link These alerts monitor link utilization.
Internetwork traffic These alerts monitor traffic coming in and going out of a specific site.
Synthetic backbone These alerts report problems related to Dynatrace Synthetic Monitoring transactional traffic.

Although it is recommended that metric alerts be used whenever possible, not every possible alert condition can be expressed as a metric alert. This is why a set of predefined SQL-based alerts is provided. These alerts perform SQL queries on the traffic monitoring database. The benefit of using them is that there are no constraints on the complexity of the queries: any event

that can be expressed in SQL can be detected. However, the SQL queries take a considerable amount of time to execute, so performance problems can result. You cannot create new SQL-based alert definitions or duplicate the existing ones in the RUM Console. You can, however, modify some of the detector settings, for example change the threshold values, or delete the alert definitions. Among the predefined alert definitions there are also a few non-SQL alerts that were designed for specific purposes and can be modified in only limited ways. Most of them monitor and report on the resources of a report server and cannot be deleted from the system. The predefined alerts are grouped based on the type of event on which they report:

Anomalies Alerts sent when an abnormal situation is detected (for example, when too many services are detected for a single user).
Diagnostics Alerts related to the resources of a report server (for example, free space on the server hard drives or free space for the server database).
New objects Alerts sent when a user, server, or service registers for the first time in the monitored network.
Performance Alerts that report mainly errors that occur during the execution of operations and abnormal time metric values for the operations. They also notify recipients about application availability problems.

Alert States and Notifications

The alert system is a multi-layer mechanism. For a DC RUM user, the most important elements of this mechanism are alert states and notifications. An alert is raised if the monitored traffic meets the conditions specified in the alert definition, such as when a particular metric exceeds a defined threshold value. An optional notification can then be sent.

Alert States

If a given metric exceeds its threshold value, an alert state might not be triggered immediately. Exactly when an alert is triggered is defined in the alert definition.
It often happens that you want to raise an alert only after a threshold has been exceeded a specific number of times in a given time interval. Similarly, notifications are not sent in direct response to the triggering conditions but in connection with alert states being raised, remaining on, or being lowered. For example, an alert can be raised:

As soon as the triggering conditions are fulfilled (after just one occurrence of the alert condition).
After a specified number of occurrences of a given condition.

An alert state can then be lowered, or will expire:

Immediately after the condition that triggered the alert has ceased to occur.

If the triggering condition has not reappeared for a specified number of minutes.
If the triggering condition has not reappeared for a specified number of reporting cycles.

A condition can repeat a number of times, but after an alert is triggered (raised), it remains raised until it is turned off or expires (is lowered). Similarly, after an alert is raised, a notification can be sent zero or more times while the alert state remains on, and a notification can be sent when the alert is turned off.

Notifications

After an alert is raised, an optional notification can be sent. Whether a notification is sent depends on the alert definition. Later, if the alert state remains on, repeated notifications can also be sent as needed. In particular, while the alert remains on, an alert notification can be repeated:

In every reporting cycle
Every specified number of minutes

Alert cancellation notifications are also possible: an alert definition can specify that a notification should also be sent when the alert state is lowered, that is, when the alert reverts to the off state. Notifications are sent not in direct response to triggering conditions, but in response to alert states being raised, remaining on, or being lowered. One alert can send a number of notifications. After an alert is turned on, it remains in the on state until it is turned off or expires.

Means of Alert Delivery

Alert notifications can be sent to a specified e-mail address, via SNMP traps, or delivered to COS. Notifications are sent to recipients based on subscriptions. Users, referred to as alert subscribers, can select which alerts they want to receive, apply additional filtering criteria, and select the delivery mechanism. When e-mail is the selected delivery mechanism, all alerts that have occurred within a single monitoring interval are by default sent in one message. Every enabled alert, even if it has no recipients defined, is generated and can be viewed in the alert logs.
All alert notifications, whether e-mailed or not, are recorded in the alert logs. For more information, see Alert Log Viewer in the Data Center Real User Monitoring Administration Guide. When SNMP traps are the selected delivery medium, a separate trap is associated with each alert notification. Each trap has an associated trap definition, identified by an OID, in the MIB in the alarms.mib file. This MIB can be imported on the trap recipient to correctly interpret the meaning of the alert and automate any corrective actions. Refer to your network management platform manual for information on how to install third-party MIBs. Alert notifications can also be delivered to COS. You can check the release number of the currently running module in the Administration Console. To open the Administration Console from the Windows Start menu, choose Programs > Compuware > Compuware Open Server > Administration Console.
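The raise/lower life cycle described in this chapter can be sketched as a small state machine: an alert is raised after a given number of occurrences of the triggering condition, lowered after a given number of quiet reporting cycles, and notifications are emitted on raise and (optionally) on lower. The class and its parameters are hypothetical, not DC RUM internals.

```python
class AlertState:
    def __init__(self, raise_after=1, lower_after=1, notify_on_lower=True):
        self.raise_after = raise_after        # occurrences needed to raise
        self.lower_after = lower_after        # quiet cycles needed to lower
        self.notify_on_lower = notify_on_lower
        self.raised = False
        self.hits = 0        # occurrences of the triggering condition
        self.quiet = 0       # consecutive cycles without the condition
        self.notifications = []

    def cycle(self, condition_met):
        """Process one reporting cycle; record any notification sent."""
        if condition_met:
            self.hits += 1
            self.quiet = 0
            if not self.raised and self.hits >= self.raise_after:
                self.raised = True
                self.notifications.append("raised")
        else:
            self.quiet += 1
            if self.raised and self.quiet >= self.lower_after:
                self.raised = False
                self.hits = 0
                if self.notify_on_lower:
                    self.notifications.append("lowered")

alert = AlertState(raise_after=2, lower_after=3)
for seen in [True, True, False, False, False]:
    alert.cycle(seen)
print(alert.notifications)  # ['raised', 'lowered']
```

Here the alert is raised on the second occurrence and lowered after three quiet cycles; a single occurrence, or a brief gap in the condition, produces no notification.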


APPENDIX A
Central Analysis Server Data Views

Data views provide access to data stored in the CAS database. Users can build custom reports based on data views in the Data Mining Interface. Note that in the 12.1 release, the CAS data view names were changed to better reflect the scope of the data they cover. Refer to the table for details.

Old data view name -> New data view name

Central Analysis Server:
Baseline traffic -> Software service, operation, and site baselines
Baseline transaction -> Synthetic and sequence transaction baselines
Citrix statistics -> Citrix/WTS hardware data
Internetwork traffic -> Internetwork traffic data
Low significance traffic -> Low-significance traffic
MQ statistics -> N/A
Monitored links -> Network link data
Monitored traffic -> Software service, operation, and site data
Tier baseline from cache -> Application, transaction, and tier baselines
Tier data -> Application, transaction, and tier data
Transaction -> Synthetic and sequence transaction data

Gomez Synthetic Monitoring:
Baseline Gomez Synthetic Monitoring (Page-level) -> Synthetic Monitoring page baselines
Baseline Gomez Synthetic Monitoring (Test-level) -> Synthetic Monitoring test baselines
Gomez Synthetic Monitoring (Page-level) -> Synthetic Monitoring page data
Gomez Synthetic Monitoring (Test-level) -> Synthetic Monitoring test data
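The renaming above is a one-to-one mapping, so updating references in saved report definitions amounts to a dictionary lookup. The helper below is hypothetical, not a DC RUM tool; it covers only the renamed CAS views (MQ statistics has no 12.1+ equivalent and is omitted).

```python
# Hypothetical old-to-new translation for the 12.1 CAS data view renaming.
RENAMED_DATA_VIEWS = {
    "Baseline traffic": "Software service, operation, and site baselines",
    "Baseline transaction": "Synthetic and sequence transaction baselines",
    "Citrix statistics": "Citrix/WTS hardware data",
    "Internetwork traffic": "Internetwork traffic data",
    "Low significance traffic": "Low-significance traffic",
    "Monitored links": "Network link data",
    "Monitored traffic": "Software service, operation, and site data",
    "Tier baseline from cache": "Application, transaction, and tier baselines",
    "Tier data": "Application, transaction, and tier data",
    "Transaction": "Synthetic and sequence transaction data",
}

def current_view_name(old_name):
    # Views that were not renamed keep their original name.
    return RENAMED_DATA_VIEWS.get(old_name, old_name)

print(current_view_name("Monitored traffic"))
# Software service, operation, and site data
```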

Central Analysis Server

Software service, operation, and site data

This data view provides dimensions and metrics to analyze the monitored traffic.

Software service, operation, and site data dimensions

Agent The name of the synthetic agent that loaded the HTTP pages, for example, Keynote, Gomez, or Mercury. The name of the agent is determined from the User-agent field of the HTTP request and/or from agent user names or IP addresses configured on the server.
Analysis type Assumes two values: Non-transaction and Transaction. It determines whether transactional (TCP-based) or non-transactional traffic is considered.
Analyzer The name of the traffic analyzer. For more information, see Concept of Protocol Analyzers.
Analyzer group The logical group of analyzers based on the type of the analyzed traffic. For more information, see Concept of Protocol Analyzers.
Application A universal container that can accommodate transactions.
Application Monitoring Server ID The identifier of the Application Monitoring server.
Application Monitoring System Profile The name of the Application Monitoring System profile in use.
Business day The classification of days as business or non-business, as defined in the Business Hours Configuration tool.
Business hour The classification of hours as business and non-business, as defined in the Business Hours Configuration tool. Possible values are Business and Off-business.
Call initiator The IP address of the party initiating the call.
Call manager IP The IP address of the call manager used in a VoIP call.
Class of service The name identifying a Type of Service value. The mapping of Class of Service names to different values of Type of Service is defined in the Central Analysis Server configuration.
Client area Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in the case of Synthetic Backbone reports, into a hierarchy.
They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks, or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.

Client city Geographical data about the client site, or the Backbone node in the case of Synthetic Backbone reports.
Client country Geographical data about the client country, or the Backbone node country in the case of Synthetic Backbone reports.
Client geographical region Geographical data about the client region, or the Backbone node region in the case of Synthetic Backbone reports.
Client group The client's group, or a group of Backbone nodes in the case of Synthetic Backbone reports, as manually defined in the Central Analysis Server.
Client hardware The user's hardware type (for example, a mobile phone model).
Client internal IP address The client IP address as seen in the client's local network.
Client IP address The IP address of the client, or the Backbone node in the case of Internet synthetic reports.
Client OS The user's operating system.
Client region Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in the case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client site Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in the case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions, clients' BGP Autonomous System names, CIDR blocks, or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client site description The optional description of the client site, or a Backbone node in the case of Synthetic Backbone reports.
Client site ID When sites are ASes, Client site ID contains the AS number, which is also given in Client ASN, or Backbone node ASN in the case of Synthetic Backbone reports.
For manual sites, Client site ID is identical to Client site and contains the site name as defined in your site configuration. Sites based on CIDR blocks or subnets are identified by IP addresses.
Client site type One of the site types: AS, Active, CIDR Block, Default, External, Manual, Network, or Predefined. External is a site defined by a user in external configuration files. A manual site

is defined by a user by means of the configuration interface on the report server. Predefined sites are based on a mapping contained in a special configuration file.
Client site UDL A dimension designed to filter only the User Defined Links. By default it is set to true (Yes) for the WAN Optimization Sites report.
Client site WAN Optimized Link Indicates whether the site to which the client belongs is selected as both a UDL and a WAN optimized link.
Client type The user's browser type.
Client version The version of the Internet browser used to execute the operation.
Client VPN The name of the VPN in which the user registered.
Client WINS name The client's computer name as resolved by a WINS server.
Conference call id The identifier of the VoIP conference call.
Data source The name of the data source, in case you have configured a number of associated report servers to be used as data sources on the DMI screen.
Day of the week The textual representation of the day of the week.
Grouping attributes 1 One of the grouping attributes retrieved from an HTTP request or response, specific to a particular client. Requires configuration on the AMD.
Grouping attributes 2 One of the grouping attributes retrieved from an HTTP request or response, specific to a particular client. Requires configuration on the AMD.
Grouping attributes 3 One of the grouping attributes retrieved from an HTTP request or response, specific to a particular client. Requires configuration on the AMD.
Hour of the day The numerical representation of the hour of the day, that is, numbers from 0 to 23.
Is duplicated? Determines whether the entity is duplicated.
Is front-end tier? Indicates whether a given tier is a front-end tier for the selected application.
Is operation name cut? Indicates whether the presented operation name was truncated by the AMD. Possible values: Yes, No.
Link alias A custom name created by a user for a selected link.

Link group An element of the link hierarchy tree. May contain separate links or other link groups.
Link group level The hierarchy level on which a link group resides. The dimension differentiates only between two states: a link or group can either be on Level 1 or on any level other than Level 1.
Link monitor The link information source (Network Monitoring Probe, Flow Collector, AMD).
Link name A link name, as reported by the information source (Network Monitoring Probe, Flow Collector, AMD).
Link traffic direction For traffic monitored with NetFlow or NV WAN probes, this dimension indicates whether a client is on the far or near end from the point of view of a given interface. If, for a specific interface, client bytes are incoming bytes, the direction bit is cleared (the dimension value equals 0) and the client is on the far end from the point of view of this interface. If client bytes are outgoing bytes, the direction bit is set (the dimension value equals 1) and the client is on the near end from the point of view of this interface.
Link type The type of a monitored link, for example Ethernet or Frame Relay.
Miscellaneous parameter 1 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on the AMD.
Miscellaneous parameter 2 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on the AMD.
Miscellaneous parameter 3 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on the AMD.
Miscellaneous parameter 4 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on the AMD.
Miscellaneous parameter 5 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on the AMD.
Miscellaneous parameter 6 One of the miscellaneous parameters retrieved from a base HTTP request or response.
Requires configuration on the AMD.
Module Module is the third level in the reporting hierarchy. For example, in database monitoring this is the database name, and in SOAP monitoring this is the SOAP service. This entity can be broken down into smaller units such as tasks.
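The Link traffic direction convention above (direction bit cleared for client bytes incoming, set for client bytes outgoing) can be stated compactly in code. This is an illustrative reading of the dimension, with a hypothetical function name, not a DC RUM API.

```python
def client_end(direction_bit):
    """Interpret the Link traffic direction bit for a given interface."""
    if direction_bit == 0:
        return "far end"   # client bytes are incoming on this interface
    if direction_bit == 1:
        return "near end"  # client bytes are outgoing on this interface
    raise ValueError("direction bit must be 0 or 1")

print(client_end(0))  # far end
print(client_end(1))  # near end
```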

Network tier site selector Displays only those client sites that are assigned to the selected network tier.
Operation For HTTP, this is the URL of the base page to which the hit belongs. For other analyzers this can be a query, an operation type, or an operation status. The operation is ascertained by the AMD based on the referrer, timing relations between hits, and per-transaction monitoring configured on the AMD. This dimension can assume the values of a particular operation, if that operation is monitored.
Note: The visibility of this dimension on reports depends on whether another dimension related to servers, such as server IP or server DNS, has been used when formulating the query.
The All other operations record serves as a catch-all net for all the traffic that has been seen to and from a server but was not classified as belonging to a specific monitored-by-name operation. It accounts for statistics of: operations that were not reported in per-operation records (for example, those that fall outside the top-n reported operations for a specific analyzer), in which case the number of operations and slow operations, as well as operation time and other transactional statistics, are reported as an aggregate/average; and traffic that was not classified to any operation (for example, idle TCP session closure, or a TCP handshake without any operation), in which case only volumetric statistics (bytes, packets) are reported for this traffic.
Operation (incl. whole) For HTTP, this is the URL of the base page to which the hit belongs. For other analyzers this can be a query, an operation type, or an operation status. The operation is ascertained by the AMD based on the referrer, timing relations between hits, and per-transaction monitoring configured on the AMD. This dimension can assume the values of a particular operation, if that operation is monitored.
Note: The visibility of this dimension on reports depends on whether another dimension related to servers, such as server IP or server DNS, has been used when formulating the query. Compare "Operation".
Operation (not aliased) The original operation name as seen in the traffic.
Physical link name The first of two segments in a link name.
Process ID The identifier of a process running on a Citrix server on which Cerner applications are running.
Protocol The IP protocol name.
Reporting group A universal container that can accommodate software services, servers, URLs, or any combination of these. Reporting groups can contain software services of every type. The Advanced Diagnostics Server can import the reporting group configuration from the Central Analysis Server.
Request method The HTTP request type: GET or POST.

SAP operation order The sequence number of a step, used to set the relation between operations and tasks. It is unique only within the scope of a given task.
Script name The name of the simple parser script.
Server aggregated name The server aggregated name.
Server area Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and optionally on clients' BGP Autonomous System names, CIDR blocks, or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Server city Geographical data about the server site.
Server country Geographical data about the server site.
Server geographical region Geographical data about the server site.
Server IP address The IP address of the server.
Server name The name of the server as resolved by a DNS server.
Server port The TCP port number on a server that hosts a software service.
Server region Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Server site Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' Autonomous System names. Sites are the smallest logical structures, comprising clients and servers. Areas are composed of sites, and regions are composed of areas.
Server site description The optional description of the server site.
Server site ID When sites are ASes, Server site ID contains the AS number, which is also given in Server ASN. For manual sites, Server site ID is identical to Server site and contains the site name as defined in your site configuration.
Sites based on CIDR blocks or subnets are identified by IP addresses.
Server site WAN Optimized Link Indicates whether the site to which the server belongs is selected as both a UDL and a WAN optimized link.

Service Service is the highest level of the multi-level reporting hierarchy. For example, in SAP GUI monitoring this is the business process. This entity can be broken down into smaller units such as modules.
Software service The software service name, where by a software service we understand a service implemented by a specific piece of software, offered on a TCP or UDP port of one or more servers and identified by a particular TCP port number.
Software service type The type of the software service: autodiscovered or user-defined.
Storage source The storage source.
Task Task is the second level in the reporting hierarchy. For example, in HTTP monitoring this is the page name; in database monitoring this is the operation name (which may contain a regular expression if configured on the AMD) or operation type prefix; and in SOAP monitoring this is the SOAP method. This entity can be broken down into smaller units such as operations or operation types.
Tier A specific point of the application where data is measured. It can be a specific traffic type or a server.
Tier sequence number The sequence number of a tier is determined by the order in which you define your tiers; these numbers in turn determine the order in which data is displayed on the report.
Tier type The type of a tier can be one of the following: client, network, or data center.
Time The time stamp of the data presented on the report.
TOS-binary A traffic identifier contained in an 8-bit field in the IP packet header. The contents of this field can be detected by the AMD and displayed in reports. The use of this field is software service specific; it is used by software services to denote special types of traffic.
TOS-decimal A traffic identifier contained in an 8-bit field in the IP packet header. The contents of this field can be detected by the AMD and displayed in reports.
The use of this field is software service specific: it is used by software services to denote special types of traffic.
Traffic type The type of client traffic: real or synthetic, that is, generated by a synthetic agent.
Transaction A universal container that can accommodate operations. This dimension refers only to transactions without errors.

Transaction step The step as configured in a transaction definition. Step configuration is built on DC RUM data using operations, tasks, modules, or services. Steps are contained within transactions and carry the entire transaction configuration.
Transaction step sequence number The sequence number of a step is used for presentation purposes. It marks the order of a particular step in a transaction configuration. You can order steps within each transaction if such an ordering makes sense for the overall monitored application paradigm. The transaction step sequence does not affect data aggregation.
URL host The domain name, including the port number (if it is reported by the AMD).
URL path The request path that points to a Web resource, including the first forward slash.
URL protocol The protocol that is used to locate the resource (HTTP or HTTPS).
User name The client's name, determined from an HTTP cookie (requires configuration on the AMD), the HTTP authentication header, or a static mapping.
VC link name The second of two segments in a link name.

Software service, operation, and site data metrics

% of bad delay calls The percentage of VoIP calls with delay above the acceptable level.
% of bad jitter calls The percentage of VoIP calls with jitter exceeding the acceptable level.
% of bad lost packets calls The percentage of VoIP calls with a packet loss rate above the acceptable level.
% of bad MOS calls The percentage of VoIP calls with a Mean Opinion Score (MOS) rating below the acceptable threshold.
% of bad R-factor calls The percentage of VoIP calls with an R-factor value below the acceptable value.
Aborted page Application Delivery Channel Delay The Application Delivery Channel Delay, in tenths of milliseconds, for an aborted page.
Aborted page http server time The time difference between the response and the request for an aborted page.
Aborted page image server time The average combined image server time (HTTP) for an aborted page.
Aborted page redirect time: The average redirect time per operation (this includes operations with no redirects).
Aborted page request size: The average aborted client request size (GET or POST).

Aborted page request time: The average time from the first client SYN packet to the last request packet for an aborted operation.
Aborted page server delay: The time spent on the server during one operation.
Aborted page size in bytes: The average aborted operation size in bytes.
Aborted page SSL setup time: The average SSL setup time per operation.
Aborted page TCP connect time: The average TCP connect time for an aborted operation, counted as an average of TCP connection times. A single TCP connection time is counted as the time difference between the first client SYN and the first client ACK.
Aborted page transfer time: The average time it took the server to send a response to the client, averaged over all the aborted operations in the monitoring interval.
Aborts: The number of operations aborted by the client. It applies to all TCP-based protocols. For example, for HTTP/HTTPS, it is the number of operations manually stopped by the user, either by clicking the Stop or Refresh button or by selecting another URL. Note that, in the case of HTTP, this number includes Short aborts and Long aborts.
Activity time: The amount of time the server or client has been active, that is, has transmitted any traffic. The time resolution is equal to the length of the monitoring interval.
Affected users (availability): The number of unique users that were affected by availability problems.
Affected users (availability) breakdown: A breakdown of users into how many were affected by availability problems and how many were not.
Affected users (network): The number of unique users that experienced network performance problems.
Affected users (network) breakdown: A breakdown of users into how many were affected by network performance problems and how many were not.
Affected users (performance): The number of users that experienced application performance problems.
For transactional protocols, a problem is noted if at least one operation completed in a time longer than the performance threshold. For transactionless TCP-based protocols, a problem is noted if the user wait time per kilobyte of data is longer than the threshold value.
Affected users (performance) breakdown: A breakdown of users into how many were affected by application performance problems and how many were not.
Aggregate data center time: The sum, over all servers, of the products of server time multiplied by the number of transactions.
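As an illustration (not product code), the "Aggregate data center time" computation described above can be sketched as a sum of products; the function name and sample values are hypothetical:

```python
# Hypothetical sketch: aggregate data center time as the sum of
# (server time x number of transactions) over all servers.
def aggregate_data_center_time(samples):
    """samples: iterable of (avg_server_time_ms, transaction_count) pairs."""
    return sum(server_time * transactions for server_time, transactions in samples)

# Two servers: 120 ms average over 50 transactions, 80 ms average over 25.
total_ms = aggregate_data_center_time([(120, 50), (80, 25)])
print(total_ms)  # 120*50 + 80*25 = 8000
```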

Application Delivery Channel Delay: In a WAN-optimized scenario, Application Delivery Channel Delay (ADCD) is a quality metric expressed in milliseconds. The ADCD is determined by initial observation of the traffic between a client and a server. ADCD is a derivative of RTT measured on a WAN link, expressed in time, and as such it can be understood as latency: a larger ADCD indicates a higher network latency. ADCD also includes time spent in the data center WOC for traffic buffering and processing. A change of ADCD from its initial value reflects a change in the quality of the WAN optimization service. For example, a sudden increase of ADCD would suggest that the quality of the service has worsened; conversely, a sudden decrease of the ADCD value could suggest an improvement in WAN optimization.
Application Delivery Channel Delay (range 1): The number of operations whose ADCD value is within range 1 as defined in the RUM Console.
Application Delivery Channel Delay (range 2): The number of operations whose ADCD value is within range 2 as defined in the RUM Console.
Application Delivery Channel Delay (range 3): The number of operations whose ADCD value is within range 3 as defined in the RUM Console.
Application Delivery Channel Delay (range 4): The number of operations whose ADCD value is within range 4 as defined in the RUM Console.
Application Delivery Channel Delay (threshold 1 [max]): The highest value that was set for the upper bound of Application Delivery Channel Delay (range 1) during the reporting period.
Application Delivery Channel Delay (threshold 1 [min]): The lowest value that was set for the upper bound of Application Delivery Channel Delay (range 1) during the reporting period.
Application Delivery Channel Delay (threshold 2 [max]): The highest value that was set for the upper bound of Application Delivery Channel Delay (range 2) during the reporting period.
Application Delivery Channel Delay (threshold 2 [min]): The lowest value that was set for the upper bound of Application Delivery Channel Delay (range 2) during the reporting period.
Application Delivery Channel Delay (threshold 3 [max]): The highest value that was set for the upper bound of Application Delivery Channel Delay (range 3) during the reporting period.
Application Delivery Channel Delay (threshold 3 [min]): The lowest value that was set for the upper bound of Application Delivery Channel Delay (range 3) during the reporting period.
Application health index: The percentage of fast operations, calculated as "Fast operations / (Failures + Operations) * 100%".
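The "Application health index" formula above can be sketched in a few lines; the function name and the sample counts are illustrative only:

```python
# Illustrative sketch of the Application health index:
# Fast operations / (Failures + Operations) * 100%.
def application_health_index(fast_operations, operations, failures):
    """Return the health index as a percentage; 0.0 if there is no traffic."""
    total = failures + operations
    return 100.0 * fast_operations / total if total else 0.0

# 80 fast operations out of 90 operations plus 10 failures -> 80%.
print(application_health_index(fast_operations=80, operations=90, failures=10))
```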

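The range metrics above (ADCD ranges 1 through 4, bounded by three configurable thresholds) amount to bucketing each operation's value against sorted upper bounds. A minimal sketch, assuming range 1 includes values up to and including threshold 1 (the product's exact boundary handling is not stated here), with made-up threshold values:

```python
import bisect

# Illustrative bucketing of operation values into ranges 1-4 using three
# sorted upper-bound thresholds, as the RUM Console range metrics describe.
def bucket_counts(values_ms, thresholds):
    """thresholds: sorted upper bounds [t1, t2, t3]; returns counts for ranges 1-4."""
    counts = [0, 0, 0, 0]
    for value in values_ms:
        # bisect_left places a value equal to a threshold into the lower range.
        counts[bisect.bisect_left(thresholds, value)] += 1
    return counts

# Four operations and hypothetical thresholds of 10, 20, and 50 ms.
print(bucket_counts([5, 15, 25, 100], [10, 20, 50]))  # [1, 1, 1, 1]
```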
Application Monitoring Flags: One of 0, 1, or 2, meaning: 0 - no Application Monitoring servers, 1 - one Application Monitoring server, 2 - several Application Monitoring servers.
Application Monitoring Hits: The number of Application Monitoring hits.
Application Monitoring Operations: The number of Application Monitoring operations.
Application Monitoring Server Errors: The number of Application Monitoring server errors.
Application performance: For transactional protocols, this is the percentage of software service operations completed in a time shorter than the performance threshold. For SMTP and transactionless TCP-based protocols, this is the percentage of monitoring intervals in which the user wait time per kilobyte of data was shorter than the threshold value.
Attempts: The number of monitoring intervals during which attempts were made to connect to a server. Note that this is counted separately for each server, client, and software service. Thus, if in a given monitoring interval there are attempts to connect to three different servers, the Attempts metric is incremented by three for that one monitoring interval. The actual value shown on the report is the sum of all the attempts, for all the monitoring intervals, in the period covered by the report.
Auto-discovery flags: If a software service contains any autodiscovered traffic, this metric displays the green tick sign.
Availability (application): Availability limited to the application context, calculated using the following formula: Availability (application) = 100% * (All Attempts - Failures (application)) / All Attempts, where All Attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure.
Availability (TCP): Availability limited to the network context, calculated using the following formula: Availability (TCP) = 100% * (All Attempts - Failures (TCP)) / All Attempts, where All Attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure.
Availability (total): The percentage of successful attempts, calculated using the following formula: Availability (total) = 100% * (All Attempts - All Failures) / All Attempts, where All Attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure, and All Failures = all Failures (transport) + all Failures (TCP) + all Failures (application).

Availability (transport): Availability limited to the transport context, calculated using the following formula: Availability (transport) = 100% * (All Attempts - Failures (transport)) / All Attempts, where All Attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure.
Bad delay calls: The number of VoIP calls with delay above the acceptable level.
Bad jitter calls: The number of VoIP calls with jitter exceeding the acceptable level.
Bad lost packets calls: The number of VoIP calls with a loss rate above the acceptable level.
Bad MOS calls: The number of VoIP calls with a Mean Opinion Score (MOS) rating below the acceptable threshold.
Bad R-factor calls: The number of VoIP calls with an R-factor value below the acceptable value.
Call attempts: The number of call attempts, including successful and failed ones.
Call duration: The time the VoIP call took.
Calls: The total number of VoIP calls. Note that for a selected software service, the number of calls as seen from the sites' perspective may differ from the number seen from the endpoints' perspective. This is because in one site we may have two users taking part in the same call.
Calls finished with termination error: The number of calls that finished with a termination error.
Calls not started due to remote peer: The number of calls that could not start due to a remote peer.
Calls with error during begin phase: The number of calls affected by errors occurring during the begin phase.
Client ACK RTT: The time it takes for an ACK packet with no payload to travel from the user to the AMD and back again.
Client ACK RTT measurements: This metric keeps track of how many Client ACK RTT measurements were made. An ACK measurement is performed during ACK packet transmission on either the server or the client side of the transaction.
Client bandwidth usage: The number of client bits per second.
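The four availability formulas above (application, TCP, transport, total) share the same shape, differing only in which failure count is subtracted. A minimal sketch, with illustrative numbers:

```python
# Illustrative sketch: all availability variants are
# 100% * (All Attempts - Failures(context)) / All Attempts.
def availability(all_attempts, failures_in_context):
    """Return availability as a percentage; 0.0 if there were no attempts."""
    if all_attempts == 0:
        return 0.0
    return 100.0 * (all_attempts - failures_in_context) / all_attempts

# All Attempts = failures + successes + unclassified standalone hits + unclassified aborts.
all_attempts = 5 + 90 + 3 + 2  # hypothetical counts, totalling 100
print(availability(all_attempts, failures_in_context=5))  # 95.0
```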
Client bytes: The number of bytes sent by the clients. Note that this includes headers.

Client IP addresses (type): This metric indicates whether the reported observations are associated with one or more IP addresses. It assumes the value "multiple" or an IP address. In DMI queries, such metrics should be used in conjunction with a client name or IP address; otherwise this metric will most likely always have the value "multiple".
Client loss rate: The percentage of total packets sent by a client that were lost (due to network congestion, low router queue capacity, or other reasons) and needed to be retransmitted.
Client loss rate (AMD to server): The percentage of total packets sent by a client that were lost between the AMD and the server and needed to be retransmitted.
Client loss rate (client to AMD): The percentage of total packets sent by a client that were lost between the client and the AMD and needed to be retransmitted.
Client not responding errors: The number of errors of category Client not responding. Errors of this category occur when the server closes the TCP session with a RESET packet after the client has been idle for too long. Such a situation happens when the server TCP/IP stack detects that a network connection to the client exists, but the client remains idle and does not respond; in that case, the server closes the TCP session with a RESET packet. This may occur when the client has been silently disconnected from the network (for example, due to link failure) or the client has crashed. Note that this error will not occur if the client session has ended gracefully, that is, by closing the client application.
Client operations: The number of operations (for HTTP/SSL this is equivalent to the number of pages; for DB/2 it is equivalent to the number of queries) from the client side. For traffic analyzed with the General-volume and ICA (Citrix) analyzers, this is the number of client data transfers for which network realized bandwidth was measured.
Client operation size: The size of a client operation. Note: an operation can be split over several packets. For traffic parsed with the HTTP and SSL-decrypted analyzers, Client operation size is the size in bytes of the operation request (HTTP GET or POST).
Client packets: The number of packets sent by the clients.
Client packets/sec: The number of packets per second sent by the clients.
Client packet size: The average size of the client-originating packets (in bytes), including headers.
Client packets lost (AMD to server): The number of packets sent by a client that were lost between the AMD and the server and needed to be retransmitted.
Client packets lost (client to AMD): The number of packets sent by a client that were lost between the client and the AMD and needed to be retransmitted.

Client realized bandwidth: Client realized bandwidth refers to the actual transfer rate of client data when the transfer attempt occurred, and takes into account factors such as loss rate (retransmissions). Thus, it is the size of an actual transfer divided by the transfer time.
Client RTT: Client RTT is the time it takes for a SYN ACK packet (sent by a server) to travel from the AMD to the client and back again, as shown in the following figure.
[Figure: Client RTT measurement timeline between the client, the AMD, and the server, marking timestamps T1-T9 for the SYN, SYN ACK, and ACK packets; Client RTT spans T5 to T8.]
A client RTT measurement begins when the SYN ACK packet from the server to the client passes by the AMD (T5). The packet reaches the client machine (T6) and is processed, while an acknowledgment is sent back to the server (T7). The client processing time impact (T7-T6) is again very low. The client RTT measurement ends when the ACK packet reaches the AMD (T8). Therefore, the Client Round Trip Time is calculated as T8-T5. Depending on the actual setup, Client RTT measurements may vary dramatically. In corporate environments, it may be a few milliseconds for LAN-connected clients or a couple dozen milliseconds for WAN-connected clients. Where the client is coming from the Internet, the end-to-end Client RTT measurement is a compound of the transit time through the Internet backbone as well as through the "last mile" access network. The impact of the last mile can be easily calculated, based on the connection speed and the packet size (56 bytes in the case of a TCP SYN packet). For a 28 kbps dial-up connection, this amounts to 16 milliseconds one way, or 32 milliseconds for a complete round-trip measurement. For a 1.6 Mbps DSL line, this contributes about 0.56 milliseconds to the complete round-trip measurement.
Client RTT (range 1): The number of operations whose client RTT value is within range 1 as defined in the RUM Console.
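The last-mile arithmetic above can be reproduced in a few lines. The 56-byte SYN size and the link speeds come from the text; the function itself is an illustrative sketch (serialization delay only, ignoring propagation and queuing):

```python
# Illustrative last-mile contribution to Client RTT: serialization delay
# of one packet in each direction, expressed in milliseconds.
def last_mile_round_trip_ms(packet_bytes, link_bps):
    one_way_seconds = packet_bytes * 8 / link_bps
    return 2 * one_way_seconds * 1000.0

# 56-byte TCP SYN over a 28 kbps dial-up link: 16 ms one way, 32 ms round trip.
print(last_mile_round_trip_ms(56, 28_000))
# The same packet over a 1.6 Mbps DSL line: about 0.56 ms round trip.
print(last_mile_round_trip_ms(56, 1_600_000))
```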
Client RTT (range 2): The number of operations whose client RTT value is within range 2 as defined in the RUM Console.
Client RTT (range 3): The number of operations whose client RTT value is within range 3 as defined in the RUM Console.
Client RTT (range 4): The number of operations whose client RTT value is within range 4 as defined in the RUM Console.
Client RTT (threshold 1 [max]): The highest value that was set for the upper bound of Client RTT (range 1) during the reporting period.

Client RTT (threshold 1 [min]): The lowest value that was set for the upper bound of Client RTT (range 1) during the reporting period.
Client RTT (threshold 2 [max]): The highest value that was set for the upper bound of Client RTT (range 2) during the reporting period.
Client RTT (threshold 2 [min]): The lowest value that was set for the upper bound of Client RTT (range 2) during the reporting period.
Client RTT (threshold 3 [max]): The highest value that was set for the upper bound of Client RTT (range 3) during the reporting period.
Client RTT (threshold 3 [min]): The lowest value that was set for the upper bound of Client RTT (range 3) during the reporting period.
Client TCP data packets: The total number of TCP packets sent by the clients, excluding traffic control packets.
Client TCP data packets lost: The number of lost TCP data packets sent by the clients, excluding traffic control packets. The number of lost TCP packets always regards the context of the counter, for example, an application, a server, or any other entity.
Closed TCP connections: The total number of successful or failed TCP connections.
Connection establishment timeout errors: The number of TCP errors of category Connection establishment timeout. This category of errors applies when there was no response from the server to the SYN packets transmitted by the client.
Connection refused errors: The number of TCP errors of category Connection refused, also referred to as Session establishment errors. This category of errors applies when a server rejects a request from a client to open a TCP session. Such a situation usually happens when the server runs out of resources, either due to operating system kernel configuration or lack of memory.
Custom metric (1)(avg): The average value of user-defined metrics in category 1 observed in the HTTP or XML traffic.
Custom metric (1)(cnt): The number of occurrences of user-defined metrics in category 1 observed in the HTTP or XML traffic.
Custom metric (1)(sum): The sum of all values of user-defined metrics in category 1 observed in the HTTP or XML traffic.

Custom metric (2)(avg): The average value of user-defined metrics in category 2 observed in the HTTP or XML traffic.
Custom metric (2)(cnt): The number of occurrences of user-defined metrics in category 2 observed in the HTTP or XML traffic.
Custom metric (2)(sum): The sum of all values of user-defined metrics in category 2 observed in the HTTP or XML traffic.
Custom metric (3)(avg): The average value of user-defined metrics in category 3 observed in the HTTP or XML traffic.
Custom metric (3)(cnt): The number of occurrences of user-defined metrics in category 3 observed in the HTTP or XML traffic.
Custom metric (3)(sum): The sum of all values of user-defined metrics in category 3 observed in the HTTP or XML traffic.
Custom metric (4)(avg): The average value of user-defined metrics in category 4 observed in the HTTP or XML traffic.
Custom metric (4)(cnt): The number of occurrences of user-defined metrics in category 4 observed in the HTTP or XML traffic.
Custom metric (4)(sum): The sum of all values of user-defined metrics in category 4 observed in the HTTP or XML traffic.
Custom metric (5)(avg): The average value of user-defined metrics in category 5 observed in the HTTP or XML traffic.
Custom metric (5)(cnt): The number of occurrences of user-defined metrics in category 5 observed in the HTTP or XML traffic.
Custom metric (5)(sum): The sum of all values of user-defined metrics in category 5 observed in the HTTP or XML traffic.
Database errors: The number of database errors in the database analyzer. For TDS, which includes Sybase and MS SQL Server, any value from the following table is considered an error. For MySQL, if an ERR_Packet is returned, the error count is incremented. An error with a severity level of 19 or higher stops the execution of the current SQL batch, and the error message is written to the error log.

Errors that can be corrected by the user:
11: The given object or entity does not exist.
12: SQL statements that do not use locking because of special options. In some cases, read operations performed by these SQL statements could result in inconsistent data, because locks do not guarantee consistency.
13: Transaction deadlock errors.
14: Security-related errors, such as permission denied.
15: Syntax errors in the SQL statement.
16: General errors that can be corrected by the user.
Software errors that cannot be corrected by the user and that require system administrator action:
17: The SQL statement caused the database server to run out of resources (such as memory, locks, or disk space for the database) or to exceed some limit set by the system administrator.
18: There is a problem in the database engine software, but the SQL statement completes execution, and the connection to the instance of the database engine is maintained. System administrator action is required.
19: A non-configurable database engine limit has been exceeded and the current SQL batch has been terminated.
System problems:
20-25: Fatal errors, meaning that the database engine task that was executing a SQL batch is no longer running. The task records information about what occurred and then terminates. In most cases, the application connection to the instance of the database engine also terminates. If this happens, depending on the problem, the application might not be able to reconnect.
Database warnings: The number of database warnings in the database analyzer. For TDS, which includes Sybase and MS SQL Server, this count will always be zero; TDS does not track anything as a warning. For MySQL, if an OK_Packet is returned, the warning count value in that packet is checked and the total warning field is updated with the returned number.
Data samples: The number of lines in the traffic performance data packages received from the AMDs.
When clients are aggregated into so-called aggregation blocks, this is the number of software service-server-site triplets. This metric is not calculated in PVU mode.
Delay: Data transfer delay on a data center device, such as a load balancer or firewall.
Discarded aborts: The number of aborted hits without a valid server response.
DNS errors: The number of DNS errors.
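The TDS severity-level table above maps naturally onto a small classifier. This sketch is illustrative only: the function name and category labels are paraphrased from the text, not product identifiers:

```python
# Illustrative classifier for TDS severity levels, mirroring the
# three groups in the table above.
def classify_tds_severity(level):
    if 11 <= level <= 16:
        return "user-correctable error"
    if 17 <= level <= 19:
        return "software error (administrator action required)"
    if 20 <= level <= 25:
        return "fatal system problem"
    return "not counted as an error"

print(classify_tds_severity(15))  # user-correctable error (syntax error)
print(classify_tds_severity(22))  # fatal system problem
```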

DNS format errors: The number of DNS format errors (DNS error code 1).
DNS name errors: The number of DNS name errors (DNS error code 3).
DNS not implemented errors: The number of DNS not-implemented errors (DNS error code 4).
DNS other errors: The number of DNS errors with error codes from 6 to 15, that is, errors that do not fall into any of the following categories: DNS format errors (error code 1), DNS server failure errors (error code 2), DNS name errors (error code 3), DNS not implemented errors (error code 4), DNS refused errors (error code 5), DNS timeouts.
DNS refused errors: The number of DNS refused errors (DNS error code 5).
DNS server failure errors: The number of DNS server failure errors (DNS error code 2).
DNS timeouts: The number of DNS timeout errors.
Endpoints A: If voice data is provided by the Network Monitoring Probe, this is the total number of callers detected during a given monitoring interval.
Endpoints B: If voice data is provided by the Network Monitoring Probe, this is the total number of recipients detected during a given monitoring interval.
End-to-end ACK RTT: The time it takes for an ACK packet to travel from a client to the monitored server and back again.
End-to-end RTT: The time it takes for a SYN packet to travel from the client to a monitored server and back again.
Error indicator: The number of error indicators.
Excluded operations: The number of operations for which the operation time was above a safety threshold. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.
Failures (application): The number of operation attributes of all types set to be reported as an application failure.
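The DNS error categories above correspond to standard DNS response codes (RCODEs). An illustrative mapping, with names paraphrased from the entries rather than taken from the product:

```python
# Illustrative mapping of DNS response codes to the error categories above.
DNS_ERROR_CATEGORIES = {
    1: "DNS format error",
    2: "DNS server failure",
    3: "DNS name error",
    4: "DNS not implemented",
    5: "DNS refused",
}

def dns_error_category(rcode):
    if rcode == 0:
        return "no error"
    if rcode in DNS_ERROR_CATEGORIES:
        return DNS_ERROR_CATEGORIES[rcode]
    if 6 <= rcode <= 15:
        return "DNS other error"
    return "unknown"

print(dns_error_category(3))  # DNS name error
print(dns_error_category(9))  # DNS other error
```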
Failures (TCP): The total number of operations that failed due to Connection refused or Connection establishment timeout errors.

Failures (total): The total number of failures, that is, all Failures (transport) + all Failures (TCP) + all Failures (application).
Failures (transport): The number of operations that failed due to problems in the transport layer. These include protocol errors, SSL alerts classified as a failure, and incomplete responses selected to be classified as failures.
Fast operations: The number of operations for which the operation time was below a predefined threshold value. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.
Fatal error: The number of fatal errors.
Forms application error: The number of Oracle Forms application errors that occurred during a given operation. For a complete list of specific Forms application errors, refer to the RUM Console Help.
Forms client error: The number of Oracle Forms client errors that occurred during a given operation. For a complete list of specific Forms client errors, refer to the RUM Console Help.
Forms server error: The number of Oracle Forms server errors that occurred during a given operation. For a complete list of specific Forms server errors, refer to the RUM Console Help.
Hits: The number of subcomponents of error-free operations. Note that this metric is recorded at the time when the monitored operations are closed; in the case of HTTP, it is when the whole page has been loaded. Compare "Hits (started)". For example, when the user issues an HTTP GET, a "Hit (started)" is reported immediately, whereas when the whole page is loaded and the operation is closed, it is reported as a "Hit".
Hits (range 1): The number of operations whose hit count is within range 1 as defined in the RUM Console.
Hits (range 2): The number of operations whose hit count is within range 2 as defined in the RUM Console.
Hits (range 3): The number of operations whose hit count is within range 3 as defined in the RUM Console.
Hits (range 4): The number of operations whose hit count is within range 4 as defined in the RUM Console.
Hits (started): The number of subcomponents of operations. Unlike the "Hits" metric, "Hits (started)" is recorded immediately, not at the end of an operation. For example, when the user issues an HTTP GET, a "Hit (started)" is reported immediately, whereas when the whole page is loaded and the operation is closed, it is reported as a "Hit".
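The timing difference between "Hits (started)" and "Hits" can be shown with a toy counter; the class and method names are ours, purely for illustration:

```python
# Toy counter: "Hits (started)" is incremented as soon as a request is seen,
# while "Hits" counts subcomponents only when the operation closes error-free.
class HitCounters:
    def __init__(self):
        self.hits_started = 0
        self.hits = 0

    def on_request(self):
        """For example, an HTTP GET is observed on the wire."""
        self.hits_started += 1

    def on_operation_closed(self, subcomponents, error=False):
        """The whole page has loaded; count its hits only if error-free."""
        if not error:
            self.hits += subcomponents

counters = HitCounters()
for _ in range(3):
    counters.on_request()          # three GETs reported immediately
counters.on_operation_closed(subcomponents=3)  # page closes without error
print(counters.hits_started, counters.hits)    # 3 3
```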

Hits (succeeded): The number of hits that did not generate client or server errors (replies in the 2xx and 3xx ranges).
Hits (threshold 1 [max]): The highest value that was set for the upper bound of Hits (range 1) during the reporting period.
Hits (threshold 1 [min]): The lowest value that was set for the upper bound of Hits (range 1) during the reporting period.
Hits (threshold 2 [max]): The highest value that was set for the upper bound of Hits (range 2) during the reporting period.
Hits (threshold 2 [min]): The lowest value that was set for the upper bound of Hits (range 2) during the reporting period.
Hits (threshold 3 [max]): The highest value that was set for the upper bound of Hits (range 3) during the reporting period.
Hits (threshold 3 [min]): The lowest value that was set for the upper bound of Hits (range 3) during the reporting period.
Hits per operation: The number of hits per operation.
HTTP client errors (4xx): The sum of all HTTP client errors (4xx). This includes four categories of errors: by default, HTTP Unauthorized (401, 407) errors, HTTP Not Found (404) errors, custom client (4xx) errors, and Other HTTP (4xx) errors. The contents of the first three categories can be configured by users. However, two types of 4xx errors are of particular importance: 401 errors, related to server-level authentication, and 404 errors, indicating requests for non-existent content. These two error types are reported separately, by specific metrics.
401 Unauthorized - The server reports this error when the user's credentials supplied with the request do not satisfy page access restrictions. The HTTP server layer, not the application layer, reports 401 errors. The AMD will report "Unauthorized" errors only if server-level authentication has been configured. This is common practice for sites that are comfortable with very basic user access policies.
Most commercial-grade applications do not rely on server-level authentication (for example, most online banking or online shopping applications), but rather authenticate users on the application layer. In such a case, even if authentication fails, the server will typically send 200 OK responses and the authentication error information will be explained in the page content. So this kind of error is not very common on commercial sites.
404 Not Found - The server reports "Not Found" errors when it cannot fulfill a client request for a resource. Usually this happens due to a malformed URL, which directs to a non-existing page or image. Such a URL request may result from a user who

misspelled the URL, tried to access a URL stored in a "Favorites" folder a long time ago, or made some other mistake. Malformed URLs may also exist in invalid or incorrectly designed Web pages, so the error will be reported by browsers trying to load such a page. A significant and constant number of these errors usually indicates that some pages on the server have design-related or link validation issues. In some cases, 404 errors result from server overload. It is good practice to check whether the percentage of errors is load-related.
HTTP client errors - category 3 (default name): The number of HTTP custom client errors (4xx). By default, there is no specific error type assigned here.
HTTP errors: The number of observed HTTP client errors (4xx) and server errors (5xx).
HTTP not found errors 404 (default name): The number of observed custom HTTP 404 Not Found errors.
HTTP other client errors (4xx): The number of HTTP other client errors (4xx). There are four categories of HTTP client errors (4xx), of which three can be configured by users. By default, the first category includes HTTP Unauthorized (401, 407) errors, and the second category HTTP Not Found (404) errors. The third category contains no default error types and can be configured by a user. Finally, a group of HTTP Other (4xx) errors contains all errors that do not fall into any other client error category. The number is calculated based on the formula: [HTTP errors 4xx] - [HTTP Not Found errors 404] - [HTTP Not Authorized (401, 407)] - [HTTP errors configured by user].
HTTP other server errors (5xx): The number of HTTP server errors (5xx) that do not fall into categories 1 or 2 of custom HTTP server errors (5xx).
HTTP redirect time: The average amount of time spent between the time when a user went to a particular URL and the time this user was redirected to another URL and issued a request to that new URL.
The HTTP redirect time refers only to the transactions for which redirection actually took place.
HTTP response time (range 1): The number of operations whose HTTP response time is within range 1 as defined in the RUM Console.
HTTP response time (range 2): The number of operations whose HTTP response time is within range 2 as defined in the RUM Console.
HTTP response time (range 3): The number of operations whose HTTP response time is within range 3 as defined in the RUM Console.
HTTP response time (range 4): The number of operations whose HTTP response time is within range 4 as defined in the RUM Console.
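The subtraction formula given above for "HTTP other client errors (4xx)" can be sketched directly; the function name and counts are illustrative:

```python
# Illustrative sketch of the Other (4xx) formula:
# [all 4xx] - [404 Not Found] - [401/407 Unauthorized] - [user-configured 4xx].
def http_other_client_errors(total_4xx, not_found_404, unauthorized_401_407,
                             user_configured_4xx):
    return (total_4xx - not_found_404 - unauthorized_401_407
            - user_configured_4xx)

# 100 total 4xx errors: 40 are 404s, 25 are 401/407s, 10 are user-configured.
print(http_other_client_errors(100, 40, 25, 10))  # 25
```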

HTTP response time (threshold 1 [max]): The highest value that was set for the upper bound of HTTP response time (range 1) during the reporting period.
HTTP response time (threshold 1 [min]): The lowest value that was set for the upper bound of HTTP response time (range 1) during the reporting period.
HTTP response time (threshold 2 [max]): The highest value that was set for the upper bound of HTTP response time (range 2) during the reporting period.
HTTP response time (threshold 2 [min]): The lowest value that was set for the upper bound of HTTP response time (range 2) during the reporting period.
HTTP response time (threshold 3 [max]): The highest value that was set for the upper bound of HTTP response time (range 3) during the reporting period.
HTTP response time (threshold 3 [min]): The lowest value that was set for the upper bound of HTTP response time (range 3) during the reporting period.
HTTP server errors (5xx): The number of observed HTTP server errors (5xx). The 5xx response status codes indicate cases in which the Web server is aware that there was a server error or is incapable of performing the request. The presence of such errors usually means that the Web server does not function as intended. The following 5xx errors are defined by the HTTP protocol standards:
500 Internal Server Error - The server encountered an unexpected condition, which prevented it from fulfilling the request.
501 Not Implemented - The server does not support the functionality required to fulfill the request.
502 Bad Gateway - The server received an invalid response from a back-end application server.
503 Service Unavailable - The server is currently unable to handle the request due to temporary overloading or maintenance of the server.
504 Gateway Timeout - The server did not receive a response from a back-end application server.
505 HTTP Version Not Supported - The server does not support the HTTP protocol version that was used in the request message. HTTP server errors category 1 (default name) The number of custom HTTP server errors (5xx), category 1. By default, there are no specific error types assigned to this category. HTTP server errors category 2 (default name) The number of custom HTTP server errors (5xx), category 2. By default, there are no specific error types assigned to this category. 219
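Counting the 5xx metric described above amounts to filtering observed status codes by the 500-599 range. A simple illustration (the code-to-category mapping for the custom error categories is configuration-dependent and not shown):

```python
def count_server_errors(status_codes):
    """Count HTTP 5xx responses, keyed by specific status code."""
    counts = {}
    for code in status_codes:
        if 500 <= code <= 599:
            counts[code] = counts.get(code, 0) + 1
    return counts

observed = [200, 500, 503, 200, 301, 504, 503]
errors = count_server_errors(observed)  # {500: 1, 503: 2, 504: 1}
total_5xx = sum(errors.values())        # 4
```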

HTTP server image time
The total amount of time it takes for images (non-HTML content) to be prepared for delivery.

HTTP unauthorized errors 401, 407 (default name)
The number of observed custom HTTP authentication-related errors. These include "HTTP 401 Unauthorized" and "HTTP 407 Proxy authentication required" errors. HTTP servers generate "401 Unauthorized" errors when anonymous clients are not authorized to view the requested content and must provide authentication information in the WWW-Authenticate request header. The 401 errors are similar to "403 Forbidden" errors, but are used when authentication is possible but has failed or has not yet been provided. The 407 error is essentially similar to 401, but it indicates that the client should first authenticate with a proxy server. The AMD reports these errors only if server-level authentication has been configured. Simple and basic user access policies are common in Web sites that do not store user-sensitive or business-critical information. Most commercial-grade HTTP-based applications, such as home banking applications or online shopping sites, rely on application-level rather than server-level authentication. Such applications are designed so that even if user authentication fails, the HTTP server usually sends the 200 OK response code with the authentication error message in the page content. Therefore, the 401 Unauthorized and 407 Proxy authentication required error codes are quite rare in commercial environments.

Idle sessions
The number of idle TCP sessions that have not been active for longer than a predefined timeout, 5 minutes by default.

Idle time
The part of the operation time spent between receiving a part of the response and requesting a subsequent part. It enables you to isolate the time taken by the client from the time when the data was still being transmitted on the network.

Incomplete responses
The number of incomplete responses, that is, partial and server-aborted responses, as well as situations when a server did not respond to the request at all or responded in an unrecognizable way.

LAN-WAN byte ratio
The amount of compression performed, expressed as a percentage: 100% for pass-through traffic; greater than 100% if there are more bytes on the WAN side, including both pass-through and optimized traffic; less than 100% if there are fewer bytes on the WAN side, including both pass-through and optimized traffic.

LDAP client error
LDAP client errors: Time Limit Exceeded.
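Based on the LAN-WAN byte ratio definition above, the metric can be read as WAN-side bytes expressed as a percentage of LAN-side bytes; a sketch of that interpretation (the exact measurement points are an assumption here):

```python
def lan_wan_byte_ratio(wan_bytes, lan_bytes):
    """WAN-side bytes as a percentage of LAN-side bytes.

    100%  -> pass-through (no compression or expansion)
    <100% -> fewer bytes on the WAN side (effective compression)
    >100% -> more bytes on the WAN side
    """
    return 100.0 * wan_bytes / lan_bytes

lan_wan_byte_ratio(600, 1000)   # 60.0: traffic reduced to 60% on the WAN side
lan_wan_byte_ratio(1000, 1000)  # 100.0: pass-through
```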

LDAP critical error
LDAP critical errors: Busy, Unwilling To Perform, Operation Error, Size Limit Exceeded, Constraint Violation, Protocol Error, Invalid Attribute Syntax, Naming Violation, Invalid Credentials.

LDAP errors
The number of LDAP errors. The LDAP errors are reported in the following categories: LDAP critical errors, LDAP server errors, LDAP security errors, LDAP syntax errors, LDAP client errors.

LDAP security error
LDAP security errors: Authentication Method Not Supported, Stronger Auth Required, Admin Limit Exceeded, Confidentiality Required, Inappropriate Authentication, Insufficient Access Rights, Object Class Mods Prohibited.

LDAP server error
LDAP server errors: Unavailable, Loop Detect, Not Allowed On NonLeaf, Not Allowed On RDN, Affects Multiple DSAs, Other,

Referral, Unavailable Critical Extension, SAS Binding Progress, No Such Object, Alias Problem, Invalid DN Syntax, Alias Dereferencing Problem.

LDAP syntax error
LDAP syntax errors: Compare False, Compare True, No Such Attribute, Undefined Attribute Type, Inappropriate Matching, Attribute Or Value Exists, Object Class Violation, Entry Already Exists.

Long aborts
For HTTP, this is the number of operations manually stopped by the user, by clicking the Stop or Refresh button or selecting another URL, after at least 8 seconds of waiting for the page to download (8 seconds is the default). For XML, this is the number of transactions stopped after at least a threshold number of seconds of waiting (8 seconds is the default).

Loss rate in
The percentage of total packets that were lost and needed to be retransmitted. Loss rate in refers to traffic to a server in a DataCenter configuration.

Loss rate out
The percentage of total packets that were lost and needed to be retransmitted. Loss rate out refers to traffic from a server in a DataCenter configuration.

Mail server welcome msg. time
The time after which the server welcome message was received.

Max realized bandwidth threshold
The maximum realized bandwidth threshold.

Max slow operation threshold
The maximum slow operation threshold. If the operation time is longer than the threshold, the operation is considered to be slow.

Max Total bandwidth usage
The maximum value of Total bandwidth usage over the time covered by the report.

Max Total packets/sec
The maximum value of Total packets/sec over the time covered by the report.
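The loss rate metrics above reduce to a simple proportion of retransmitted packets to total packets; a minimal sketch:

```python
def loss_rate(retransmitted_packets, total_packets):
    """Percentage of packets that were lost and had to be retransmitted."""
    if total_packets == 0:
        return 0.0
    return 100.0 * retransmitted_packets / total_packets

loss_rate(3, 200)  # 1.5 (percent)
```

Whether a given retransmission counts toward "loss rate in" or "loss rate out" depends on the traffic direction (to or from the server), which is determined by the monitoring configuration rather than by the formula itself.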

Min realized bandwidth threshold
The acceptable transfer rate or throughput of server data when the transfer attempt occurred. If the monitored transfer rate falls below the specified threshold value, the operation is flagged as slow.

Min slow operation threshold
The minimum slow operation threshold.

MQ appl. errors
The number of all API errors for IBM WebSphere Message Queue. This metric is the sum of MQ appl. errors (1) to (5).

MQ appl. errors (1)
The number of API errors of type 1 for IBM WebSphere Message Queue. Real User Monitoring distinguishes five types of API errors for IBM WebSphere Message Queue. The assignment of actual MQ return values to a particular error type number is configured on the AMD device, on a per-software-service and per-server basis. The error type numbers can then be mapped, on the Central Analysis Server, onto named error categories. This is performed, on a per-software-service basis, on Message Queue reports.

MQ appl. errors (2)
The number of API errors of type 2 for IBM WebSphere Message Queue. Real User Monitoring distinguishes five types of API errors for IBM WebSphere Message Queue. The assignment of actual MQ return values to a particular error type number is configured on the AMD device, on a per-software-service and per-server basis. The error type numbers can then be mapped, on the Central Analysis Server, onto named error categories. This is performed, on a per-software-service basis, on Message Queue reports.

MQ appl. errors (3)
The number of API errors of type 3 for IBM WebSphere Message Queue. Real User Monitoring distinguishes five types of API errors for IBM WebSphere Message Queue. The assignment of actual MQ return values to a particular error type number is configured on the AMD device, on a per-software-service and per-server basis. The error type numbers can then be mapped, on the Central Analysis Server, onto named error categories. This is performed, on a per-software-service basis, on Message Queue reports.

MQ appl. errors (4)
The number of API errors of type 4 for IBM WebSphere Message Queue. Real User Monitoring distinguishes five types of API errors for IBM WebSphere Message Queue. The assignment of actual MQ return values to a particular error type number is configured on the AMD device, on a per-software-service and per-server basis. The error type numbers can then be mapped, on the Central Analysis Server, onto named error categories. This is performed, on a per-software-service basis, on Message Queue reports.

MQ appl. errors (5)
The number of API errors of type 5 for IBM WebSphere Message Queue. Real User Monitoring distinguishes five types of API errors for IBM WebSphere Message Queue. The assignment of actual MQ return values to a particular error type number is configured on the AMD device, on a per-software-service and per-server basis. The error type numbers can then be mapped, on the Central Analysis Server, onto named error categories. This is performed, on a per-software-service basis, on Message Queue reports.

MQ client errors
The number of IBM WebSphere Message Queue client errors. This includes the following Message Queue errors: ERR_NO_CHANNEL (value 0x01), ERR_CHANNEL_WRONG_TYPE (value 0x02), ERR_MSG_SEQUENCE_ERROR (value 0x04), ERR_USER_CLOSED (value 0x07), ERR_TIMEOUT_EXPIRED (value 0x08), ERR_TARGET_Q_UNKNOWN (value 0x09), ERR_BATCH_FAILURE (value 0x11).

MQ errors
The total number of IBM WebSphere Message Queue errors, including client errors, server errors, protocol errors, and security errors.

MQ protocol errors
The number of IBM WebSphere Message Queue protocol errors. This includes the following Message Queue errors: ERR_PROTOCOL_SEGMENT_TYPE (value 0x0a), ERR_PROTOCOL_LENGTH_ERROR (value 0x0b), ERR_PROTOCOL_INVALID_DATA (value 0x0c), ERR_PROTOCOL_SEGMENT_ERROR (value 0x0d), ERR_PROTOCOL_ID_ERROR (value 0x0e), ERR_PROTOCOL_MSH_ERROR (value 0x0f), ERR_PROTOCOL_GENERAL (value 0x10), ERR_MESSAGE_LENGTH_ERROR (value 0x12), ERR_SEGMENT_NUMBER_ERROR (value 0x13), ERR_WRAP_VALUE_ERROR (value 0x15).

MQ security errors
The number of IBM WebSphere Message Queue security errors. This includes the following Message Queue errors: ERR_SECURITY_FAILURE (value 0x14), ERR_SSL_REMOTE_BAD_CIPHER (value 0x18).

MQ server errors
The number of IBM WebSphere Message Queue server errors. This includes the following Message Queue errors: ERR_QM_UNAVAILABLE (value 0x03), ERR_QM_TERMINATING (value 0x05), ERR_CAN_NOT_STORE (value 0x06), ERR_CHANNEL_UNAVAILABLE (value 0x16), ERR_TERMINATED_BY_REMOTE_EXIT (value 0x17).

MS Exchange errors
The total number of RPC Server and RPC Protocol errors.

Network performance
The percentage of total traffic that did not experience network-related problems (traffic in which the values of loss rate and RTT did not exceed configured thresholds).

Network performance affected bytes
The volume of TCP traffic that did experience network-related problems. The traffic measured here includes both directions of data transfer, to and from the client (downstream and upstream), but does NOT include bytes transferred internally within the site. By network-related problems we mean excessive RTT or loss rate: at any given moment, traffic is considered to be experiencing network-related problems if, at that particular time, the values of loss rate or RTT exceed pre-configured thresholds. In situations when RTT measurements prove to be insufficient, ACK RTT may also become an additional criterion for determining network problems.
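The relationship between the Network performance metric and the affected-bytes criterion above can be sketched as follows. The per-interval sample structure is an assumption for illustration; the thresholds come from the monitoring configuration:

```python
def network_performance(samples, rtt_threshold_ms, loss_threshold_pct):
    """Percentage of bytes that did NOT experience network-related problems.

    samples: (byte_count, rtt_ms, loss_pct) tuples, one per monitoring
    interval. Traffic in an interval is 'affected' when its RTT or loss
    rate exceeds the configured threshold.
    """
    total = sum(b for b, _, _ in samples)
    affected = sum(b for b, rtt, loss in samples
                   if rtt > rtt_threshold_ms or loss > loss_threshold_pct)
    return 100.0 * (total - affected) / total if total else 100.0

samples = [(1000, 40, 0.1), (500, 250, 0.2), (500, 60, 5.0)]
network_performance(samples, rtt_threshold_ms=200, loss_threshold_pct=2.0)  # 50.0
```

Here `total` corresponds to "Network performance relevant bytes" and `affected` to "Network performance affected bytes".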

Network performance relevant bytes
The total volume of TCP traffic. Includes both directions of data transfer, to and from the client (downstream and upstream), but does NOT include bytes transferred internally within the site.

Network time
The time the network (between the user and the server) takes to deliver requests to the server and to deliver operation information back to the user. In other words, network time is the portion of the overall time that is due to delivery time on the network.

Not-affected users (performance)
The percentage of users that did not experience application performance problems.

Number of hits in an aborted operation
The number of hits in an aborted operation.

Operation (or network) throughput
The average aborted operation (or network) throughput in bytes per second.

Operation attributes
The number of operation attributes of all types (types 1 to 5), observed for the given software service.

Operation attributes (1)
The number of operation attributes of type 1, observed for the given software service.

Operation attributes (2)
The number of operation attributes of type 2, observed for the given software service.

Operation attributes (3)
The number of operation attributes of type 3, observed for the given software service.

Operation attributes (4)
The number of operation attributes of type 4, observed for the given software service.

Operation attributes (5)
The number of operation attributes of type 5, observed for the given software service.

Operation length
The number of packets contained in an average operation.

Operation load time (range 1)
The number of operations that were loaded in a time within range 1 as defined in the RUM Console.

Operation load time (range 2)
The number of operations that were loaded in a time within range 2 as defined in the RUM Console.

Operation load time (range 3)
The number of operations that were loaded in a time within range 3 as defined in the RUM Console.

Operation load time (range 4)
The number of operations that were loaded in a time within range 4 as defined in the RUM Console.

Operation load time (threshold 1 [max])
The highest value that was set for the upper bound of Operation load time (range 1) during the reporting period.

Operation load time (threshold 1 [min])
The lowest value that was set for the upper bound of Operation load time (range 1) during the reporting period.

Operation load time (threshold 2 [max])
The highest value that was set for the upper bound of Operation load time (range 2) during the reporting period.

Operation load time (threshold 2 [min])
The lowest value that was set for the upper bound of Operation load time (range 2) during the reporting period.

Operation load time (threshold 3 [max])
The highest value that was set for the upper bound of Operation load time (range 3) during the reporting period.

Operation load time (threshold 3 [min])
The lowest value that was set for the upper bound of Operation load time (range 3) during the reporting period.

Operation percentage breakdown
The operation percentage breakdown into slow and fast operations.

Operation requests
The number of all operation requests, both requests that became successful operations and requests that were aborted by the client.

Operations
The number of operations. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.

Operations/min
The number of operations per minute.

Operations/sec
The number of operations per second.

Operations breakdown
The operation breakdown into numbers of slow and fast operations.

Operation size (range 1)
The number of operations whose byte count is within range 1 as defined in the RUM Console.

Operation size (range 2)
The number of operations whose byte count is within range 2 as defined in the RUM Console.

Operation size (range 3)
The number of operations whose byte count is within range 3 as defined in the RUM Console.

Operation size (range 4)
The number of operations whose byte count is within range 4 as defined in the RUM Console.

Operation size (threshold 1 [max])
The highest value that was set for the upper bound of Operation size (range 1) during the reporting period.

Operation size (threshold 1 [min])
The lowest value that was set for the upper bound of Operation size (range 1) during the reporting period.

Operation size (threshold 2 [max])
The highest value that was set for the upper bound of Operation size (range 2) during the reporting period.

Operation size (threshold 2 [min])
The lowest value that was set for the upper bound of Operation size (range 2) during the reporting period.

Operation size (threshold 3 [max])
The highest value that was set for the upper bound of Operation size (range 3) during the reporting period.

Operation size (threshold 3 [min])
The lowest value that was set for the upper bound of Operation size (range 3) during the reporting period.

Operations with breakdown
The number of operations with an operation breakdown into numbers of slow and fast operations.

Operation time
The time it took to complete an operation. The term "operation" refers to an operation in the context of a particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations. Note that an operation can be split over several packets. For HTTP and HTTPS, it is equal to the redirect time plus the network time plus the server HTTP time plus the server think time.

Operation time breakdown
The operation time breakdown into the server time, the network time, the redirect time, and the other time.

Operation time percentage breakdown
The breakdown of the average value of operation time into percentages of the server time, the network time, the redirect time, and the other time.

Operation time with breakdown
The time it took to complete an operation, with an operation time breakdown into the server time, the network time, the idle time, and the other time.

Oracle Applications error
The number of Oracle Application errors that occurred during a given operation. For a complete list of specific Forms server errors, refer to the RUM Console Help.

Oracle Forms errors
The number of Oracle Forms errors that occurred during a given operation. For a complete list of specific Forms server errors, refer to the RUM Console Help.
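The percentage breakdown described above is a straightforward normalization of the four time components to a 100% total. A sketch, with hypothetical millisecond values:

```python
def operation_time_percentage_breakdown(server, network, redirect, other):
    """Express each component of operation time as a percentage of the total."""
    total = server + network + redirect + other
    return {name: 100.0 * value / total
            for name, value in [("server", server), ("network", network),
                                ("redirect", redirect), ("other", other)]}

# Hypothetical component times in milliseconds.
operation_time_percentage_breakdown(1200, 600, 100, 100)
# {'server': 60.0, 'network': 30.0, 'redirect': 5.0, 'other': 5.0}
```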

Oracle server error
The number of Oracle server errors that occurred during a given operation. For a complete list of specific Forms server errors, refer to the RUM Console Help.

Orphaned redirects
The number of HTTP redirects for which a matching request to the target URL was not detected before the timeout.

Other SSL errors (default name)
SSL alerts other than those for SSL errors 1 and SSL errors 2.

Other time
Part of the operation time, calculated as Operation time - Server time - Network time - Idle time.

Out of contract bytes
The number of bytes marked as Out-of-contract in the TOS field in the TCP header. This setting can signify that the data was sent over and above a certain preset limit.

Out of contract packets
The number of packets marked as Out-of-contract in the TOS field in the TCP header. This can signify that the data was sent over and above a certain preset limit.

Percentage of aborts
The percentage of transactions aborted by the client. It applies to all TCP-based protocols. For example, for HTTP, it is the number of operations manually stopped by the user by clicking the Stop or Refresh button or selecting another URL. Note that, in the case of HTTP, this number includes Short aborts and Long aborts.

Percentage of affected users (availability)
The percentage of users that were affected by availability-related problems.

Percentage of affected users (availability) breakdown
A percentage breakdown of users into how many were affected by availability problems and how many were not.

Percentage of affected users (network)
The percentage of unique users that experienced network performance problems.

Percentage of affected users (network) breakdown
A percentage breakdown of users into how many were affected by network performance problems and how many were not.

Percentage of affected users (performance)
The percentage of users that experienced application performance problems.

Percentage of affected users (performance) breakdown
A percentage breakdown of users into how many were affected by application performance problems and how many were not.

Percentage of long aborts
The percentage of HTTP operations manually stopped by the user, by clicking the Stop or Refresh button or selecting another URL, after a significant time of waiting for the page to download. The default wait time classifying an abort as long is 8 seconds. This threshold value is configurable. The same threshold value is also used to determine if an HTTP page was slow to load. Note that this metric applies exclusively to HTTP.

Percentage of network time
The network part of the transaction time, expressed as a percentage.

Percentage of optimized traffic (bytes)
Indicates the traffic distribution between two separate branches: optimized traffic and passed-through traffic. The higher the value, the more bytes are optimized. Low values may indicate poorly configured optimization or optimization device overload.

Percentage of short aborts
The percentage of HTTP operations manually stopped by the user, by clicking the Stop or Refresh button or selecting another URL, before a significant time of waiting for the page to download. The default wait time classifying an abort as long is 8 seconds. This threshold value is configurable. The same threshold value is also used to determine if an HTTP page was slow to load. Note that this metric applies exclusively to HTTP.

Percentage of slow operations
The percentage of operations for which the operation time was above a predefined threshold value. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.

Percentage of TCP sessions w/errors
The percentage of TCP sessions with errors.

Person-hours lost
In the Central Analysis Server, the total monitoring time clients waited for operations due to bad service availability and bad application performance. In the Advanced Diagnostics Server, the total time clients waited for operations due to bad software service performance, that is, the total monitoring time during which operation time exceeded the predefined threshold. Note that this is not a sum of whole monitoring intervals, but only those intervals' portions during which problems occurred. This metric is not calculated in PVU mode.

Person-hours lost (availability)
The total time clients waited for operations due to bad service availability, that is, the total monitoring time during which attempts were made to connect to a server and these attempts were not successful (no connection was established).

Person-hours lost (errors)
The total time during which clients waited for operations to load due to application errors. Note that it refers only to operations detected by the following analyzers: HTTP, SSL, and SSL Decrypted. This metric is not calculated in PVU mode.

Person-hours lost (performance)
The total time clients waited for operations to load due to bad software service performance, that is, the total monitoring time during which transaction time exceeded the predefined threshold. Note that this is not a sum of whole monitoring intervals, but only those intervals' portions during which problems occurred. This metric is not calculated in PVU mode.
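The "only those intervals' portions" wording above is the key point of the person-hours lost metrics: only the problem portion of each monitoring interval is accumulated, not the whole interval. A hypothetical aggregation sketch (the per-interval input structure is an assumption, not the product's internal representation):

```python
def person_hours_lost(intervals):
    """Accumulate only the problem portions of monitoring intervals.

    intervals: (problem_seconds, affected_clients) pairs, where
    problem_seconds is the portion of the interval spent in a problem
    state, not the interval's full length.
    """
    total_seconds = sum(sec * clients for sec, clients in intervals)
    return total_seconds / 3600.0

# 30 clients waiting 120 s, a problem-free interval, 12 clients waiting 300 s:
person_hours_lost([(120, 30), (0, 50), (300, 12)])  # 2.0 person-hours
```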

Primary Reason for Slowness
The primary reason for slowness is one of the general categories causing operations to be slow. The categories include data center, network, application design, client/3rd party, and multiple reasons.

Primary Reason for Slowness details
The details of the primary reason for slowness.

Realized bandwidth (range 1)
The number of operations whose realized bandwidth is within range 1 as defined in the RUM Console.

Realized bandwidth (range 2)
The number of operations whose realized bandwidth is within range 2 as defined in the RUM Console.

Realized bandwidth (range 3)
The number of operations whose realized bandwidth is within range 3 as defined in the RUM Console.

Realized bandwidth (range 4)
The number of operations whose realized bandwidth is within range 4 as defined in the RUM Console.

Realized bandwidth (threshold 1 [max])
The highest value that was set for the upper bound of Realized bandwidth (range 1) during the reporting period.

Realized bandwidth (threshold 1 [min])
The lowest value that was set for the upper bound of Realized bandwidth (range 1) during the reporting period.

Realized bandwidth (threshold 2 [max])
The highest value that was set for the upper bound of Realized bandwidth (range 2) during the reporting period.

Realized bandwidth (threshold 2 [min])
The lowest value that was set for the upper bound of Realized bandwidth (range 2) during the reporting period.

Realized bandwidth (threshold 3 [max])
The highest value that was set for the upper bound of Realized bandwidth (range 3) during the reporting period.

Realized bandwidth (threshold 3 [min])
The lowest value that was set for the upper bound of Realized bandwidth (range 3) during the reporting period.

Redirect time
The average amount of time spent between the time when a user went to a particular URL and the time this user was redirected to another URL and issued a request to that new URL. The difference between Redirect time and HTTP redirect time is that the former counts all operations, while the latter refers only to those operations for which redirection actually took place.

Redirect time (range 1)
The number of operations whose redirect time is within range 1 as defined in the RUM Console.

Redirect time (range 2)
The number of operations whose redirect time is within range 2 as defined in the RUM Console.

Redirect time (range 3)
The number of operations whose redirect time is within range 3 as defined in the RUM Console.

Redirect time (range 4)
The number of operations whose redirect time is within range 4 as defined in the RUM Console.

Redirect time (threshold 1 [max])
The highest value that was set for the upper bound of Redirect time (range 1) during the reporting period.

Redirect time (threshold 1 [min])
The lowest value that was set for the upper bound of Redirect time (range 1) during the reporting period.

Redirect time (threshold 2 [max])
The highest value that was set for the upper bound of Redirect time (range 2) during the reporting period.

Redirect time (threshold 2 [min])
The lowest value that was set for the upper bound of Redirect time (range 2) during the reporting period.

Redirect time (threshold 3 [max])
The highest value that was set for the upper bound of Redirect time (range 3) during the reporting period.

Redirect time (threshold 3 [min])
The lowest value that was set for the upper bound of Redirect time (range 3) during the reporting period.

Requests breakdown
The number of operation or transaction requests with a breakdown into numbers of slow, fast, aborted, and failed operations or transactions.

Requests breakdown (with tooltip)
Operation or transaction breakdown into numbers of slow, fast, aborted, and failed operations or transactions.

Requests percentage breakdown (with tooltip)
Operation or transaction percentage breakdown into slow, fast, aborted, and failed operations or transactions.

Request time
The time it took the client to send the HTTP request to the server (for example, by means of an HTTP GET or HTTP POST). Note: This time includes TCP connection setup time and SSL session setup time (if any). It starts when the client starts the TCP session on the server and ends when the server receives the whole request. Sometimes an operation is slow because of a big request rather than a large response.

Response messages
The total number of protocol-specific server responses. This includes both errors and other identifiable response strings, as configured in monitoring.

Response transfer time
The time it took the server to send the response to the client (for example, in response to an HTTP GET or HTTP POST). The value is obtained by subtracting Server time and Request time from Operation time.

RMI/Simple parser error (1)
The number of RMI/Simple parser errors of category 1.

RMI/Simple parser error (2)
The number of RMI/Simple parser errors of category 2.

RMI/Simple parser error (3)
The number of RMI/Simple parser errors of category 3.

RMI/Simple parser error (4)
The number of RMI/Simple parser errors of category 4.

RMI/Simple parser error (5)
The number of RMI/Simple parser errors of category 5.

RMI/Simple parser error (6)
The number of RMI/Simple parser errors of category 6.

RMI/Simple parser error (7)
The number of RMI/Simple parser errors of category 7.

RMI/Simple parser errors
The total number of RMI/Simple parser errors.

RPC Protocol error
The number of all RPC_X_* errors.

RPC Server error
The number of all RPC_S_* and EPT_S_* errors.

RTT measurements
The number of RTT measurements. An RTT measurement occurs during every TCP handshake, so it provides some insight into the number of attempted TCP sessions and the potential accuracy of the RTT measurements that are reported.

Sampling rate
The total percentage of packets that the AMD was able to process. Numbers lower than 100% mean that a portion of packets were dropped because of performance issues. Sampling means dropping packets when network interface driver performance is degraded on the AMD. Packets are dropped in a controlled manner, always taking care to preserve complete and therefore consistent sessions.

SAP GUI error indicator
Errors detected by examining the error strings returned to the user in Window Status or other SAP GUI data. Detected errors are included in the availability calculation for the SAP application.
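The Response transfer time definition above is another residual metric, derived by subtraction from the measured totals:

```python
def response_transfer_time(operation_time, server_time, request_time):
    """Response transfer time: operation time minus server time minus
    request time (all in the same time unit)."""
    return operation_time - server_time - request_time

response_transfer_time(8.0, 3.0, 2.5)  # 2.5
```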

233 Appendix A Central Analysis Server Data Views SAP GUI errors The number of errors detected on the protocol level in communication between SAP application server and SAP GUI client as well as between SAP application server and a third party clients using Remote Function Calls (RFC). SAP GUI status error This automatically created group, consists values based on default configuration of patterns for Application Responses for the most commonly used errors. SAP RFC error An error consisting values based on user-defined configuration of patterns for Application Responses for the most common errors that are traced in selected fields in the SAP RFC protocol. SAP RFC error indicator Errors detected by examining the error strings returned in the user-defined attributes that are traced in selected fields in the SAP RFC protocol. Detected errors are included in the availability calculation for the SAP application. SAP RFC errors The number of errors detected on the protocol level in communication between a SAP application server and a SAP client plus the number of attributes which are defined as error indicators in the monitoring configuration. Server ACK RTT RTT measurement performed during ACK packet transmission, from server side of the operation. Also provided are minimum, maximum and standard deviation values. Server ACK RTT measurements These metrics keep track of how many RTT of Server ACK measurements were made. ACK measurement is performed during ACK packet transmission either from server or client side of the transaction. Server bandwidth usage The number of server bits per second. Server bytes The number of bytes sent by servers. The number includes headers. Server loss rate The percentage of total packets sent from a server that were lost and needed to be retransmitted. Server loss rate (AMD to client) The percentage of total packets sent by a server that were lost - between the AMD and the client - and needed to be retransmitted. 
Server loss rate (range 1) The number of operations whose server loss rate is within range 1 as defined in the RUM Console. Server loss rate (range 2) The number of operations whose server loss rate is within range 2 as defined in the RUM Console.

Server loss rate (range 3) The number of operations whose server loss rate is within range 3 as defined in the RUM Console. Server loss rate (range 4) The number of operations whose server loss rate is within range 4 as defined in the RUM Console. Server loss rate (server to AMD) The percentage of total packets sent by a server that were lost - between the AMD and the server - and needed to be retransmitted. Server loss rate (threshold 1 [max]) The highest value that was set for the upper bound of Server loss rate (range 1) during the reporting period. Server loss rate (threshold 1 [min]) The lowest value that was set for the upper bound of Server loss rate (range 1) during the reporting period. Server loss rate (threshold 2 [max]) The highest value that was set for the upper bound of Server loss rate (range 2) during the reporting period. Server loss rate (threshold 2 [min]) The lowest value that was set for the upper bound of Server loss rate (range 2) during the reporting period. Server loss rate (threshold 3 [max]) The highest value that was set for the upper bound of Server loss rate (range 3) during the reporting period. Server loss rate (threshold 3 [min]) The lowest value that was set for the upper bound of Server loss rate (range 3) during the reporting period. Server not responding errors The number of Server Not Responding errors. This category of errors applies when the client closes the TCP session with a RESET packet after the server has failed to respond for too long. Server operation size The size of a server operation. In HTTP and HTTPS (decrypted and non-decrypted), server operation size equals the operation size. Server packets The number of packets sent by the servers. Server packets/sec The number of packets per second sent by the servers. Server packet size The average size of the server-originating packets (in bytes), including header.
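The range counters above bucket each operation by comparing its loss rate against the upper bounds configured in the RUM Console (ranges 1 through 3 have explicit upper bounds; everything above falls into range 4). A sketch of that bucketing, with assumed boundary semantics (inclusive upper bounds) and invented example values:

```python
# Hypothetical classification of an operation metric into ranges 1-4,
# given ascending upper bounds for ranges 1-3 as configured in the
# RUM Console. Exact boundary semantics in the product may differ.

def classify_range(value, upper_bounds):
    """Return the 1-based range index for value; above the last bound → last range."""
    for i, bound in enumerate(upper_bounds, start=1):
        if value <= bound:
            return i
    return len(upper_bounds) + 1

# Invented example: upper bounds of 1%, 3%, and 5% for ranges 1-3
bounds = [1.0, 3.0, 5.0]
print(classify_range(0.5, bounds))  # → 1
print(classify_range(4.2, bounds))  # → 3
print(classify_range(9.9, bounds))  # → 4
```

The threshold [min]/[max] metrics then report the lowest and highest values those configured bounds took during the reporting period, which matters when the configuration changed mid-period.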
Server packets lost (AMD to client) The number of packets sent by a server that were lost - between the AMD and the client - and needed to be retransmitted.

Server packets lost (server to AMD) The number of packets sent by a server that were lost - between the server and the AMD - and needed to be retransmitted. Server realized bandwidth Server realized bandwidth refers to the actual transfer rate of server data when the transfer attempt occurred, and takes into account factors such as loss rate (retransmissions). Thus, it is the size of an actual transfer divided by the transfer time. Server response time This is the amount of time it takes for a server to provide its initial response to a user's operation request. Often servers will respond with some information quickly, before all the information is ready for delivery. Together with the server think time, the server response time sums to the overall server time. Note that if there was no think time recorded for the operation, it equals the server time. Server RTT The time it takes for a SYN packet (sent by a user) to travel from the AMD to a monitored server and back again. Also provided are minimum, maximum and standard deviation values. [Diagram: TCP handshake between Client, AMD, and Server, showing SYN, SYN ACK, and ACK packets with timestamps T1-T9; Server RTT is measured on the AMD-to-server segment.] Server session termination errors The number of Server Session Termination errors. This category of errors applies when the server detects an error on the software service level and closes the TCP session with a RESET packet. Server TCP data packets The total number of TCP packets sent by the servers, excluding the traffic control packets. Server TCP data packets lost The number of lost TCP data packets sent by the servers, excluding the traffic control packets. The number of lost TCP packets always relates to the context of the counter, for example, an application, a client or any other entity. Server think time The time that elapsed between the moment the server received the request for the Base Page, and the time the server fully composed the response.
Depending on the nature of the request, Application Servers in the Data Center may be involved in producing the content. In such a case, this additional time will be reflected in the Server Think Time metric. Server time The time it took the server to produce a response for the given request. Server time (range 1) The number of operations whose server time is within range 1 as defined in the RUM Console.
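The relationship described above, where server response time and server think time sum to the overall server time, and server time equals the response time when no think time was recorded, can be sketched as follows (names and units, milliseconds here, are illustrative, not product APIs):

```python
# Sketch of the server-time composition described above:
# server time = server response time + server think time;
# with no recorded think time, server time equals the response time alone.

def server_time(response_time_ms, think_time_ms=None):
    if think_time_ms is None:
        return response_time_ms  # no think time recorded for the operation
    return response_time_ms + think_time_ms

print(server_time(120, 30))  # → 150
print(server_time(120))      # → 120
```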

Server time (range 2) The number of operations whose server time is within range 2 as defined in the RUM Console. Server time (range 3) The number of operations whose server time is within range 3 as defined in the RUM Console. Server time (range 4) The number of operations whose server time is within range 4 as defined in the RUM Console. Server time (threshold 1 [max]) The highest value that was set for the upper bound of Server time (range 1) during the reporting period. Server time (threshold 1 [min]) The lowest value that was set for the upper bound of Server time (range 1) during the reporting period. Server time (threshold 2 [max]) The highest value that was set for the upper bound of Server time (range 2) during the reporting period. Server time (threshold 2 [min]) The lowest value that was set for the upper bound of Server time (range 2) during the reporting period. Server time (threshold 3 [max]) The highest value that was set for the upper bound of Server time (range 3) during the reporting period. Server time (threshold 3 [min]) The lowest value that was set for the upper bound of Server time (range 3) during the reporting period. Short aborts The number of transactions stopped before timeout. For HTTP, this is the number of page loads manually stopped by the user, by clicking the Stop or Refresh button or selecting another URL, before a threshold number of seconds of waiting for the page download (8 seconds is the default). For XML, this is the number of transactions stopped before a threshold number of seconds of waiting (8 seconds is the default). Slow operations The number of operations for which the operation time was above a predefined threshold value.
The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations. Note that slow operations for SMB are not determined using the time threshold, but maximum and minimum realized bandwidth thresholds. Slow operations (application design) The number of slow operations caused by the application design category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
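The slow-operation counters above share a counting rule: only successful operations above the time threshold are counted, each attributed to the primary slowness reason reported for it. A hypothetical tally illustrating that rule (the record fields, category names, and example data are invented; the actual primary-reason algorithm is internal to the product):

```python
# Hypothetical tally of "Slow operations (<category>)" counters.
# Only successful operations count; failures and aborts are excluded.
from collections import Counter

def tally_slow_operations(operations, threshold_s):
    counts = Counter()
    for op in operations:
        # exclude failures and aborted operations, per the entries above
        if op["status"] == "success" and op["time"] > threshold_s:
            counts[op["reason"]] += 1
    return counts

ops = [
    {"status": "success", "time": 9.0, "reason": "network"},
    {"status": "failure", "time": 12.0, "reason": "data center"},  # excluded
    {"status": "success", "time": 1.5, "reason": "network"},       # not slow
]
print(tally_slow_operations(ops, 8.0))  # → Counter({'network': 1})
```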

Slow operations (application design - # of components) The number of slow operations caused by the number of components, which is one of the detailed reasons in the application design category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account. Slow operations (application design - redirect time) The number of slow operations caused by redirect time, which is one of the detailed reasons in the application design category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account. Slow operations (application design - request size) The number of slow operations caused by request size, which is one of the detailed reasons in the application design category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account. Slow operations (application design - response size) The number of slow operations caused by response size, which is one of the detailed reasons in the application design category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account. Slow operations (client/3rd party) The number of slow operations caused by the client/3rd party category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account. Slow operations (data center) The number of slow operations caused by the data center category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations.
Failures and aborted operations are not taken into account. Slow operations (multiple reasons) The number of slow operations caused by multiple reasons, that is, when the algorithm was not able to determine one primary reason for slowness. Note that this includes only successful operations. Failures and aborted operations are not taken into account. Slow operations (network) The number of slow operations caused by the network category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account. Slow operations (network - latency) The number of slow operations caused by latency, which is one of the detailed reasons in the network category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account. Slow operations (network - loss rate) The number of slow operations caused by loss rate, which is one of the detailed reasons in the network category as calculated using the primary reason for slowness algorithm.

Note that this includes only successful operations. Failures and aborted operations are not taken into account. Slow operations (network - other) The number of slow operations caused by factors other than latency or loss rate, which is one of the detailed reasons in the network category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account. Slow user sessions The number of client sessions that contained at least one slow operation (page load for HTTP or HTTPS). SMTP command syntax error (default name) The number of "SMTP command syntax" errors, that is, server response codes between 500 and 504. These values are defaults and can be customized. SMTP errors The total number of SMTP errors. SMTP general system error (default name) The number of "SMTP general system" errors, that is, server response codes 421, 451, 452, or 554. These values are defaults and can be customized. SMTP mailbox not available (default name) The number of "SMTP mailbox not available" errors, that is, server response codes 450, 550, 551, 552, or 553. These values are defaults and can be customized. SSL conn. setup per operation The time it took to establish an SSL connection between the client and the server, weighted per number of operations. For HTTP-based software services, a single operation means a single page. SSL conn. setup per session The time it took to establish an SSL connection between the client and the server. SSL errors The number of all SSL alerts. This metric is the sum of SSL Session Fatal Errors, SSL Handshake Errors and SSL Warnings. SSL errors 1 (default name) If not explicitly configured, general SSL alerts from the following list: 10, 20, 21, 22, 30, 40, 49, 50, 51. SSL errors 2 (default name) If not explicitly configured, general SSL alerts from the following list: 41, 42, 43, 44, 45, 46, 48.
SSL handshakes The number of observed SSL handshakes. Standalone hits The number of hits not associated with any operation, such as orphaned redirects, unauthorized hits, and discarded hits (no server response). Successful attempts The number of monitoring intervals during which successful attempts were made to connect to a server. Note that this is counted separately for each server. Thus, if in a given

monitoring interval there are attempts to connect to three different servers, the Successful attempts metric will be incremented by three for that one monitoring interval. Note also that even if TCP errors occur but the connection is established during the given monitoring interval, the monitoring interval is counted as a success (for that server). Sum of failures and response messages Sum of failures and response messages. TCP connection attempts The number of all TCP connection attempts (successful and unsuccessful). TCP errors The total number of TCP errors. Those errors may indicate server or application problems, and measurements of them are therefore critical to understanding the issues that may affect end-user experience. AMDs measure and report on the following types of TCP errors: Connection Refused Errors - The client attempts to open a TCP session with a server, which rejects the request. A SYN packet from the client is followed by a RESET packet from the server, with matching TCP sequence numbers. This error is typically caused by resource exhaustion on the server, which is unable to accept more concurrent TCP sessions. This may be either a configuration issue (too few resources allocated in the kernel) or lack of memory. SYN flood attacks typically result in servers being unable to accept new connections. Server session termination error - The server unexpectedly terminates a connection that was successfully opened. The server sends a RESET packet to the client. Such an error originates at an application using the monitored TCP session. It does not necessarily mean application failure; usually it means that the application encountered a condition in which it decided to immediately terminate the session with the client, for example, because of an application security policy violation by the client. Session Abort - The client unexpectedly terminates a connection that was successfully opened.
The Client sends a RESET packet to the Server. These errors are inspected in the context of the client application and may or may not be reported. For example, the browser running HTTP may terminate the load of a GIF file if it is older than the one that it had previously cached, and this is normal behavior. However, if all connections to the server are terminated because the user hits the STOP button, then this is abnormal session termination and is reported as "Aborted operation" or "Stopped Page". Client not responding errors (server timeout errors) - The server networking stack assumes that the network connection to the client exists, but the client remains idle and does not respond. In such a case, the server closes the TCP session with the RESET packet. Such a condition may occur when the client has been silently disconnected from the network, for example, due to a link failure, or the client has crashed. Note that this error will not occur if the client has ended the session gracefully, e.g. by closing the client application. Server not responding errors (client timeout errors) - The client networking stack assumes that the network connection to the server exists, but the server remains idle and does not respond. In such a case, the client closes the TCP session with the

RESET packet. This may occur either during the Session Setup phase (no response to the SYN packet), or during a normal data exchange process. Such a situation may result from intermittent network problems between the client and the server. When traffic is routed through asymmetric paths across the Internet, which is often the case, the path from the server to the client may be broken. TCP SYN time The time needed to establish a connection on the TCP/IP layer, that is, the average time it took to transfer SYN packets. Time resolution The value of the time resolution for the given report. While constant for a particular report, this metric will be present or absent for different values of the time dimension, thus showing user activity in time. It can also provide useful information when compared - on the same graph - with aggregate time metrics. Time to abort The average aborted operation duration including the redirect time. In the case of HTTP and SSL, this is the operation time. Total bandwidth usage The number of all transmitted bits (client + server) per second. Total bandwidth usage with breakdown Total bandwidth usage with breakdown. Total bytes The number of all transmitted bytes (client + server). Total bytes compression The data optimization observed, expressed as a byte reduction and a percentage, where a lower byte count on the WAN side means a higher reduction: 0% for pass-through. Less than 0% if more bytes were observed on the WAN side, including both pass-through and optimized traffic. Greater than 0% if fewer bytes were observed on the WAN side, including both pass-through and optimized traffic. This metric should not exceed 100%. Total bytes on LAN side The sum of bytes (client's and server's) observed on the LAN side before network traffic is directed into the WAN Optimization Controller (WOC).
Total bytes on WAN side The sum of bytes (client's and server's) observed on the WAN side after network traffic leaves the WAN Optimization Controller (WOC), including bytes that have been passed through and those that have been marked as optimized. Total packets The number of all transmitted packets (client + server). Total packets/sec The number of all transmitted packets (client + server) per second.
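The "Total bytes compression" percentage described above follows directly from the LAN-side and WAN-side byte counts: 0% for pass-through, negative when more bytes appear on the WAN side, positive (up to 100%) when fewer do. A sketch, where the function name and the zero-byte guard are assumptions:

```python
# Hypothetical computation of the "Total bytes compression" percentage:
# the reduction of WAN-side bytes relative to LAN-side bytes.

def compression_percent(lan_bytes, wan_bytes):
    if lan_bytes == 0:
        return 0.0  # assumed behavior with no observed LAN-side traffic
    return (1.0 - wan_bytes / lan_bytes) * 100.0

print(compression_percent(1000, 400))   # → 60.0 (fewer bytes on the WAN side)
print(compression_percent(1000, 1000))  # → 0.0 (pass-through)
print(compression_percent(1000, 1200))  # negative: more bytes on the WAN side
```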

Total wait time The total time of all transactions. Transact. errors The number of transaction errors (applies to Jolt (Tuxedo)). Transact. rollbacks The number of transaction rollbacks (applies to Jolt (Tuxedo)). Transact. rollbacks after timeout The number of transaction rollbacks that occurred after a predefined timeout (applies to Jolt (Tuxedo)). Transact. service authentication errors The number of "service transaction authentication" errors (applies to Jolt (Tuxedo)). Transact. service not found errors The number of "transaction service not found" errors (applies to Jolt (Tuxedo)). Transactional service errors The total number of transactional service errors. Transport errors The number of transport-related errors. Two-way loss rate The percentage of total packets (client and server) that were lost (due to network congestion, low router queue capacity or other reasons) and needed to be retransmitted. Unavailability (total) This is the difference between 100% and availability. Unavailability (transport) Calculated as (application failures / application successful attempts) * 100%. Unique and affected users (availability) The number of unique users with a breakdown of users into how many were affected by availability problems and how many were not. Unique and affected users (network) The number of unique users with a breakdown of users into how many were affected by network performance problems and how many were not. Unique and affected users (performance) The number of unique users with a breakdown of users into how many were affected by application performance problems and how many were not. Unique client groups The number of unique client groups to which the detected clients belong. Unique client IP addresses The number of unique IP addresses of the clients. When clients are aggregated into so-called aggregation blocks, only the most active IP addresses per reported entity are kept in the database.
However, the extended ISP mode enables you to count user IP addresses that are aggregated as single users. Unique internal client IP addresses The number of unique client IP addresses, as seen in their local networks. When clients are aggregated into so-called aggregation blocks, only the most active IP addresses per

reported entity are kept in the database. However, the extended ISP mode enables you to count user IP addresses that are aggregated as single users. Unique operations The number of different operations (queries, operation types, etc.). Unique servers The number of unique servers, that is, unique server IP addresses. Unique services The number of unique services. A unique service is defined by a software service name, agent name (for synthetic traffic), analyzer name, server IP address, server name, and Type of Service value. Unique sites The number of unique client sites. Unique timestamps The number of the unique time stamps used to sign the traffic performance data packages. Unique users The number of unique users detected in the monitored traffic. User sessions The number of user HTTP sessions. The count can be identified by information contained in intercepted HTTP cookies or by HTTP authorization. User wait time per kb Reversed throughput, that is, the average time spent by the user waiting for delivery of 1 kb of software service data (operation time vs. operation size). VoIP delay VoIP average networking delay, as reported by the RTP Control Protocol (RTCP), measured for both downstream and upstream traffic. VoIP delay for client-to-server traffic VoIP average networking delay in the upstream direction, that is, from a local to a remote VoIP endpoint. VoIP delay for server-to-client traffic VoIP average networking delay in the downstream direction, that is, from a remote to the local VoIP endpoint. VoIP Jitter VoIP average jitter measured by the probe, for both downstream and upstream traffic. Jitter is a variation in voice data transit delay, in milliseconds. In general, higher levels of jitter are more likely to occur on either slow or heavily congested links. VoIP Jitter for client-to-server traffic VoIP average jitter as reported by the RTP Control Protocol (RTCP), for upstream traffic.
Jitter is a variation in voice data transit delay, in milliseconds. Higher levels of jitter are more likely to occur on either slow or heavily congested links. VoIP Jitter for server-to-client traffic VoIP average jitter measured by the probe in the downstream traffic, that is, from a remote VoIP phone to the local endpoint. Jitter is a variation in voice data transit delay, in milliseconds.

VoIP loss rate The percentage of VoIP packets lost or discarded that needed to be retransmitted, measured for both upstream and downstream traffic. VoIP loss rate for client-to-server traffic The percentage of VoIP packets lost or discarded that needed to be retransmitted, measured for upstream traffic. VoIP loss rate for server-to-client traffic The percentage of VoIP packets lost or discarded that needed to be retransmitted, measured for downstream traffic. VoIP MOS VoIP average Mean Opinion Score (MOS) rating of the call quality, for both downstream and upstream traffic. VoIP MOS for client-to-server traffic VoIP average Mean Opinion Score (MOS) measured in the upstream direction, that is, from a subscriber to a remote VoIP phone. VoIP MOS for server-to-client traffic VoIP average Mean Opinion Score (MOS) measured in the downstream direction, that is, from a remote VoIP phone to the subscriber. VoIP R-factor VoIP average R-factor value, for both downstream and upstream traffic. It is a transmission quality rating. An R-Factor score is derived from multiple VoIP metrics, including latency, jitter, and loss. VoIP R-factor for client-to-server traffic VoIP average R-factor value in the upstream direction, that is, from a subscriber to a remote VoIP phone. VoIP R-factor for server-to-client traffic VoIP average R-factor value in the downstream direction, that is, from a remote VoIP phone to the subscriber. VoIP RTCP Jitter VoIP average jitter as reported by the RTP Control Protocol (RTCP), for both downstream and upstream traffic. Jitter is a variation in voice data transit delay, in milliseconds. Higher levels of jitter are more likely to occur on either slow or heavily congested links. VoIP RTCP Jitter for client-to-server traffic VoIP average jitter as reported by the RTP Control Protocol (RTCP) for the upstream traffic, that is, from a local VoIP endpoint to a remote one.
Jitter reflects a variation in voice data transit delay, expressed in milliseconds. VoIP RTCP Jitter for server-to-client traffic VoIP average jitter as reported by the RTP Control Protocol (RTCP) for the downstream traffic, that is, from a remote VoIP endpoint to the local endpoint. Jitter reflects a variation in voice data transit delay, in milliseconds.

Window title SAP GUI decode has an option to retrieve form field values from selected SAP form fields. This automatically created group aggregates metrics related to errors based on the window title. Zero window size events A client sets a zero window size in the TCP header when it wants the other side to slow down data transmission because it cannot keep up with the transmission speed. This indicates that the receiving machine is busy with other tasks. Software service, operation, and site baselines This data view provides dimensions and metrics to analyze the baselines of the monitored traffic. Software service, operation, and site baselines dimensions Agent The name of the synthetic agent that loaded the HTTP pages, for example, Keynote, Gomez, or Mercury. The name of the agent is determined from the User-agent field of the HTTP request and/or from agent user names or IP addresses configured on the server. Analysis type Assumes two values: Non-transaction and Transaction. This determines whether transactional (TCP-based) or non-transactional traffic is considered. Analyzer The name of the traffic analyzer. For more information, see Concept of Protocol Analyzers. Analyzer group The logical group of analyzers based on the type of the analyzed traffic. For more information, see Concept of Protocol Analyzers. Application A universal container that can accommodate transactions. Baseline Source Baseline source. Two possible values: pinned or average. Business hour The classification of hours, as business and non-business, as defined in the Business Hours Configuration tool. Possible values are Business and Off-business. Class of service The name identifying a Type of Service value. The mapping of Class of Service names to different values of Type of Service is defined in Central Analysis Server configuration.
Client area Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas. Client group The client's group, or a group of Backbone nodes in case of Synthetic Backbone reports, as manually defined in Central Analysis Server.

Client region Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas. Client site Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions, clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas. Client site description The optional description of the client site, or a Backbone node in case of Synthetic Backbone reports. Client site ID In cases where sites are ASes, Client Site ID contains the AS number, which is also given in Client ASN, or Backbone node ASN in case of Synthetic Backbone reports. For manual sites, Client Site ID is identical to Client site, and contains the site name as defined in your site configuration. Sites based on CIDR blocks or subnets are identified by IP addresses. Client site type One of the site types: AS, Active, CIDR Block, Default, External, Manual, Network or Predefined. External is a site defined by a user in external configuration files. A Manual site is defined by a user by means of the configuration interface on the report server. Predefined sites are based on a mapping contained in a special configuration file. Client site UDL A dimension designed to filter only the User Defined Links. By default it is set to true (Yes) for the WAN Optimization Sites report. Client site WAN Optimized Link Indicates whether a site to which the client belongs is selected as both a UDL and a WAN optimized link.
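The site/area/region hierarchy described above, where sites are the smallest grouping, areas are composed of sites, and regions of areas, can be pictured as nested lookups. The site, area, and region names below are invented purely for illustration:

```python
# Hypothetical illustration of the client site < area < region hierarchy.
# All names and mappings here are invented examples, not product data.
SITE_TO_AREA = {
    "Chicago-DC": "Midwest",
    "Detroit-Office": "Midwest",
    "Austin-Office": "South",
}
AREA_TO_REGION = {
    "Midwest": "North America",
    "South": "North America",
}

def region_of(site):
    """Resolve a site name up the hierarchy to its region."""
    return AREA_TO_REGION[SITE_TO_AREA[site]]

print(region_of("Chicago-DC"))  # → North America
```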
Data source The name of the data source in case you have configured a number of associated report servers to be used as data sources on the DMI screen. Is duplicated? Determines whether an entity is duplicated. Is front-end tier? Indicates whether a given tier is a front-end tier for a selected application. Is operation name cut? Indicates whether a presented operation name was cut by the AMD. Possible values: Yes, No. Link alias A custom name created by a user for a selected link.

Link group An element of the links hierarchy tree. May contain separate links or other link groups. Link group level The hierarchy level on which a link group resides. The dimension differentiates only between two states: a link/group can either be on Level 1, or on any level other than Level 1. Link monitor Link information source (Network Monitoring Probe, Flow Collector, AMD). Link name A link name, as reported by the information source (Network Monitoring Probe, Flow Collector, AMD). Link type The type of a monitored link, for example Ethernet or Frame Relay. Miscellaneous parameter 1 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on AMD. Miscellaneous parameter 2 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on AMD. Miscellaneous parameter 3 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on AMD. Miscellaneous parameter 4 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on AMD. Miscellaneous parameter 5 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on AMD. Miscellaneous parameter 6 One of the miscellaneous parameters retrieved from a base HTTP request or response. Requires configuration on AMD. Module Module is the third level in the reporting hierarchy. For example, in database monitoring this is the database name, and in SOAP monitoring this is the SOAP service. This entity can be broken down into smaller units, such as tasks. Network tier site selector Displays only those client sites that are assigned to the selected network tier. Operation For HTTP, this is the URL of the base page to which the hit belongs. For other analyzers, this can be a query, an operation type, or an operation status.
Operation is ascertained by the AMD based on the referrer, timing relations between hits, and per-transaction monitoring configured on the AMD. This dimension can assume values of a particular operation, if that operation is monitored. Note: The visibility of this dimension on reports depends on

whether another dimension related to servers - e.g. server IP or server DNS - has been used when formulating the query. The All other operations record serves as a catch-all for all the traffic that has been seen to or from a server but was not classified as belonging to a specific monitored-by-name operation. It accounts for statistics of: operations which were not reported in per-operation records (for example, those that fall outside the top-N reported operations for a specific analyzer) - in such a case the number of operations and slow operations, as well as operation time and other transactional statistics, will be reported as an aggregate or average; traffic which was not classified to any operation (for example, idle TCP session closure, or a TCP handshake without any operation) - in such a case only volumetric statistics (bytes, packets) will be reported for this specific traffic. Operation (incl. whole) For HTTP, this is the URL of the base page to which the hit belongs. For other analyzers this can be a query, an operation type, or an operation status. Operation is ascertained by the AMD based on the referrer, timing relations between hits, and per-transaction monitoring configured on the AMD. This dimension can assume values of a particular operation, if that operation is monitored. Note: The visibility of this dimension on reports depends on whether another dimension related to servers - e.g. server IP or server DNS - has been used when formulating the query. Compare "Operation". Operation (not aliased) The original operation name as seen in the traffic. Physical link name The first of two segments in a link name. Protocol The IP protocol name. Reporting group A reporting group is a universal container that can accommodate software services, servers, URLs, or any combination of these. Reporting groups can contain software services of every type.
Advanced Diagnostics Server can import the reporting group configuration from the Central Analysis Server. Request method The HTTP request type: GET or POST. SAP operation order The sequence number of a step, used to set the relation between operations and tasks. It is unique only within the scope of a given task. Script name The name of the simple parser script. Server aggregated name The aggregated name of the server. Server area Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and optionally on clients' BGP Autonomous System

names, CIDR blocks, or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas. Server city Geographical data about the server site. Server country Geographical data about the server site. Server geographical region Geographical data about the server site. Server IP address The IP address of the server. Server name The name of the server resolved by a DNS server. Server region Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas. Server site Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' Autonomous System names. Sites are the smallest logical structures that comprise clients and servers. Areas are composed of sites, and regions are composed of areas. Server site description An optional description of the server site. Server site ID In cases when sites are ASes, Server site ID contains the AS number, which is also given in Server ASN. For manual sites, Server site ID is identical to Server site, and contains the site name as defined in your site configuration. Sites based on CIDR blocks or subnets are identified by IP addresses. Server site WAN Optimized Link Indicates whether a site to which the server belongs is selected as both a UDL and a WAN optimized link. Service Service is the highest level of the multi-level reporting hierarchy. For example, in SAP GUI monitoring this is the business process. This entity can be broken down into smaller units, such as modules.
Software service The software service name, where a software service is understood as a service implemented by a specific piece of software, offered on a TCP or UDP port of one or more servers and identified by a particular TCP port number. Software service type The type of the software service: autodiscovered or user-defined. Storage source The source of the stored data.

Task Task is the second level in the reporting hierarchy. For example, in HTTP monitoring this is the page name; in database monitoring this is the operation name (which may contain a regular expression, if configured on the AMD) or the operation type prefix; and in SOAP monitoring this is the SOAP method. This entity can be broken down into smaller units, such as operations or operation types. Tier A specific point of the application where we measure data. It can be a specific traffic type or a server. Tier sequence number The sequence number of a tier is determined by the order in which you define your tiers, and these numbers in turn determine the order in which data is displayed on the report. Tier type The type of a tier can be one of the following: client, network, or data center. Time The time stamp of the data presented on the report. TOS-binary A traffic identifier contained in an 8-bit field in the IP packet header. The contents of this field can be detected by the AMD and displayed in reports. The use of this field is software service specific; it is used by software services to denote special types of traffic. TOS-decimal A traffic identifier contained in an 8-bit field in the IP packet header. The contents of this field can be detected by the AMD and displayed in reports. The use of this field is software service specific; it is used by software services to denote special types of traffic. Traffic type The type of client traffic: real or synthetic (that is, generated by a synthetic agent). Transaction A universal container that can accommodate operations. This metric refers only to transactions without errors. Transaction step The step as configured in a transaction definition. Step configuration is built on DCRUM data using operations, tasks, modules, or services. Steps are contained within transactions and carry the entire transaction configuration.
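As an illustration of the TOS-binary and TOS-decimal dimensions described above, the following Python sketch (not part of the product) shows that both are renderings of the same 8-bit IP header field; the DSCP extraction follows RFC 2474, where the top six bits of the field form the code point.

```python
# Illustrative sketch: TOS-binary and TOS-decimal are two views of one byte.
def tos_views(tos_byte: int) -> dict:
    """Return the decimal and binary renderings of an 8-bit TOS value."""
    if not 0 <= tos_byte <= 255:
        raise ValueError("TOS is an 8-bit field: 0-255")
    return {
        "TOS-decimal": tos_byte,
        "TOS-binary": format(tos_byte, "08b"),
        # Per RFC 2474, the high-order 6 bits form the DSCP code point.
        "DSCP": tos_byte >> 2,
    }

print(tos_views(184))  # {'TOS-decimal': 184, 'TOS-binary': '10111000', 'DSCP': 46}
```

Here 184 is a hypothetical value corresponding to DSCP 46 (Expedited Forwarding), a common marking for VoIP traffic.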
Transaction step sequence number The sequence number of a step is used for presentation purposes. It marks the order of a particular step in a transaction configuration. You can order steps within each transaction if such an ordering makes sense for the overall monitored application paradigm. The transaction step sequence does not affect data aggregation. URL host The domain name, including the port number (if it is reported by the AMD). URL path The request path that points to a Web resource, including the first forward slash. URL protocol The protocol that is used to locate the resource (HTTP or HTTPS).
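The URL host, URL path, and URL protocol dimensions above decompose a monitored URL into three parts. A minimal Python sketch of that decomposition, using a hypothetical URL purely for illustration:

```python
from urllib.parse import urlsplit

# Hypothetical URL used only to illustrate the three URL dimensions.
parts = urlsplit("https://shop.example.com:8443/cart/checkout?step=2")

url_protocol = parts.scheme.upper()  # URL protocol: HTTP or HTTPS
url_host = parts.netloc              # domain name, including the port number
url_path = parts.path                # request path, including the first slash

print(url_protocol, url_host, url_path)  # HTTPS shop.example.com:8443 /cart/checkout
```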

VC link name The second of two segments in a link name. Software service, operation, and site baselines metrics % of bad delay calls The percentage of VoIP calls with delay above the acceptable level. % of bad jitter calls The percentage of VoIP calls with jitter exceeding the acceptable level. % of bad lost packets calls The percentage of VoIP calls with a loss rate above the acceptable level. % of bad MOS calls The percentage of VoIP calls with a Mean Opinion Score (MOS) rating below the acceptable threshold. % of bad R-factor calls The percentage of VoIP calls with an R-factor value below the acceptable value. Aborted page Application Delivery Channel Delay Application Delivery Channel Delay in tenths of milliseconds for an aborted page. Aborted page HTTP server time The time difference between the response and request for an aborted page. Aborted page image server time The average combined image server time (HTTP) for an aborted page. Aborted page redirect time The average redirect time per operation (this includes operations with no redirects). Aborted page request size The average aborted client request size (GET or POST). Aborted page request time The average time from the first client SYN packet to the last request packet for an aborted operation. Aborted page server delay The time spent on the server during one operation. Aborted page size in bytes The average aborted operation size in bytes. Aborted page SSL setup time The average SSL setup time per operation. Aborted page TCP connect time The average TCP connect time for an aborted operation. Counted as an average of TCP connection times. A single TCP connection time is counted as the time difference between the first client SYN and the first client ACK. Aborted page transfer time The average time it took the server to send a response to the client, averaged over all the aborted operations in the monitoring interval.

251 Appendix A Central Analysis Server Data Views Aborts The number of operations aborted by the client. It applies to all TCP-based protocols. For example, for HTTP/HTTPS, it is the number of operations manually stopped by the user by either clicking on the Stop or Refresh buttons or selecting another URL. Note that, in the case of HTTP, this number includes Short aborts and Long aborts. Aggregate data center time This is a sum of all products: server time multiplied by the number of transactions. Application Delivery Channel Delay In WAN optimized scenario, Application Delivery Channel Delay (ADCD) is a quality metric represented in milliseconds. The ADCD is determined by initial observation of the traffic between a client and a server. ADCD is a derivative of RTT measured on a WAN link expressed in time and as such it can be understood as latency, where the larger ADCD would indicate a higher network latency. ADCD also includes time spent in the data center WOC for traffic buffering and processing. A change of ADCD from its initial value reflects a change of quality in WAN optimization service. For example, sudden increase of ADCD would suggest that the quality of the service has worsened and conversely, a sudden decrease of ADCD value could suggest an improvement in WAN optimization. Application Delivery Channel Delay (range 1) The number of operations whose ADCD value is within range 1 as defined in the RUM Console. Application Delivery Channel Delay (range 2) The number of operations whose ADCD value is within range 2 as defined in the RUM Console. Application Delivery Channel Delay (range 3) The number of operations whose ADCD value is within range 3 as defined in the RUM Console. Application Delivery Channel Delay (range 4) The number of operations whose ADCD value is within range 4 as defined in the RUM Console. 
Application Delivery Channel Delay (threshold 1 [max]) The highest value that was set for the upper bound of Application Delivery Channel Delay (range 1) during the reporting period. Application Delivery Channel Delay (threshold 1 [min]) The lowest value that was set for the upper bound of Application Delivery Channel Delay (range 1) during the reporting period. Application Delivery Channel Delay (threshold 2 [max]) The highest value that was set for the upper bound of Application Delivery Channel Delay (range 2) during the reporting period. Application Delivery Channel Delay (threshold 2 [min]) The lowest value that was set for the upper bound of Application Delivery Channel Delay (range 2) during the reporting period. Application Delivery Channel Delay (threshold 3 [max]) The highest value that was set for the upper bound of Application Delivery Channel Delay (range 3) during the reporting period.

Application Delivery Channel Delay (threshold 3 [min]) The lowest value that was set for the upper bound of Application Delivery Channel Delay (range 3) during the reporting period. Application health index The percentage of fast operations, calculated as "Fast Operations / (Failures + Operations) * 100%". Application Monitoring Flags One of 0, 1, or 2, meaning: 0 - no Application Monitoring servers, 1 - one Application Monitoring server, 2 - several Application Monitoring servers. Application Monitoring Hits The number of Application Monitoring hits. Application Monitoring Operations The number of Application Monitoring operations. Application Monitoring Server Errors The number of Application Monitoring server errors. Application performance For transactional protocols, this is the percentage of software service operations completed in a time shorter than the performance threshold. For SMTP and transactionless TCP-based protocols, this is the percentage of monitoring intervals in which user wait time per kb of data was shorter than the threshold value. Attempts The number of monitoring intervals during which attempts were made to connect to a server. Note that this is counted separately for each server, client, and software service. Thus, if in a given monitoring interval there are attempts to connect to three different servers, the Attempts metric will be incremented by three for that one monitoring interval. The actual value shown on the report is the sum total of all the attempts, for all the monitoring intervals, in the period covered by the report. Auto-discovery flags If a software service contains any autodiscovered traffic, the metric displays the green tick sign.
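The Application health index formula above can be spelled out as a small Python sketch; the counts are hypothetical and the function name is my own, but the formula is the one quoted in the definition:

```python
def application_health_index(fast_operations: int, failures: int, operations: int) -> float:
    """Application health index = Fast Operations / (Failures + Operations) * 100%."""
    total = failures + operations
    return 100.0 * fast_operations / total if total else 0.0

# Hypothetical interval: 930 of 1000 completed operations were fast, plus 50 failures.
print(round(application_health_index(930, 50, 1000), 1))  # 88.6
```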
Availability (application) Availability limited to the application context, calculated using the following formula: Availability (application) = 100% * (All Attempts - Failures (application)) / All Attempts, where All Attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure. Availability (TCP) Availability limited to the network context, calculated using the following formula: Availability (TCP) = 100% * (All Attempts - Failures (TCP)) / All Attempts, where All Attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure.

Availability (total) The percentage of successful attempts, calculated using the following formula: Availability (total) = 100% * (All Attempts - All Failures) / All Attempts, where All Attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure, and All Failures = all failures (transport) + all failures (TCP) + all failures (application). Availability (transport) Availability limited to the transport context, calculated using the following formula: Availability (transport) = 100% * (All Attempts - Failures (transport)) / All Attempts, where All Attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure. Bad delay calls The number of VoIP calls with delay above the acceptable level. Bad jitter calls The number of VoIP calls with jitter exceeding the acceptable level. Bad lost packets calls The number of VoIP calls with a loss rate above the acceptable level. Bad MOS calls The number of VoIP calls with a Mean Opinion Score (MOS) rating below the acceptable threshold. Bad R-factor calls The number of VoIP calls with an R-factor value below the acceptable value. Call attempts The number of call attempts, including successful and failed ones. Call duration The time the VoIP call took. Calls The total number of VoIP calls. Note that for a selected software service the number of calls as seen from the sites' perspective may differ from the number seen from the endpoints' perspective. This is because in one site we may have two users taking part in the same call. Calls finished with termination error The number of calls that finished with a termination error. Calls not started due to remote peer The number of calls that could not start due to a remote peer. Calls with error during begin phase The number of calls affected by errors occurring during the begin phase.
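The availability formulas above can be checked with a small worked example in Python; all of the counts below are hypothetical:

```python
# Hypothetical interval counts; none of the hits or aborts are classified as failures.
failures = {"transport": 4, "tcp": 6, "application": 10}
successes, standalone_hits, aborts = 960, 15, 5

all_failures = sum(failures.values())                                # 20
all_attempts = all_failures + successes + standalone_hits + aborts   # 1000

# Availability (total) and Availability (TCP) per the formulas above.
availability_total = 100.0 * (all_attempts - all_failures) / all_attempts
availability_tcp = 100.0 * (all_attempts - failures["tcp"]) / all_attempts
print(availability_total, availability_tcp)  # 98.0 99.4
```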
Client ACK RTT Client ACK RTT is the time it takes for an ACK packet with no payload to travel from the user to the AMD and back again.

Client ACK RTT + 2 stdv Client ACK RTT increased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client ACK RTT + stdv Client ACK RTT increased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client ACK RTT - 2 stdv Client ACK RTT decreased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client ACK RTT measurements This metric keeps track of how many Client ACK RTT measurements were made. ACK measurement is performed during ACK packet transmission from either the server or the client side of the transaction. Client ACK RTT stdv The standard deviation for Client ACK RTT, calculated in relation to the selected baseline. Client ACK RTT - stdv Client ACK RTT decreased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client bandwidth usage The number of client bits per second. Client bytes The number of bytes sent by the clients. Note that this includes headers. Client loss rate The percentage of total packets sent by a client that were lost (due to network congestion, low router queue capacity, or other reasons) and needed to be retransmitted. Client loss rate (AMD to server) The percentage of total packets sent by a client that were lost - between the AMD and the server - and needed to be retransmitted. Client loss rate (client to AMD) The percentage of total packets sent by a client that were lost - between the client and the AMD - and needed to be retransmitted.
Client loss rate + 2 stdv Client loss rate increased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client loss rate + stdv Client loss rate increased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Client loss rate - 2 stdv Client loss rate decreased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client loss rate stdv The standard deviation for client loss rate, calculated in relation to the selected baseline. Client loss rate - stdv Client loss rate decreased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client not responding errors The number of errors of category Client not responding. Errors of this category occur when the server closes the TCP session with a RESET packet after the client has been idle for too long. Such a situation happens when the server TCP/IP stack detects that a network connection to the client exists, but the client remains idle and does not respond. In such a case, the server closes the TCP session with a RESET packet. This may occur when the client has been silently disconnected from the network, for example due to link failure, or when the client has crashed. Note that this error will not occur if the client session has ended gracefully, that is, by closing the client application. Client operations The number of operations (for HTTP/SSL this is equivalent to the number of pages; for DB/2 it is equivalent to the number of queries) from the client side. For traffic analyzed with the General-volume and ICA (Citrix) analyzers, this is the number of client data transfers for which network realized bandwidth was measured. Client operation size The size of a client operation. Note: an operation can be split over several packets. For traffic parsed with the HTTP and SSL decrypted analyzers, Client operation size is the size in bytes of the operation request (HTTP GET or POST).
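The "+ stdv" and "- 2 stdv" band metrics above are the average of the individual measurements shifted by one or two standard deviations of their spread. A Python sketch of that calculation over a hypothetical set of Client ACK RTT samples:

```python
from statistics import mean, pstdev

# Hypothetical Client ACK RTT samples (ms) from one monitoring interval.
rtt_samples_ms = [40, 42, 38, 41, 39, 60, 40]

avg = mean(rtt_samples_ms)
stdv = pstdev(rtt_samples_ms)  # deviation from the spread of the individual values
bands = {
    "avg": avg,
    "+1 stdv": avg + stdv,
    "+2 stdv": avg + 2 * stdv,
    "-1 stdv": avg - stdv,
    "-2 stdv": avg - 2 * stdv,
}
print({k: round(v, 1) for k, v in bands.items()})
```

One outlier (60 ms) widens the bands noticeably, which is exactly what these metrics are meant to surface on reports.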
Client packets The number of packets sent by the clients. Client packets/sec The number of packets per second sent by the clients. Client packet size The average size of the client-originating packets (in bytes), including headers. Client packets lost (AMD to server) The number of packets sent by a client that were lost - between the AMD and the server - and needed to be retransmitted. Client packets lost (client to AMD) The number of packets sent by a client that were lost - between the client and the AMD - and needed to be retransmitted. Client realized bandwidth Client realized bandwidth refers to the actual transfer rate of client data when the transfer attempt occurred, taking into account factors such as loss rate (retransmissions). Thus, it is the size of an actual transfer divided by the transfer time.
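The realized-bandwidth definition above reduces to a simple division; here is a Python sketch with a hypothetical transfer (retransmissions lengthen the transfer time, which is how loss shows up in the result):

```python
def realized_bandwidth_bps(transfer_bytes: int, transfer_seconds: float) -> float:
    """Realized bandwidth in bits per second: actual transfer size over transfer time."""
    return transfer_bytes * 8 / transfer_seconds

# Hypothetical transfer: 500 kB of client data delivered in 2.5 seconds.
print(realized_bandwidth_bps(500_000, 2.5))  # 1600000.0 bps, i.e. 1.6 Mbps
```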

Client RTT Client RTT is the time it takes for a SYN packet (sent by a server) to travel from the AMD to the client and back again, as shown in the following figure. [Figure: TCP handshake timeline between Client, AMD, and Server, with the SYN, SYN ACK, and ACK packets observed at times T1 through T9.] A client RTT measurement begins when the SYN ACK packet from the server to the client passes by the AMD (T5). The packet reaches the client machine (T6) and is processed, while an acknowledgment is sent back to the server (T7). Client processing time impact (T7-T6) is again very low. The client RTT measurement ends when the ACK packet reaches the AMD (T8). Therefore, the Client Round Trip Time is calculated as T8-T5. Depending on the actual setup, Client RTT measurements may vary dramatically. In corporate environments, it may be a few milliseconds for LAN-connected clients or a couple dozen milliseconds for WAN-connected clients. In the case where the client is coming from the Internet, the end-to-end Client RTT measurement is a compound of transit time through the Internet backbone as well as through the "last mile" access network. The impact of the last mile can be easily calculated, based on the connection speed and the packet size (56 B in the case of a TCP SYN packet). For a 28 kbps dial-up connection, this amounts to 16 milliseconds one way, or 32 milliseconds for a complete round-trip measurement. For a 1.6 Mbps DSL line, this makes 56 microseconds towards the complete client RTT measurement. Client RTT (range 1) The number of operations whose client RTT value is within range 1 as defined in the RUM Console. Client RTT (range 2) The number of operations whose client RTT value is within range 2 as defined in the RUM Console. Client RTT (range 3) The number of operations whose client RTT value is within range 3 as defined in the RUM Console. Client RTT (range 4) The number of operations whose client RTT value is within range 4 as defined in the RUM Console.
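The dial-up arithmetic in the Client RTT description above (a 56-byte SYN packet on a 28 kbps link) can be reproduced directly:

```python
# Serialization delay of a 56-byte TCP SYN packet on a 28 kbps last-mile link,
# matching the dial-up example in the Client RTT definition above.
PACKET_BYTES = 56
LINK_BPS = 28_000  # 28 kbps dial-up

one_way_ms = PACKET_BYTES * 8 / LINK_BPS * 1000
round_trip_ms = 2 * one_way_ms
print(one_way_ms, round_trip_ms)  # 16.0 32.0
```

Substituting a faster link speed for `LINK_BPS` shows why the last-mile contribution becomes negligible on broadband connections.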
Client RTT (threshold 1 [max]) The highest value that was set for the upper bound of Client RTT (range 1) during the reporting period. Client RTT (threshold 1 [min]) The lowest value that was set for the upper bound of Client RTT (range 1) during the reporting period.

Client RTT (threshold 2 [max]) The highest value that was set for the upper bound of Client RTT (range 2) during the reporting period. Client RTT (threshold 2 [min]) The lowest value that was set for the upper bound of Client RTT (range 2) during the reporting period. Client RTT (threshold 3 [max]) The highest value that was set for the upper bound of Client RTT (range 3) during the reporting period. Client RTT (threshold 3 [min]) The lowest value that was set for the upper bound of Client RTT (range 3) during the reporting period. Client RTT + 2 stdv Client RTT increased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client RTT + stdv Client RTT increased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client RTT - 2 stdv Client RTT decreased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client RTT stdv The standard deviation for client RTT calculated in relation to the selected baseline. Client RTT - stdv Client RTT decreased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Client TCP data packets The total number of TCP packets sent by the clients, excluding the traffic control packets. Client TCP data packets lost The number of lost TCP data packets sent by the clients, excluding the traffic control packets. The number of lost TCP packets always regards the context of the counter, for example, an application, a server or any other entity.
Closed TCP connections The total number of successful or failed TCP connections. Connection establishment timeout errors The number of TCP errors of category Connection establishment timeout errors. This category of errors applies when there was no response from the server to the SYN packets transmitted by the client.

Connection refused errors The number of TCP errors of category Connection refused errors, also referred to as Session establishment errors. This category of errors applies when a server rejects a request from a client to open a TCP session. Such a situation usually happens when the server runs out of resources, either due to operating system kernel configuration or lack of memory. Custom metric (1)(avg) The average value of user-defined metrics in category 1 observed in the HTTP or XML traffic. Custom metric (1)(cnt) The number of occurrences of user-defined metrics in category 1 observed in the HTTP or XML traffic. Custom metric (1)(sum) The sum of all values of user-defined metrics in category 1 observed in the HTTP or XML traffic. Custom metric (2)(avg) The average value of user-defined metrics in category 2 observed in the HTTP or XML traffic. Custom metric (2)(cnt) The number of occurrences of user-defined metrics in category 2 observed in the HTTP or XML traffic. Custom metric (2)(sum) The sum of all values of user-defined metrics in category 2 observed in the HTTP or XML traffic. Custom metric (3)(avg) The average value of user-defined metrics in category 3 observed in the HTTP or XML traffic. Custom metric (3)(cnt) The number of occurrences of user-defined metrics in category 3 observed in the HTTP or XML traffic. Custom metric (3)(sum) The sum of all values of user-defined metrics in category 3 observed in the HTTP or XML traffic. Custom metric (4)(avg) The average value of user-defined metrics in category 4 observed in the HTTP or XML traffic. Custom metric (4)(cnt) The number of occurrences of user-defined metrics in category 4 observed in the HTTP or XML traffic. Custom metric (4)(sum) The sum of all values of user-defined metrics in category 4 observed in the HTTP or XML traffic.

Custom metric (5)(avg) The average value of user-defined metrics in category 5 observed in the HTTP or XML traffic. Custom metric (5)(cnt) The number of occurrences of user-defined metrics in category 5 observed in the HTTP or XML traffic. Custom metric (5)(sum) The sum of all values of user-defined metrics in category 5 observed in the HTTP or XML traffic. Database errors The number of database errors in the database analyzer. For TDS, which includes Sybase and MS SQL Server, any severity value from the following list is considered an error. For MySQL, if an ERR_Packet is returned, the error count is incremented. An error with a severity level of 19 or higher stops the execution of the current SQL batch, and the error message is written to the error log.
Errors that can be corrected by the user:
11: The given object or entity does not exist.
12: SQL statements that do not use locking because of special options. In some cases, read operations performed by these SQL statements could result in inconsistent data, because locks do not guarantee consistency.
13: Transaction deadlock errors.
14: Security-related errors, such as permission denied.
15: Syntax errors in the SQL statement.
16: General errors that can be corrected by the user.
Software errors that cannot be corrected by the user and that require system administrator action:
17: The SQL statement caused the database server to run out of resources (such as memory, locks, or disk space for the database) or to exceed some limit set by the system administrator.
18: There is a problem in the database engine software, but the SQL statement completes execution, and the connection to the instance of the database engine is maintained. System administrator action is required.
19: A non-configurable database engine limit has been exceeded and the current SQL batch has been terminated.
System problems:
20-25: Fatal errors, meaning that the database engine task that was executing a SQL batch is no longer running. The task records information about what occurred and then terminates. In most cases, the application connection to the instance of the database engine also terminates. If this happens, depending on the problem, the application might not be able to reconnect.
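The TDS severity levels described above fall into three categories. A Python sketch of that classification; the level boundaries follow the text, while the function name and category labels are my own:

```python
def classify_tds_severity(level: int) -> str:
    """Bucket a TDS severity level into the categories described above."""
    if 11 <= level <= 16:
        return "user-correctable error"
    if 17 <= level <= 19:
        return "software error (administrator action required)"
    if 20 <= level <= 25:
        return "fatal error (system problem)"
    return "not counted as an error"

print(classify_tds_severity(13))  # user-correctable error (deadlock)
print(classify_tds_severity(22))  # fatal error (system problem)
```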

Database warnings The number of database warnings in the database analyzer: For TDS, which includes Sybase and MS SQL Server, this count will always be zero. TDS does not track anything as a warning. For MySQL, if an OK_Packet is returned, the warning count value in that packet is checked and the total warning field is updated with the returned number. Delay Data transfer delay on a data center device, such as a load balancer or firewall. Discarded aborts The number of aborted hits without a valid server response. DNS errors The number of DNS errors. DNS format errors The number of DNS format errors (DNS error code 1). DNS name errors The number of DNS name errors (DNS error code 3). DNS not implemented errors The number of DNS not-implemented errors (DNS error code 4). DNS other errors The number of DNS errors with error codes from 6 to 15, that is, errors that do not fall into any of the following categories: DNS format errors (error code 1), DNS server failure errors (error code 2), DNS name errors (error code 3), DNS not implemented errors (error code 4), DNS refused errors (error code 5), DNS timeouts. DNS refused errors The number of DNS refused errors (DNS error code 5). DNS server failure errors The number of DNS server failure errors (DNS error code 2). DNS timeouts The number of DNS timeout errors. End-to-end ACK RTT The time it takes for an ACK packet to travel from a client to the monitored server and back again. End-to-end RTT The time it takes for a SYN packet to travel from the client to a monitored server and back again. End-to-end RTT + stdv End-to-end RTT increased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average. Error indicator The number of error indicators.
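The DNS error-code bucketing described above (codes 1 through 5 mapped to named categories, 6 through 15 collected under "DNS other errors") can be sketched in Python:

```python
# Named DNS error categories by response code, per the definitions above.
DNS_ERROR_CATEGORIES = {
    1: "DNS format errors",
    2: "DNS server failure errors",
    3: "DNS name errors",
    4: "DNS not implemented errors",
    5: "DNS refused errors",
}

def dns_error_category(rcode: int) -> str:
    """Map a DNS response code to its error-category metric."""
    if rcode in DNS_ERROR_CATEGORIES:
        return DNS_ERROR_CATEGORIES[rcode]
    if 6 <= rcode <= 15:
        return "DNS other errors"
    return "not an error"  # rcode 0 is a successful response

print(dns_error_category(3))  # DNS name errors
print(dns_error_category(9))  # DNS other errors
```

DNS timeouts are counted separately because a timed-out query produces no response code at all.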

Excluded operations
The number of operations for which the operation time was above a safety threshold. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.

Failures (application)
The number of operation attributes of all types set to be reported as an application failure.

Failures (TCP)
The total number of operations that failed due to Connection refused or Connection establishment timeout errors.

Failures (total)
The total number of failures, that is, all Failures (transport) + all Failures (TCP) + all Failures (application).

Failures (transport)
The number of operations that failed due to problems in the transport layer. These include protocol errors, SSL alerts classified as a failure, and incomplete responses selected to be classified as failures.

Fast operations
The number of operations for which the operation time was below a predefined threshold value. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.

Fatal error
The number of fatal errors.

Forms application error
The number of Oracle Forms application errors that occurred during a given operation. For a complete list of specific Forms application errors, refer to the RUM Console Help.

Forms client error
The number of Oracle Forms client errors that occurred during a given operation. For a complete list of specific Forms client errors, refer to the RUM Console Help.
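The relationship stated for Failures (total) is a simple sum of the three failure categories. A minimal sketch (function and parameter names are ours, not the product's):

```python
def failures_total(transport, tcp, application):
    """Failures (total) = Failures (transport) + Failures (TCP) + Failures (application),
    as defined in the glossary above."""
    return transport + tcp + application
```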
Forms server error
The number of Oracle Forms server errors that occurred during a given operation. For a complete list of specific Forms server errors, refer to the RUM Console Help.

Hits
The number of subcomponents of error-free operations. Note that this metric is recorded at the time when the monitored operations are closed; in the case of HTTP, this is when the whole page has been loaded. Compare "Hits (started)". For example, when the user issues an HTTP GET, a "Hit (started)" is reported immediately, whereas when the whole page is loaded and the operation is closed, it is reported as a "Hit".

Hits (range 1)
The number of operations whose hit count is within range 1 as defined in the RUM Console.

Hits (range 2)
The number of operations whose hit count is within range 2 as defined in the RUM Console.

Hits (range 3)
The number of operations whose hit count is within range 3 as defined in the RUM Console.

Hits (range 4)
The number of operations whose hit count is within range 4 as defined in the RUM Console.

Hits (started)
The number of subcomponents of operations. Unlike the "Hits" metric, "Hits (started)" is recorded immediately, not at the end of an operation. For example, when the user issues an HTTP GET, a "Hit (started)" is reported immediately, whereas when the whole page is loaded and the operation is closed, it is reported as a "Hit".

Hits (succeeded)
The number of hits that did not generate client or server errors (replies in the 2xx and 3xx ranges).

Hits (threshold 1 [max])
The highest value that was set for the upper bound of Hits (range 1) during the reporting period.

Hits (threshold 1 [min])
The lowest value that was set for the upper bound of Hits (range 1) during the reporting period.

Hits (threshold 2 [max])
The highest value that was set for the upper bound of Hits (range 2) during the reporting period.

Hits (threshold 2 [min])
The lowest value that was set for the upper bound of Hits (range 2) during the reporting period.

Hits (threshold 3 [max])
The highest value that was set for the upper bound of Hits (range 3) during the reporting period.

Hits (threshold 3 [min])
The lowest value that was set for the upper bound of Hits (range 3) during the reporting period.

Hits per operation
The number of hits per operation.

HTTP client errors (4xx)
The sum of all HTTP client errors (4xx). This includes four categories of errors: by default, HTTP Unauthorized (401, 407) errors, HTTP Not Found (404) errors, custom client (4xx) errors, and Other HTTP (4xx) errors. The contents of the first three categories can be configured by users.
However, there are two types of 4xx errors that are of particular importance: 401 errors, related to server-level authentication, and 404 errors, indicating requests for non-existent content. These two error types are reported separately, by specific metrics.

401 Unauthorized - The server reports this error when the user's credentials supplied with the request do not satisfy the page access restrictions. The HTTP server layer, not the application layer, reports 401 errors. The AMD will report on "Unauthorized" errors only if server-level authentication has been configured. This is common practice for sites that are comfortable with very basic user access policies. Most commercial-grade applications (for example, most online banking applications or online shopping sites) do not rely on server-level authentication, but rather authenticate users at the application layer. In such a case, even if authentication fails, the server will typically send a 200 OK response and the authentication error will be explained in the page content. So this kind of error is not very common on commercial sites.

404 Not Found - The server reports "Not Found" errors when it cannot fulfill a client request for a resource. Usually this happens due to a malformed URL, which points to a non-existent page or image. Such a URL request may result from a user who misspelled the URL, tried to access a URL stored in the "Favorites" folder a long time ago, or made some other mistake. Malformed URLs may also exist in invalid or incorrectly designed Web pages, so the error will be reported by browsers trying to load such a page. A significant and constant number of these errors usually indicates that some pages on the server have design-related or link validation issues. In some cases, 404 errors result from server overload. It is good practice to check whether the percentage of errors is load-related.

HTTP client errors - category 3 (default name)
The number of HTTP custom client errors (4xx). By default, there is no specific error type assigned here.

HTTP errors
The number of observed HTTP client errors (4xx) and server errors (5xx).

HTTP not found errors 404 (default name)
The number of observed custom HTTP 404 Not Found errors.

HTTP other client errors (4xx)
The number of HTTP other client errors (4xx).
There are four categories of HTTP client errors (4xx), of which three can be configured by users. By default, the first category includes HTTP Unauthorized (401, 407) errors and the second category includes HTTP Not Found (404) errors. The third category contains no default error types and can be configured by a user. Finally, the group of HTTP Other (4xx) errors contains all errors that do not fall into any other client error category. The number is calculated based on the formula: [HTTP errors 4xx] - [HTTP Not Found errors 404] - [HTTP Unauthorized errors (401, 407)] - [HTTP errors configured by user].

HTTP other server errors (5xx)
The number of HTTP server errors (5xx) that do not fall into categories 1 or 2 of custom HTTP server errors (5xx).

HTTP redirect time
The average amount of time that was spent between the time when a user went to a particular URL and the time this user was redirected to another URL and issued a request to that new URL. The HTTP redirect time refers only to the transactions for which redirection actually took place.
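The formula given above for HTTP other client errors (4xx) can be written out directly. An illustrative sketch (the function and parameter names are ours):

```python
def http_other_client_errors(errors_4xx, not_found_404, unauthorized_401_407, user_configured):
    """Other (4xx) = [all 4xx errors] - [404 category] - [401/407 category]
    - [user-configured category], per the formula above."""
    return errors_4xx - not_found_404 - unauthorized_401_407 - user_configured
```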

HTTP response time (range 1)
The number of operations whose HTTP response time is within range 1 as defined in the RUM Console.

HTTP response time (range 2)
The number of operations whose HTTP response time is within range 2 as defined in the RUM Console.

HTTP response time (range 3)
The number of operations whose HTTP response time is within range 3 as defined in the RUM Console.

HTTP response time (range 4)
The number of operations whose HTTP response time is within range 4 as defined in the RUM Console.

HTTP response time (threshold 1 [max])
The highest value that was set for the upper bound of HTTP response time (range 1) during the reporting period.

HTTP response time (threshold 1 [min])
The lowest value that was set for the upper bound of HTTP response time (range 1) during the reporting period.

HTTP response time (threshold 2 [max])
The highest value that was set for the upper bound of HTTP response time (range 2) during the reporting period.

HTTP response time (threshold 2 [min])
The lowest value that was set for the upper bound of HTTP response time (range 2) during the reporting period.

HTTP response time (threshold 3 [max])
The highest value that was set for the upper bound of HTTP response time (range 3) during the reporting period.

HTTP response time (threshold 3 [min])
The lowest value that was set for the upper bound of HTTP response time (range 3) during the reporting period.

HTTP server errors (5xx)
The number of observed HTTP server errors (5xx). The 5xx response status codes indicate cases in which the Web server is aware that there was a server error or is incapable of performing the request. The presence of such errors usually means that the Web server does not function as intended. The following 5xx errors are defined by the HTTP protocol standards:
500 Internal Server Error - The server encountered an unexpected condition which prevented it from fulfilling the request.
501 Not Implemented - The server does not support the functionality required to fulfill the request.
502 Bad Gateway - The server received an invalid response from a back-end application server.
503 Service Unavailable - The server is currently unable to handle the request due to temporary overloading or maintenance of the server.
504 Gateway Timeout - The server did not receive a response from a back-end application server.
505 HTTP Version Not Supported - The server does not support the HTTP protocol version that was used in the request message.

HTTP server errors category 1 (default name)
The number of custom HTTP server errors (5xx), category 1. By default, there are no specific error types assigned to this category.

HTTP server errors category 2 (default name)
The number of custom HTTP server errors (5xx), category 2. By default, there are no specific error types assigned to this category.

HTTP server image time
The total amount of time it takes for images (non-HTML content) to be prepared for delivery.

HTTP unauthorized errors 401, 407 (default name)
The number of observed custom HTTP authentication-related errors. These include "HTTP 401 Unauthorized" and "HTTP 407 Proxy authentication required" errors. HTTP servers generate "401 Unauthorized" errors in cases when anonymous clients are not authorized to view the requested content and must provide authentication information (requested via the WWW-Authenticate response header). The 401 errors are similar to "403 Forbidden" errors, but are used when authentication is possible but has failed or has not yet been provided. The 407 error is basically similar to 401, but it indicates that the client should first authenticate with a proxy server. The AMD will report these errors only if server-level authentication has been configured. Simple and basic user access policies are common in Web sites that do not store user-sensitive and/or business-critical information.
Most commercial-grade applications based on HTTP, such as home banking applications or online shopping sites, rely on application-level authentication rather than server-level authentication. Such applications are designed so that even if user authentication fails, the HTTP server usually sends the 200 OK response code and the authentication error message in the page content. Therefore, the 401 Unauthorized and 407 Proxy authentication required error codes are quite rare in commercial environments.

Idle sessions
The number of idle TCP sessions that have not been active for a period of time longer than a predefined time-out, 5 minutes by default.

Idle time
The part of the operation time spent between receiving a part of the response and requesting a subsequent part. It enables you to isolate the time taken by the client from the time when the data was still being transmitted on the network.

Incomplete responses
The number of incomplete responses, that is, partial and server-aborted responses, as well as situations when a server did not respond to the request at all or responded in an unrecognizable way.

LAN-WAN byte ratio
The amount of compression performed, expressed as a percentage: 100% for pass-through; greater than 100% if there are more bytes on the WAN side, including both pass-through and optimized traffic; less than 100% if there are fewer bytes on the WAN side, including both pass-through and optimized traffic.

LDAP client error
The number of LDAP client errors: Time Limit Exceeded.

LDAP critical error
The number of LDAP critical errors: Busy, Unwilling To Perform, Operation Error, Size Limit Exceeded, Constraint Violation, Protocol Error, Invalid Attribute Syntax, Naming Violation, Invalid Credentials.

LDAP errors
The number of LDAP errors. LDAP errors are reported in the following categories: LDAP critical errors, LDAP server errors, LDAP security errors, LDAP syntax errors, and LDAP client errors.

LDAP security error
The number of LDAP security errors: Authentication Method Not Supported, Stronger Auth Required, Admin Limit Exceeded, Confidentiality Required, Inappropriate Authentication, Insufficient Access Rights, Object Class Mods Prohibited.

LDAP server error
The number of LDAP server errors: Unavailable, Loop Detect, Not Allowed On NonLeaf, Not Allowed On RDN, Affects Multiple DSAs, Other, Referral, Unavailable Critical Extension, SAS Binding Progress, No Such Object, Alias Problem, Invalid DN Syntax, Alias Dereferencing Problem.

LDAP syntax error
The number of LDAP syntax errors: Compare False, Compare True, No Such Attribute, Undefined Attribute Type, Inappropriate Matching, Attribute Or Value Exists, Object Class Violation, Entry Already Exists.

Long aborts
For HTTP, this is the number of operations manually stopped by the user, by either clicking the Stop or Refresh button or selecting another URL, after at least 8 seconds of waiting for the page download (8 seconds is the default). For XML, this is the number of transactions stopped after at least a threshold number of seconds of waiting (8 seconds is the default).
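The LAN-WAN byte ratio described above compares WAN-side bytes to LAN-side bytes. A minimal sketch, assuming the ratio is simply the WAN byte count over the LAN byte count expressed as a percentage (the function name is ours):

```python
def lan_wan_byte_ratio(wan_bytes, lan_bytes):
    """Return the WAN-to-LAN byte ratio as a percentage.

    Per the glossary: 100% means pass-through; above 100% means more
    bytes on the WAN side; below 100% means fewer bytes on the WAN side
    (i.e. compression/optimization reduced the WAN traffic).
    """
    return 100.0 * wan_bytes / lan_bytes
```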

Loss rate in
The percentage of total packets that were lost and needed to be retransmitted. Loss rate in refers to traffic to a server in a DataCenter configuration.

Loss rate out
The percentage of total packets that were lost and needed to be retransmitted. Loss rate out refers to traffic from a server in a DataCenter configuration.

Mail server welcome msg. time
The time after which the server welcome message was received.

Max slow operation threshold
The maximum slow operation threshold. If the operation time is longer than the threshold, the operation is considered to be slow.

Min slow operation threshold
The minimum slow operation threshold.

MQ appl. errors
The number of all API errors for IBM WebSphere Message Queue. This metric is the sum of MQ appl. errors (1) to (5).

MQ appl. errors (1)
The number of API errors of type 1 for IBM WebSphere Message Queue. Real User Monitoring distinguishes five types of API errors for IBM WebSphere Message Queue. The assignment of actual MQ return values to a particular error type number is configured on the AMD device, on a per-software-service and per-server basis. The error type numbers can then be mapped, on the Central Analysis Server, onto named error categories. This is performed, on a per-software-service basis, on Message Queue reports.

MQ appl. errors (2)
The number of API errors of type 2 for IBM WebSphere Message Queue. The error types are configured and mapped as described for MQ appl. errors (1).

MQ appl. errors (3)
The number of API errors of type 3 for IBM WebSphere Message Queue. The error types are configured and mapped as described for MQ appl. errors (1).

MQ appl. errors (4)
The number of API errors of type 4 for IBM WebSphere Message Queue. The error types are configured and mapped as described for MQ appl. errors (1).
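The "Loss rate in" and "Loss rate out" metrics above are straightforward percentages of retransmitted packets over total packets. As an illustrative sketch (the function name and zero-traffic handling are ours):

```python
def loss_rate(retransmitted_packets, total_packets):
    """Percentage of total packets that were lost and needed to be
    retransmitted, per the Loss rate in/out definitions above."""
    if total_packets == 0:
        return 0.0  # no traffic observed; report no loss (our assumption)
    return 100.0 * retransmitted_packets / total_packets
```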

MQ appl. errors (5)
The number of API errors of type 5 for IBM WebSphere Message Queue. The error types are configured and mapped as described for MQ appl. errors (1).

MQ client errors
The number of IBM WebSphere Message Queue client errors. This includes the following Message Queue errors: ERR_NO_CHANNEL (value 0x01), ERR_CHANNEL_WRONG_TYPE (value 0x02), ERR_MSG_SEQUENCE_ERROR (value 0x04), ERR_USER_CLOSED (value 0x07), ERR_TIMEOUT_EXPIRED (value 0x08), ERR_TARGET_Q_UNKNOWN (value 0x09), ERR_BATCH_FAILURE (value 0x11).

MQ errors
The total number of IBM WebSphere Message Queue errors, including client errors, server errors, protocol errors, and security errors.

MQ protocol errors
The number of IBM WebSphere Message Queue protocol errors. This includes the following Message Queue errors: ERR_PROTOCOL_SEGMENT_TYPE (value 0x0a), ERR_PROTOCOL_LENGTH_ERROR (value 0x0b), ERR_PROTOCOL_INVALID_DATA (value 0x0c), ERR_PROTOCOL_SEGMENT_ERROR (value 0x0d), ERR_PROTOCOL_ID_ERROR (value 0x0e), ERR_PROTOCOL_MSH_ERROR (value 0x0f), ERR_PROTOCOL_GENERAL (value 0x10), ERR_MESSAGE_LENGTH_ERROR (value 0x12), ERR_SEGMENT_NUMBER_ERROR (value 0x13), ERR_WRAP_VALUE_ERROR (value 0x15).

MQ security errors
The number of IBM WebSphere Message Queue security errors. This includes the following Message Queue errors: ERR_SECURITY_FAILURE (value 0x14), ERR_SSL_REMOTE_BAD_CIPHER (value 0x18).

MQ server errors
The number of IBM WebSphere Message Queue server errors.
This includes the following Message Queue errors: ERR_QM_UNAVAILABLE (value 0x03), ERR_QM_TERMINATING (value 0x05), ERR_CAN_NOT_STORE (value 0x06), ERR_CHANNEL_UNAVAILABLE (value 0x16), ERR_TERMINATED_BY_REMOTE_EXIT (value 0x17).

MS Exchange errors
The total number of RPC Server and RPC Protocol errors.

Network performance
The percentage of total traffic that did not experience network-related problems (traffic in which the values of loss rate and RTT did not exceed configured thresholds).

Network performance affected bytes
The volume of TCP traffic that did experience network-related problems. The traffic measured here includes both directions of data transfer, to and from the client (downstream and upstream), but does NOT include bytes transferred internally within the site. By network-related problems we mean excessive RTT or loss rate: at any given moment, traffic is considered to be experiencing network-related problems if, at that particular time, the values of loss rate or RTT exceed pre-configured thresholds. In situations when RTT measurements prove to be insufficient, ACK RTT may also become an additional criterion for determining network problems.

Network performance relevant bytes
The total volume of TCP traffic. Includes both directions of data transfer, to and from the client (downstream and upstream), but does NOT include bytes transferred internally within the site.

Network time
The time the network (between the user and the server) takes to deliver requests to the server and to deliver operation information back to the user. In other words, network time is the portion of the overall time that is due to the delivery time on the network.

Number of hits in an aborted operation
The number of hits in an aborted operation.

Operation (or network) throughput
The average aborted operation (or network) throughput in bytes per second.

Operation attributes
The number of operation attributes of all types (types 1 to 5), observed for the given software service.

Operation attributes (1)
The number of operation attributes of type 1, observed for the given software service.

Operation attributes (2)
The number of operation attributes of type 2, observed for the given software service.

Operation attributes (3)
The number of operation attributes of type 3, observed for the given software service.

Operation attributes (4)
The number of operation attributes of type 4, observed for the given software service.

Operation attributes (5)
The number of operation attributes of type 5, observed for the given software service.

Operation length
The number of packets contained in an average operation.
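The Network performance metric above relates affected bytes to relevant bytes. A hedged sketch of that relationship, assuming traffic is observed as (bytes, loss rate, RTT) samples and a sample is "affected" when either value exceeds its threshold (the data layout and names are ours, not the product's):

```python
def network_performance(samples, max_loss_rate, max_rtt):
    """Percentage of TCP traffic that did NOT experience network-related
    problems: (relevant bytes - affected bytes) / relevant bytes * 100.

    Each sample is a (byte_count, loss_rate, rtt) tuple; a sample counts
    as affected when its loss rate or RTT exceeds the configured threshold.
    """
    relevant = sum(b for b, _, _ in samples)
    if relevant == 0:
        return 100.0  # no traffic, no problems observed (our assumption)
    affected = sum(b for b, loss, rtt in samples
                   if loss > max_loss_rate or rtt > max_rtt)
    return 100.0 * (relevant - affected) / relevant
```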
Operation load time (range 1)
The number of operations whose load time is within range 1 as defined in the RUM Console.

Operation load time (range 2)
The number of operations whose load time is within range 2 as defined in the RUM Console.

Operation load time (range 3)
The number of operations whose load time is within range 3 as defined in the RUM Console.

Operation load time (range 4)
The number of operations whose load time is within range 4 as defined in the RUM Console.

Operation load time (threshold 1 [max])
The highest value that was set for the upper bound of Operation time (range 1) during the reporting period.

Operation load time (threshold 1 [min])
The lowest value that was set for the upper bound of Operation time (range 1) during the reporting period.

Operation load time (threshold 2 [max])
The highest value that was set for the upper bound of Operation time (range 2) during the reporting period.

Operation load time (threshold 2 [min])
The lowest value that was set for the upper bound of Operation time (range 2) during the reporting period.

Operation load time (threshold 3 [max])
The highest value that was set for the upper bound of Operation time (range 3) during the reporting period.

Operation load time (threshold 3 [min])
The lowest value that was set for the upper bound of Operation time (range 3) during the reporting period.

Operation percentage breakdown
Operation percentage breakdown into slow and fast operations.

Operation requests
The number of all operation requests, both requests that became successful operations and requests that were aborted by the client.

Operations
The number of operations. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.

Operations/min
The number of operations per minute.

Operations/sec
The number of operations per second.

Operations breakdown
Operation breakdown into numbers of slow and fast operations.

Operation size (range 1)
The number of operations whose byte count is within range 1 as defined in the RUM Console.
Operation size (range 2)
The number of operations whose byte count is within range 2 as defined in the RUM Console.

Operation size (range 3)
The number of operations whose byte count is within range 3 as defined in the RUM Console.

Operation size (range 4)
The number of operations whose byte count is within range 4 as defined in the RUM Console.

Operation size (threshold 1 [max])
The highest value that was set for the upper bound of Operation size (range 1) during the reporting period.

Operation size (threshold 1 [min])
The lowest value that was set for the upper bound of Operation size (range 1) during the reporting period.

Operation size (threshold 2 [max])
The highest value that was set for the upper bound of Operation size (range 2) during the reporting period.

Operation size (threshold 2 [min])
The lowest value that was set for the upper bound of Operation size (range 2) during the reporting period.

Operation size (threshold 3 [max])
The highest value that was set for the upper bound of Operation size (range 3) during the reporting period.

Operation size (threshold 3 [min])
The lowest value that was set for the upper bound of Operation size (range 3) during the reporting period.

Operations with breakdown
The number of operations with operation breakdown into numbers of slow and fast operations.

Operation time
The time it took to complete an operation. The term "operation" refers to an operation in the context of a particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations. Note that an operation can be split over several packets. For HTTP and HTTPS, it is equal to the redirect time plus the network time plus the server HTTP time plus the server think time.

Operation time + 2 stdv
The operation time increased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Operation time + stdv
The operation time increased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.
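The "Operation time + stdv" and "+ 2 stdv" metrics are the average plus one or two standard deviations of the individual values. A minimal sketch of that calculation, assuming a population standard deviation over the observed values (the function name is ours):

```python
import statistics

def operation_time_bands(operation_times):
    """Return (avg, avg + stdv, avg + 2*stdv) for a set of operation times.

    Per the glossary, the metric value is an average, and the standard
    deviation is computed from the spread of the individual values used
    to calculate that average.
    """
    avg = statistics.mean(operation_times)
    stdv = statistics.pstdev(operation_times)
    return avg, avg + stdv, avg + 2 * stdv
```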

Operation time - 2 stdv
The operation time decreased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Operation time breakdown
Operation time breakdown into the server time, the network time, the redirect time, and the other time.

Operation time percentage breakdown
The breakdown of the average value of operation time into percentages of the server time, the network time, the redirect time, and the other time.

Operation time stdv
The standard deviation for operation time calculated in relation to the selected baseline.

Operation time - stdv
The operation time decreased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Operation time with breakdown
The time it took to complete an operation, with an operation time breakdown into the server time, the network time, the idle time, and the other time.

Oracle Applications error
The number of Oracle Applications errors that occurred during a given operation. For a complete list of specific Oracle Applications errors, refer to the RUM Console Help.

Oracle Forms errors
The number of Oracle Forms errors that occurred during a given operation. For a complete list of specific Oracle Forms errors, refer to the RUM Console Help.

Oracle server error
The number of Oracle server errors that occurred during a given operation. For a complete list of specific Oracle server errors, refer to the RUM Console Help.

Orphaned redirects
The number of HTTP redirects for which a matching request to the target URL was not detected before the timeout time.

Other SSL errors (default name)
SSL alerts other than those for SSL errors 1 and SSL errors 2.
Other time
Part of the operation time, calculated as Operation time - Server time - Network time - Idle time.

Percentage of aborts
The percentage of transactions aborted by the client. It applies to all TCP-based protocols. For example, for HTTP, it is the number of operations manually stopped by the user by either clicking the Stop or Refresh button or selecting another URL. Note that, in the case of HTTP, this number includes both short aborts and long aborts.

Percentage of long aborts
The percentage of HTTP operations manually stopped by the user, by either clicking the Stop or Refresh button or selecting another URL, after a significant time of waiting for the page download. The default wait time classifying an abort as long is 8 seconds. This threshold value is configurable. The same threshold value is also used to determine whether an HTTP page was slow to load. Note that this metric applies exclusively to HTTP.

Percentage of network time
The network part of the transaction time, expressed as a percentage.

Percentage of optimized traffic (bytes)
Indicates the traffic distribution into two separate branches: optimized traffic and passed-through traffic. The higher the value, the more bytes are optimized. Low values may indicate poorly configured optimization or optimization device overload.

Percentage of short aborts
The percentage of HTTP operations manually stopped by the user, by either clicking the Stop or Refresh button or selecting another URL, before a significant time of waiting for the page download. The default wait time classifying an abort as long is 8 seconds. This threshold value is configurable. The same threshold value is also used to determine whether an HTTP page was slow to load. Note that this metric applies exclusively to HTTP.

Percentage of slow operations
The percentage of operations for which the operation time was above a predefined threshold value. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.

Percentage of TCP sessions w/errors
The percentage of TCP sessions with errors.

Primary Reason for Slowness
The primary reason for slowness is one of the general categories causing operations to be slow. The categories include data center, network, application design, client/3rd party, and multiple reasons.
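The "Other time" definition above is a simple subtraction over the operation time components. An illustrative sketch (function and parameter names are ours):

```python
def other_time(operation_time, server_time, network_time, idle_time):
    """Other time = Operation time - Server time - Network time - Idle time,
    per the glossary definition above. All values in the same time unit."""
    return operation_time - server_time - network_time - idle_time
```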
Primary Reason for Slowness details
The details of the primary reason for slowness.

Realized bandwidth (range 1)
The number of operations whose realized bandwidth is within range 1 as defined in the RUM Console.

Realized bandwidth (range 2)
The number of operations whose realized bandwidth is within range 2 as defined in the RUM Console.

Realized bandwidth (range 3)
The number of operations whose realized bandwidth is within range 3 as defined in the RUM Console.

Realized bandwidth (range 4)
The number of operations whose realized bandwidth is within range 4 as defined in the RUM Console.

Realized bandwidth (threshold 1 [max])
The highest value that was set for the upper bound of Realized bandwidth (range 1) during the reporting period.

Realized bandwidth (threshold 1 [min])
The lowest value that was set for the upper bound of Realized bandwidth (range 1) during the reporting period.

Realized bandwidth (threshold 2 [max])
The highest value that was set for the upper bound of Realized bandwidth (range 2) during the reporting period.

Realized bandwidth (threshold 2 [min])
The lowest value that was set for the upper bound of Realized bandwidth (range 2) during the reporting period.

Realized bandwidth (threshold 3 [max])
The highest value that was set for the upper bound of Realized bandwidth (range 3) during the reporting period.

Realized bandwidth (threshold 3 [min])
The lowest value that was set for the upper bound of Realized bandwidth (range 3) during the reporting period.

Redirect time
The average amount of time spent between the time when a user went to a particular URL and the time this user was redirected to another URL and issued a request to that new URL. The difference between Redirect Time and HTTP Redirect Time is that the former counts all operations, while the latter refers only to those operations for which redirection actually took place.

Redirect time (range 1)
The number of operations whose redirect time is within range 1 as defined in the RUM Console.

Redirect time (range 2)
The number of operations whose redirect time is within range 2 as defined in the RUM Console.

Redirect time (range 3)
The number of operations whose redirect time is within range 3 as defined in the RUM Console.

Redirect time (range 4)
The number of operations whose redirect time is within range 4 as defined in the RUM Console.

Redirect time (threshold 1 [max])
The highest value that was set for the upper bound of Redirect time (range 1) during the reporting period.
Redirect time (threshold 1 [min])
The lowest value that was set for the upper bound of Redirect time (range 1) during the reporting period.

Redirect time (threshold 2 [max])
The highest value that was set for the upper bound of Redirect time (range 2) during the reporting period.

Redirect time (threshold 2 [min])
The lowest value that was set for the upper bound of Redirect time (range 2) during the reporting period.

Redirect time (threshold 3 [max])
The highest value that was set for the upper bound of Redirect time (range 3) during the reporting period.

Redirect time (threshold 3 [min])
The lowest value that was set for the upper bound of Redirect time (range 3) during the reporting period.

Requests breakdown
The number of operation or transaction requests, with a breakdown into the numbers of slow, fast, aborted, and failed operations or transactions.

Requests breakdown (with tooltip)
Operation or transaction breakdown into the numbers of slow, fast, aborted, and failed operations or transactions.

Requests percentage breakdown (with tooltip)
Operation or transaction percentage breakdown into slow, fast, aborted, and failed operations or transactions.

Request time
The time it took the client to send the HTTP request to the server (for example, by means of an HTTP GET or HTTP POST). Note: This time includes TCP connection setup time and SSL session setup time (if any). It starts when the client starts the TCP session on the server and ends when the server receives the whole request. Sometimes an operation is slow because of a big request rather than a large response.

Response messages
The total number of protocol-specific server responses. This includes both errors and other identifiable response strings, as configured in monitoring.

Response transfer time
The time it took the server to send the response to the client (for example, in response to an HTTP GET or HTTP POST). The value is obtained by subtracting Server Time and Request Time from Operation Time.
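The Response transfer time definition above is a plain subtraction over the other timing metrics. A small illustration (the millisecond values are hypothetical):

```python
def response_transfer_time(operation_time, server_time, request_time):
    """Response transfer time = Operation Time - Server Time - Request Time."""
    return operation_time - server_time - request_time

# A 2400 ms operation with 900 ms of server time and 300 ms of request time
# leaves 1200 ms spent transferring the response to the client.
rtt_ms = response_transfer_time(2400, 900, 300)
```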
RMI/Simple parser error (1)
The number of RMI/Simple parser errors of category 1.

RMI/Simple parser error (2)
The number of RMI/Simple parser errors of category 2.

RMI/Simple parser error (3)
The number of RMI/Simple parser errors of category 3.

RMI/Simple parser error (4)
The number of RMI/Simple parser errors of category 4.

RMI/Simple parser error (5)
The number of RMI/Simple parser errors of category 5.

RMI/Simple parser error (6)
The number of RMI/Simple parser errors of category 6.

RMI/Simple parser error (7)
The number of RMI/Simple parser errors of category 7.

RMI/Simple parser errors
The total number of RMI/Simple parser errors.

RPC Protocol error
The number of all RPC_X_* errors.

RPC Server error
The number of all RPC_S_* and EPT_S_* errors.

RTT measurements
The number of RTT measurements. An RTT measurement occurs during every TCP handshake, so it provides some insight into the number of attempted TCP sessions, and into the potential accuracy of the RTT measurements that are reported.

Sampling rate
The total percentage of packets that the AMD was able to process. Numbers lower than 100% mean that a portion of packets was dropped because of performance issues. Sampling means dropping packets when network interface driver performance is degraded on the AMD. Packets are dropped in a controlled manner, always taking care to preserve complete and therefore consistent sessions.

SAP GUI error indicator
Errors detected by examining the error strings returned to the user in Window Status or other SAP GUI data. Detected errors are included in the availability calculation for the SAP application.

SAP GUI errors
The number of errors detected on the protocol level in communication between a SAP application server and a SAP GUI client, as well as between a SAP application server and third-party clients using Remote Function Calls (RFC).

SAP GUI status error
This automatically created group consists of values based on the default configuration of patterns for Application Responses for the most commonly used errors.

SAP RFC error
An error consisting of values based on a user-defined configuration of patterns for Application Responses for the most common errors that are traced in selected fields of the SAP RFC protocol.
SAP RFC error indicator
Errors detected by examining the error strings returned in the user-defined attributes that are traced in selected fields of the SAP RFC protocol. Detected errors are included in the availability calculation for the SAP application.

SAP RFC errors
The number of errors detected on the protocol level in communication between a SAP application server and a SAP client, plus the number of attributes that are defined as error indicators in the monitoring configuration.

Server ACK RTT
RTT measurement performed during ACK packet transmission, from the server side of the operation. Also provided are minimum, maximum, and standard deviation values.

Server ACK RTT measurements
The number of Server ACK RTT measurements that were made. An ACK measurement is performed during ACK packet transmission from either the server or the client side of the transaction.

Server bandwidth usage
The number of server bits per second.

Server bytes
The number of bytes sent by servers. The number includes headers.

Server loss rate
The percentage of total packets sent from a server that were lost and needed to be retransmitted.

Server loss rate (AMD to client)
The percentage of total packets sent by a server that were lost between the AMD and the client and needed to be retransmitted.

Server loss rate (range 1)
The number of operations whose server loss rate is within range 1 as defined in the RUM Console.

Server loss rate (range 2)
The number of operations whose server loss rate is within range 2 as defined in the RUM Console.

Server loss rate (range 3)
The number of operations whose server loss rate is within range 3 as defined in the RUM Console.

Server loss rate (range 4)
The number of operations whose server loss rate is within range 4 as defined in the RUM Console.

Server loss rate (server to AMD)
The percentage of total packets sent by a server that were lost between the server and the AMD and needed to be retransmitted.

Server loss rate (threshold 1 [max])
The highest value that was set for the upper bound of Server loss rate (range 1) during the reporting period.

Server loss rate (threshold 1 [min])
The lowest value that was set for the upper bound of Server loss rate (range 1) during the reporting period.

Server loss rate (threshold 2 [max])
The highest value that was set for the upper bound of Server loss rate (range 2) during the reporting period.
Server loss rate (threshold 2 [min])
The lowest value that was set for the upper bound of Server loss rate (range 2) during the reporting period.

Server loss rate (threshold 3 [max])
The highest value that was set for the upper bound of Server loss rate (range 3) during the reporting period.

Server loss rate (threshold 3 [min])
The lowest value that was set for the upper bound of Server loss rate (range 3) during the reporting period.

Server loss rate + 2 stdv
The server loss rate increased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server loss rate + stdv
The server loss rate increased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server loss rate - 2 stdv
The server loss rate decreased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server loss rate stdv
The standard deviation for server loss rate, calculated in relation to the selected baseline.

Server loss rate - stdv
The server loss rate decreased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server not responding errors
The number of Server Not Responding errors. This category of errors applies when the client closes the TCP session with a RESET packet after the server has failed to respond for too long.

Server operation size
The size of a server operation. In HTTP and HTTPS (decrypted and non-decrypted), server operation size equals the operation size.

Server operation size + 2 stdv
The size of a server operation increased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server operation size + stdv
The size of a server operation increased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server operation size - 2 stdv
The size of a server operation decreased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.
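The ± stdv metric variants above all follow the same pattern: the metric value is an average over individual samples, and the bands are that average shifted by one or two standard deviations of those same samples. A sketch of that computation (the sample values are hypothetical):

```python
import statistics

def stdv_bands(samples):
    """Return (mean - 2*sd, mean - sd, mean, mean + sd, mean + 2*sd),
    with the standard deviation taken over the individual values
    used for calculating the average."""
    mean = statistics.fmean(samples)
    sd = statistics.pstdev(samples)
    return (mean - 2 * sd, mean - sd, mean, mean + sd, mean + 2 * sd)

# Hypothetical server operation sizes (bytes) from one reporting interval.
sizes = [10_000, 12_000, 11_000, 13_000, 14_000]
low2, low1, avg, high1, high2 = stdv_bands(sizes)
```

Whether the product uses the population or sample standard deviation is not stated in this appendix; `pstdev` is used here as an assumption.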

Server operation size stdv
The standard deviation for server operation size, calculated in relation to the selected baseline.

Server operation size - stdv
The size of a server operation decreased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server packets
The number of packets sent by the servers.

Server packets/sec
The number of packets per second sent by the servers.

Server packet size
The average size of the server-originating packets (in bytes), including the header.

Server packets lost (AMD to client)
The number of packets sent by a server that were lost between the AMD and the client and needed to be retransmitted.

Server packets lost (server to AMD)
The number of packets sent by a server that were lost between the server and the AMD and needed to be retransmitted.

Server realized bandwidth
Server realized bandwidth refers to the actual transfer rate of server data when the transfer attempt occurred, and takes into account factors such as loss rate (retransmissions). Thus, it is the size of an actual transfer divided by the transfer time.

Server realized bandwidth + 2 stdv
Server realized bandwidth increased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server realized bandwidth + stdv
Server realized bandwidth increased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server realized bandwidth - 2 stdv
Server realized bandwidth decreased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server realized bandwidth stdv
The standard deviation for server realized bandwidth, calculated in relation to the selected baseline.

Server realized bandwidth - stdv
Server realized bandwidth decreased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.
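The Server realized bandwidth definition above reduces to a division: the size of the actual transfer over the transfer time. A small illustration (the unit conversion to bits per second is an assumption; the appendix does not fix the unit):

```python
def realized_bandwidth_bps(transferred_bytes, transfer_time_s):
    """Size of the actual transfer divided by the transfer time,
    expressed here in bits per second. Because retransmitted data
    stretches the transfer time, packet loss shows up as a lower
    realized bandwidth."""
    return transferred_bytes * 8 / transfer_time_s

# 1 MB of server data delivered in 2 seconds.
rate = realized_bandwidth_bps(1_000_000, 2)
```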

Server response time
The amount of time it takes for a server to provide its initial response to a user's operation request. Often servers respond with some information quickly, before all the information is ready for delivery. Together with the server think time, the server response time sums to the overall server time. Note that if no think time was recorded for the operation, it equals the server time.

Server RTT
The time it takes for a SYN packet (sent by a user) to travel from the AMD to a monitored server and back again. Also provided are minimum, maximum, and standard deviation values.

[Figure: timing diagram of the TCP handshake between client, AMD, and server, showing the SYN, SYN ACK, and ACK packets and the interval measured as Server RTT.]

Server RTT + 2 stdv
Server RTT increased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server RTT + stdv
Server RTT increased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server RTT - 2 stdv
Server RTT decreased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server RTT stdv
The standard deviation for server RTT, calculated in relation to the selected baseline.

Server RTT - stdv
Server RTT decreased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server session termination errors
The number of Server Session Termination errors. This category of errors applies when the server detects an error on the software service level and closes the TCP session with a RESET packet.
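As the Server response time entry above notes, server response time plus server think time sums to the overall server time, and with no recorded think time the two are equal. A trivial sketch of that relationship (millisecond values are hypothetical):

```python
def overall_server_time(server_response_time_ms, server_think_time_ms=0):
    """Server time = server response time + server think time;
    with no recorded think time, server time equals the response time."""
    return server_response_time_ms + server_think_time_ms

no_think = overall_server_time(350)        # no think time recorded
with_think = overall_server_time(350, 150) # think time adds to the total
```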
Server TCP data packets
The total number of TCP packets sent by the servers, excluding traffic control packets.

Server TCP data packets lost
The number of lost TCP data packets sent by the servers, excluding traffic control packets. The number of lost TCP packets always regards the context of the counter, for example an application, a client, or any other entity.

Server think time
The time that elapsed between the moment the server received the request for the Base Page and the time the server fully composed the response. Depending on the nature of the request, Application Servers in the Data Center may be involved in producing the content. In such a case, this additional time is reflected in the Server Think Time metric.

Server time
The time it took the server to produce a response for the given request.

Server time (range 1)
The number of operations whose server time is within range 1 as defined in the RUM Console.

Server time (range 2)
The number of operations whose server time is within range 2 as defined in the RUM Console.

Server time (range 3)
The number of operations whose server time is within range 3 as defined in the RUM Console.

Server time (range 4)
The number of operations whose server time is within range 4 as defined in the RUM Console.

Server time (threshold 1 [max])
The highest value that was set for the upper bound of Server time (range 1) during the reporting period.

Server time (threshold 1 [min])
The lowest value that was set for the upper bound of Server time (range 1) during the reporting period.

Server time (threshold 2 [max])
The highest value that was set for the upper bound of Server time (range 2) during the reporting period.

Server time (threshold 2 [min])
The lowest value that was set for the upper bound of Server time (range 2) during the reporting period.

Server time (threshold 3 [max])
The highest value that was set for the upper bound of Server time (range 3) during the reporting period.
Server time (threshold 3 [min])
The lowest value that was set for the upper bound of Server time (range 3) during the reporting period.

Server time + 2 stdv
The server time increased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server time + stdv
The server time increased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server time - 2 stdv
The server time decreased by two standard deviations. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Server time stdv
The standard deviation for server time, calculated in relation to the selected baseline.

Server time - stdv
The server time decreased by one standard deviation. The value of the metric is an average, and the standard deviation is calculated from the spread of the individual values used for calculating the average.

Short aborts
The number of transactions stopped before timeout. For HTTP, this is the number of page loads manually stopped by the user, by either clicking the Stop or Refresh button or selecting another URL, before 8 seconds of waiting for the page download (8 seconds is the default). For XML, this is the number of transactions stopped before a threshold number of seconds of waiting (8 seconds is the default).

Slow operations
The number of operations for which the operation time was above a predefined threshold value. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.
Note that slow operations for SMB are not determined using the time threshold, but using maximum and minimum realized bandwidth thresholds.

Slow operations (application design)
The number of slow operations caused by the application design category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (application design - # of components)
The number of slow operations caused by the number of components, which is one of the detailed reasons in the application design category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (application design - redirect time)
The number of slow operations caused by redirect time, which is one of the detailed reasons in the application design category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (application design - request size)
The number of slow operations caused by request size, which is one of the detailed reasons in the application design category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (application design - response size)
The number of slow operations caused by response size, which is one of the detailed reasons in the application design category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (client/3rd party)
The number of slow operations caused by the client/3rd party category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (data center)
The number of slow operations caused by the data center category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (multiple reasons)
The number of slow operations caused by multiple reasons, that is, when the algorithm was not able to determine one primary reason for slowness. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (network)
The number of slow operations caused by the network category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (network - latency)
The number of slow operations caused by latency, which is one of the detailed reasons in the network category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (network - loss rate)
The number of slow operations caused by loss rate, which is one of the detailed reasons in the network category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (network - other)
The number of slow operations caused by factors other than latency or loss rate, which is one of the detailed reasons in the network category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow user sessions
The number of client sessions that contained at least one slow operation (a page load for HTTP or HTTPS).

SMTP command syntax error (default name)
The number of "SMTP command syntax" errors, that is, server response code numbers between 500 and 504. These values are defaults and can be customized.

SMTP errors
The total number of SMTP errors.

SMTP general system error (default name)
The number of "SMTP general system" errors, that is, server response code numbers 421, 451, 452, or 554. These values are defaults and can be customized.

SMTP mailbox not available (default name)
The number of "SMTP mailbox not available" errors, that is, server response code numbers 450, 550, 551, 552, or 553. These values are defaults and can be customized.

SSL conn. setup per operation
The time it took to establish an SSL connection between the client and the server, weighted per number of operations. For HTTP-based software services, a single operation means a single page.

SSL conn. setup per session
The time it took to establish an SSL connection between the client and the server.

SSL errors
The number of all SSL alerts. This metric is the sum of SSL Session Fatal Errors, SSL Handshake Errors, and SSL Warnings.

SSL errors 1 (default name)
If not explicitly configured, general SSL alerts from the following list: 10, 20, 21, 22, 30, 40, 49, 50, 51.

SSL errors 2 (default name)
If not explicitly configured, general SSL alerts from the following list: 41, 42, 43, 44, 45, 46, 48.

SSL handshakes
The number of observed SSL handshakes.

Standalone hits
The number of hits not associated with any operation, such as orphaned redirects, unauthorized hits, and discarded hits (no server response).

Successful attempts
The number of monitoring intervals during which successful attempts were made to connect to a server. Note that this is counted separately for each server. Thus, if in a given monitoring interval there are attempts to connect to three different servers, the Successful attempts metric will be incremented by three for that one monitoring interval. Note also that even if TCP errors occur, but the connection is established during the given monitoring interval, then this monitoring interval is counted as a success (for that server).

Sum of failures and response messages
The sum of failures and response messages.

TCP connections attempts
The number of all TCP connection attempts (successful and unsuccessful).

TCP errors
The total number of TCP errors. These errors may indicate server or application problems, and measuring them is therefore critical to understanding the issues that may affect end-user experience. AMDs measure and report the following types of TCP errors:

- Connection refused errors: The client attempts to open a TCP session with a server, which rejects the request; a SYN packet from the client is followed by a RESET packet from the server, with matching TCP sequence numbers. This error is typically caused by resource exhaustion on the server, which is unable to accept more concurrent TCP sessions. This may be either a configuration issue (too few resources allocated in the kernel) or a lack of memory. SYN flood attacks typically result in servers being unable to accept new connections.

- Server session termination errors: The server unexpectedly terminates a connection that was successfully opened, sending a RESET packet to the client. Such an error originates at an application using the monitored TCP session. It does not necessarily mean application failure; usually it means that the application encountered a condition in which it decided to immediately terminate the session with the client, for example because of an application security policy violation by the client.

- Session aborts: The client unexpectedly terminates a connection that was successfully opened, sending a RESET packet to the server. These errors are inspected in the context of the client application and may or may not be reported. For example, a browser running HTTP may terminate the load of a GIF file if it is older than the one previously cached, and this is normal behavior. However, if all connections to the server are terminated because the user clicks the Stop button, this is an abnormal session termination and is reported as an "Aborted operation" or "Stopped Page".

- Client not responding errors (server timeout errors): The server networking stack assumes that the network connection to the client exists, but the client remains idle and does not respond. In such a case, the server closes the TCP session with a RESET packet. This condition may occur when the client has been silently disconnected from the network, for example due to a link failure, or when the client has crashed. Note that this error will not occur if the client has ended the session gracefully, for example by closing the client application.

- Server not responding errors (client timeout errors): The client networking stack assumes that the network connection to the server exists, but the server remains idle and does not respond. In such a case, the client closes the TCP session with a RESET packet. This may occur either during the session setup phase (no response to the SYN packet) or during a normal data exchange, and may result from intermittent network problems between the client and the server. When traffic is routed through asymmetric paths across the Internet, which is often the case, the path from the server to the client may be broken.

TCP SYN time
The time needed to establish a connection on the TCP/IP layer, that is, the average time it took to transfer SYN packets.

Time to abort
The average aborted operation duration, including the redirect time. In the case of HTTP and SSL, this is the operation time.

Total bandwidth usage
The number of all transmitted bits (client + server) per second.

Total bandwidth usage with breakdown
Total bandwidth usage with breakdown.

Total bytes
The number of all transmitted bytes (client + server).

Total bytes compression
The data optimization observed, expressed as a byte reduction and a percentage, where a lower byte count on the WAN side means a higher reduction:

- 0% for pass-through.
- Less than 0% if more bytes were observed on the WAN side, including both pass-through and optimized traffic.
- Greater than 0% if fewer bytes were observed on the WAN side, including both pass-through and optimized traffic.

This metric should not exceed 100%.

Total bytes on LAN side
The sum of bytes (client's and server's) observed on the LAN side, before network traffic is directed into the WAN Optimization Controller (WOC).

Total bytes on WAN side
The sum of bytes (client's and server's) observed on the WAN side, after network traffic leaves the WAN Optimization Controller (WOC), including bytes that have been passed through and those that have been marked as optimized.

Total packets
The number of all transmitted packets (client + server).

Total packets/sec
The number of all transmitted packets (client + server) per second.

Total wait time
The total time of all transactions.

Transact. errors
The number of transaction errors (applies to Jolt (Tuxedo)).

Transact. rollbacks
The number of transaction rollbacks (applies to Jolt (Tuxedo)).

Transact. rollbacks after timeout
The number of transaction rollbacks that occurred after a predefined timeout (applies to Jolt (Tuxedo)).

Transact. service authentication errors
The number of "service transaction authentication" errors (applies to Jolt (Tuxedo)).
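The Total bytes compression entry above describes a byte-reduction percentage that is 0% for pass-through, negative when more bytes appear on the WAN side, and positive (up to 100%) when fewer do. One natural way to express that description as a formula, offered here as an assumption since the appendix does not spell it out:

```python
def byte_reduction_percent(lan_bytes, wan_bytes):
    """Byte reduction consistent with the Total bytes compression entry:
    0%  when WAN bytes equal LAN bytes (pass-through),
    <0% when more bytes were observed on the WAN side,
    >0% when fewer bytes were observed on the WAN side (optimized),
    never exceeding 100%."""
    return (1 - wan_bytes / lan_bytes) * 100

pass_through = byte_reduction_percent(1000, 1000)  # 0% reduction
optimized = byte_reduction_percent(1000, 400)      # fewer bytes on the WAN
overhead = byte_reduction_percent(1000, 1200)      # more bytes on the WAN
```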

Transact. service not found errors
The number of "transaction service not found" errors (applies to Jolt (Tuxedo)).

Transactional service errors
The total number of transactional service errors.

Transport errors
The number of transport-related errors.

Two-way loss rate
The percentage of total packets (client and server) that were lost (due to network congestion, low router queue capacity, or other reasons) and needed to be retransmitted.

Unavailability (total)
The difference between 100% and availability.

Unavailability (transport)
(Application failures / application successful attempts) * 100%.

Unique client groups
The number of unique client groups to which the detected clients belong.

Unique operations
The number of different operations (queries, operation types, etc.).

Unique servers
The number of unique servers, that is, unique server IP addresses.

Unique services
The number of unique services. A unique service is defined by a software service name, agent name (for synthetic traffic), analyzer name, server IP address, server name, and Type of Service value.

Unique sites
The number of unique client sites.

User sessions
The number of user HTTP sessions. The count can be identified by information contained in intercepted HTTP cookies or by HTTP authorization.

User wait time per kb
Reversed throughput, that is, the average time spent by the user waiting for delivery of 1 kb of software service data (operation time vs. operation size).

VoIP delay
VoIP average networking delay, as reported by Real Time Transport Protocol (RTCP), measured for both downstream and upstream traffic.

VoIP delay for client-to-server traffic
VoIP average networking delay in the upstream direction, that is, from a local to a remote VoIP endpoint.

VoIP delay for server-to-client traffic
VoIP average networking delay in the downstream direction, that is, from a remote to the local VoIP endpoint.
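The two unavailability entries above are simple ratios, and can be illustrated directly (the counts are hypothetical):

```python
def transport_unavailability(app_failures, successful_attempts):
    """Unavailability (transport) =
    (application failures / application successful attempts) * 100%."""
    return app_failures / successful_attempts * 100

def total_unavailability(availability_percent):
    """Unavailability (total) is the difference between 100% and availability."""
    return 100 - availability_percent

unavail_transport = transport_unavailability(5, 200)  # 5 failures / 200 attempts
unavail_total = total_unavailability(99.5)            # 99.5% available
```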
VoIP Jitter VoIP average jitter measured by the probe, for both downstream and upstream traffic. Jitter is a variation in voice data transit delay, in milliseconds. In general, higher levels of jitter are more likely to occur on either slow or heavily congested links.
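The jitter metrics above follow the RTP notion of interarrival jitter. For reference, RFC 3550 specifies a running estimator for it; the sketch below implements that standard formula and is not necessarily the exact computation the probe performs:

```python
def update_jitter(jitter: float, transit_prev: float, transit_curr: float) -> float:
    """One step of the RFC 3550 interarrival jitter estimator (values in ms).

    D is the difference in packet transit time between consecutive packets;
    the running estimate moves 1/16 of the way toward |D| on each packet.
    """
    d = abs(transit_curr - transit_prev)
    return jitter + (d - jitter) / 16.0

# Example: transit times (ms) observed for four successive RTP packets
transits = [100.0, 104.0, 99.0, 103.0]
j = 0.0
for prev, curr in zip(transits, transits[1:]):
    j = update_jitter(j, prev, curr)
print(round(j, 3))
```

The 1/16 gain smooths out single outliers, which is why sustained congestion (rather than one delayed packet) is what drives the reported jitter up.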

VoIP Jitter for client-to-server traffic VoIP average jitter as reported by the RTP Control Protocol (RTCP), for upstream traffic. Jitter is a variation in voice data transit delay, in milliseconds. Higher levels of jitter are more likely to occur on either slow or heavily congested links.
VoIP Jitter for server-to-client traffic VoIP average jitter measured by the probe in the downstream traffic, that is, from a remote VoIP phone to the local endpoint. Jitter is a variation in voice data transit delay, in milliseconds.
VoIP loss rate The percentage of VoIP packets lost or discarded that needed to be retransmitted, measured for both upstream and downstream traffic.
VoIP loss rate for client-to-server traffic The percentage of VoIP packets lost or discarded that needed to be retransmitted, measured for upstream traffic.
VoIP loss rate for server-to-client traffic The percentage of VoIP packets lost or discarded that needed to be retransmitted, measured for downstream traffic.
VoIP MOS VoIP average Mean Opinion Score (MOS) rating of the call quality, for both downstream and upstream traffic.
VoIP MOS for client-to-server traffic VoIP average Mean Opinion Score (MOS) measured in the upstream direction, that is, from a subscriber to a remote VoIP phone.
VoIP MOS for server-to-client traffic VoIP average Mean Opinion Score (MOS) measured in the downstream direction, that is, from a remote VoIP phone to the subscriber.
VoIP R-factor VoIP average R-factor value, for both downstream and upstream traffic. It is a transmission quality rating derived from multiple VoIP metrics, including latency, jitter, and loss.
VoIP R-factor for client-to-server traffic VoIP average R-factor value in the upstream direction, that is, from a subscriber to a remote VoIP phone.
VoIP R-factor for server-to-client traffic VoIP average R-factor value in the downstream direction, that is, from a remote VoIP phone to the subscriber.
VoIP RTCP Jitter VoIP average jitter as reported by the RTP Control Protocol (RTCP), for both downstream and upstream traffic. Jitter is a variation in voice data transit delay, in milliseconds. Higher levels of jitter are more likely to occur on either slow or heavily congested links.
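The R-factor and MOS metrics above are related quantities. As a point of reference, the standard ITU-T G.107 E-model mapping from an R-factor to an estimated MOS can be sketched as follows; this is the textbook conversion, not necessarily the product's internal computation:

```python
def r_factor_to_mos(r: float) -> float:
    """Map an E-model R-factor (0-100) to an estimated MOS (1.0-4.5).

    Standard ITU-T G.107 conversion, clamped at both ends of the scale.
    """
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

# An R-factor around 80 corresponds to a MOS of roughly 4.0 ("good" quality):
print(round(r_factor_to_mos(80.0), 2))
```

The mapping is nonlinear: a drop from R=80 to R=60 costs far more perceived quality than a drop from R=95 to R=90, which is why the two metrics are reported side by side.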

VoIP RTCP Jitter for client-to-server traffic VoIP average jitter as reported by the RTP Control Protocol (RTCP) for the upstream traffic, that is, from a local VoIP endpoint to a remote one. Jitter reflects a variation in voice data transit delay, expressed in milliseconds.
VoIP RTCP Jitter for server-to-client traffic VoIP average jitter as reported by the RTP Control Protocol (RTCP) for the downstream traffic, that is, from a remote VoIP endpoint to the local endpoint. Jitter reflects a variation in voice data transit delay, in milliseconds.
Window title The SAP GUI decode has an option to retrieve form field values from selected SAP form fields. This automatically created group aggregates metrics related to errors based on window title.
Zero window size events The client sets a zero window size in the TCP header when it wants the other side to slow down data transmission because it cannot keep up with the transmission speed; it indicates that the receiving machine is busy with other tasks.

Application, transaction, and tier data

This data view provides dimensions and metrics that are common to different types of application layers (tiers). However, it does not allow data aggregation for different tiers on a report - the "Tier" dimension must always be selected when defining the report. The only exception to this rule is when you want to display data for front-end tiers: you do not have to use the "Tier" dimension if you set the "Tier type" to "Data center tier" and the "Is front-end tier" dimension to "Yes".

Application, transaction, and tier data dimensions

Application A universal container that can accommodate transactions.
Business day The classification of days as business or non-business, as defined in the Business Hours Configuration tool.
Business hour The classification of hours as business and non-business, as defined in the Business Hours Configuration tool. Possible values are Business and Off-business.
Data source The name of the data source, if you have configured a number of associated report servers to be used as data sources on the DMI screen.
Day of the week The textual representation of the day of the week.
Hour of the day The numerical representation of the hour of the day, that is, numbers from 0 to 23.
Is front-end tier? Indicates whether a given tier is a front-end tier for a selected application.

Tier A specific point of the application where we measure data. It can be a specific traffic type or a server.
Tier sequence number The sequence number of a tier is determined by the order in which you define your tiers, and these numbers in turn determine the order in which data is displayed on the report.
Tier type The type of a tier can be one of the following: client, network, or data center.
Time The time stamp of the data presented on the report.
Transaction A universal container that can accommodate operations. This metric refers only to transactions without errors.
Transaction step The step as configured in a transaction definition. Step configuration is built on DCRUM data using operations, tasks, modules, or services. Steps are contained within transactions and carry the entire transaction configuration.
Transaction step sequence number The sequence number of a step is used for presentation purposes. It marks the order of a particular step in a transaction configuration. You can order steps within each transaction if such an ordering makes sense for the overall monitored application paradigm. The transaction step sequence does not affect data aggregation.

Application, transaction, and tier data metrics

Aborted transactions The number of transactions aborted due to HTTP timeout. This metric is calculated only for Client tiers.
Aborts The number of aborted operations. This metric is calculated only for the Network tier, the Client network tier, the Client optimized network tier, and data center tiers.
Affected users (availability) The number of unique users that were affected by TCP availability problems. For Client optimized network, Client network, and Network tiers, this metric is not calculated.
Affected users (availability) breakdown A breakdown of users into how many were affected by availability problems and how many were not.
Affected users (network) The number of unique users that experienced network performance problems.
Affected users (network) breakdown A breakdown of users into how many were affected by network performance problems and how many were not.
Affected users (performance) The number of unique users that experienced application performance problems or network performance problems. For the Client optimized network tier, this metric is not calculated.

Affected users (performance) breakdown A breakdown of users into how many were affected by application performance problems and how many were not.
Application health index The percentage of fast operations, calculated as Fast operations / (Failures + Operations) * 100%.
Application Monitoring Operations The number of Application Monitoring operations.
Availability (application) Availability limited to the application context, calculated using the following formula: Availability (application) = 100% * (All attempts - Failures (application)) / All attempts, where All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure.
Availability (TCP) Availability limited to the network context, calculated using the following formula: Availability (TCP) = 100% * (All attempts - Failures (TCP)) / All attempts, where All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure.
Availability (total) Depending on the particular tier, this term may mean:
For Client tiers: the percentage of successful attempts, calculated as 100% - 100% * (failures / attempts).
For the Citrix/WTS (presentation) tier: the percentage of successful TCP connection attempts, calculated as 100% - 100% * (failures / attempts).
For other Network tiers: the percentage of successfully sent packets, calculated as 100% - 100% * (sent packets that were lost / total number of sent packets).
For other Data center tiers: the percentage of successful attempts, calculated using the following formula: Availability (total) = 100% * (All attempts - All failures) / All attempts, where All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts, and All failures = all failures (transport) + all failures (TCP) + all failures (application).
Availability (transport) Availability limited to the transport context, calculated using the following formula: Availability (transport) = 100% * (All attempts - Failures (transport)) / All attempts, where All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure.
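The availability formulas above all share one shape. A minimal sketch, with hypothetical attempt and failure counts as inputs (the function name is illustrative, not part of the product):

```python
def availability(total_attempts: int, failures: int) -> float:
    """Availability = 100% * (attempts - failures) / attempts.

    Per the definitions above, total_attempts covers all failures,
    all successful operations, and all standalone hits and aborts
    not classified as failures.
    """
    if total_attempts == 0:
        raise ValueError("no attempts observed")
    return 100.0 * (total_attempts - failures) / total_attempts

# 1000 attempts with 25 application failures -> 97.5% application availability
print(availability(1000, 25))
```

The same function computes the application, TCP, or transport variant depending on which failure count is passed in.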

Average CPU utilization The percentage of elapsed time that the processor spent executing non-idle threads. This counter is the primary indicator of processor activity, and shows the average percentage of busy time.
Average memory utilization The average percentage of used physical memory (RAM).
Client loss rate The percentage of total packets sent by a client that were lost and needed to be retransmitted. This metric is calculated only for the following tiers: RUM sequence transactions, Citrix/WTS (presentation), Client optimized network (for WAN and Pass-through deployment only), and tiers based on TCP-based analyzers.
Client RTT Client RTT is the time it takes for a SYN packet (sent by a server) to travel from the AMD to the client and back again, as shown in the following picture.
[Figure: Client RTT measurement timeline between client, AMD, and server, marking packet events T1 through T9.]
A client RTT measurement begins when the SYN ACK packet from the server to the client passes by the AMD (T5). The packet reaches the client machine (T6) and is processed, while an acknowledgment is sent back to the server (T7). Client processing time impact (T7-T6) is again very low. Client RTT measurement ends when the ACK packet reaches the AMD (T8). Therefore, the Client Round Trip Time is calculated as T8-T5. Depending on the actual setup, Client RTT measurements may vary dramatically. In corporate environments, it may be a few milliseconds for LAN-connected clients or a couple dozen milliseconds for WAN-connected clients. In this case, where the client is coming from the Internet, the end-to-end Client RTT measurement is a compound of transit time through the Internet backbone as well as through the "last mile" access network. The impact of the last mile can be easily calculated, based on the connection speed and the packet size (56 B in the case of a TCP SYN packet).
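The last-mile impact described above is plain serialization delay: packet size in bits divided by link speed. A minimal sketch (the function name is illustrative; the link speeds are the examples used in the text):

```python
def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """One-way serialization delay, in milliseconds, for a packet on a link."""
    return packet_bytes * 8 / link_bps * 1000.0

# 56-byte TCP SYN on a 28 kbps dial-up link: ~16 ms one way
print(serialization_delay_ms(56, 28_000))
# ...and on a 1.6 Mbps DSL line: ~0.28 ms one way
print(serialization_delay_ms(56, 1_600_000))
```

Doubling the one-way figure gives the last-mile contribution to a complete round-trip measurement.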
For a 28 kbps dial-up connection, this amounts to 16 milliseconds one way, or 32 milliseconds for a complete round-trip measurement. For a 1.6 Mbps DSL line, this adds about 280 microseconds toward the complete client RTT measurement.
Client Volume The number of client transmitted bytes.
Database errors The number of database errors in the database analyzer: For TDS, which includes Sybase and MS SQL Server, any value from the following table is considered an error. For MySQL, if an ERR_Packet is returned, the error count is incremented.

An error with a severity level of 19 or higher stops the execution of the current SQL batch and the error message is written to the error log.
Errors that can be corrected by the user:
- 11: The given object or entity does not exist.
- 12: SQL statements that do not use locking because of special options. In some cases, read operations performed by these SQL statements could result in inconsistent data, because locks do not guarantee consistency.
- 13: Transaction deadlock errors.
- 14: Security-related errors, such as permission denied.
- 15: Syntax errors in the SQL statement.
- 16: General errors that can be corrected by the user.
Software errors that cannot be corrected by the user and that require system administrator action:
- 17: The SQL statement caused the database server to run out of resources (such as memory, locks, or disk space for the database) or to exceed some limit set by the system administrator.
- 18: There is a problem in the database engine software, but the SQL statement completes execution, and the connection to the instance of the database engine is maintained. System administrator action is required.
- 19: A non-configurable database engine limit has been exceeded and the current SQL batch has been terminated.
System problems:
- 20-25: Fatal errors, meaning that the database engine task that was executing a SQL batch is no longer running. The task records information about what occurred and then terminates. In most cases, the application connection to the instance of the database engine also terminates. If this happens, depending on the problem, the application might not be able to reconnect.
Database warnings The number of database warnings in the database analyzer: For TDS, which includes Sybase and MS SQL Server, this count will always be zero. TDS does not track anything as a warning.
For MySQL, if an OK_Packet is returned, the warning count value in that packet is checked and the total warning field is updated with the returned number.
DNS errors The number of DNS errors.
End-to-end RTT The time it takes for a SYN packet to travel from the client to a monitored server and back again.
Error indicator The number of error indicators.
Failures (application) The number of operation attributes of all types set to be reported as an application failure.
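The TDS severity ranges listed above amount to a simple classification of severity levels. A sketch of that lookup; the function and category names paraphrase the text and are not a product API:

```python
def tds_severity_category(level: int) -> str:
    """Classify a TDS (Sybase / MS SQL Server) error severity level,
    following the severity ranges described above."""
    if 11 <= level <= 16:
        return "user-correctable error"
    if 17 <= level <= 19:
        return "software error (system administrator action required)"
    if 20 <= level <= 25:
        return "fatal system problem (batch terminated)"
    return "not counted as an error"

print(tds_severity_category(14))  # security-related errors are user-correctable
print(tds_severity_category(22))
```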

Failures (TCP) The number of operations that failed due to one of the TCP errors.
Failures (total) The total number of failures, that is, all Failures (transport) + all Failures (TCP) + all Failures (application).
Failures (transport) The number of operations that failed due to problems in the transport layer. You configure the failures (transport) to include the following: protocol errors, SSL alerts, aborts, and incomplete responses.
Fast operations/transactions The number of operations or transactions for which the execution time was below a predefined threshold value. These include HTTP/HTTPS page loads, SQL database queries, XML (transactional services) operations, e-mails, DNS requests, Oracle Forms submissions, MQ operations, MS Exchange operations, SAP operations, and transactions (for RUM data).
Fatal error The number of fatal errors.
HTTP errors The number of observed HTTP client errors (4xx) and server errors (5xx).
Idle time The part of the operation time spent between receiving a part of the response and requesting a subsequent part. It enables you to isolate the time taken by the client from the time when the data was still being transmitted on the network.
Incomplete responses The number of incomplete responses, that is, partial and server-aborted responses, as well as situations when a server did not respond to the request at all or responded in an unrecognizable way.
LDAP errors The number of LDAP errors. LDAP errors are reported in the following categories: LDAP critical errors, LDAP server errors, LDAP security errors, LDAP syntax errors, and LDAP client errors.
Long aborts For HTTP, this is the number of operations manually stopped by the user by either clicking the Stop or Refresh button or selecting another URL after at least 8 seconds of waiting for the page download (8 seconds is the default).
For XML, this is the number of transactions stopped after at least a threshold number of seconds of waiting (8 seconds is the default).
MQ appl. errors The number of operation attributes of all types set to be reported as MQ application errors for software services based on an MQ analyzer.

MQ errors The total number of IBM WebSphere Message Queue errors, including client errors, server errors, protocol errors, and security errors.
MS Exchange errors The total number of RPC server and RPC protocol errors.
Network performance The percentage of total traffic that did not experience network-related problems (traffic in which the values of loss rate and RTT did not exceed configured thresholds).
Network time The time the network takes to deliver the request to the server and to deliver the resulting response back to the user. In other words, network time is the portion of the operation time that is spent on transferring data over the network.
Operation/transaction percentage breakdown A percentage breakdown into slow and fast transactions or operations.
Operation/transaction requests The number of all operation requests, both requests that became successful operations and requests that were aborted by the client.
Operation/Transaction time The average value of operation or transaction time for all operations or transactions performed on the particular tier.
Operation/transaction time breakdown Operation time breakdown into the server time, the network time, the idle time, and the other time. For RUM sequence transactions, the other time is a sum of the client time, the client response time, the application processing time, and the idle time. For synthetic transactions, the other time is equal to the client time and the idle time is not calculated. For RUM Browser data, the other time is equal to client time and the redirect and idle times are not calculated. Note that for RUM Browser data, the sum of client, network, and server time does not have to reflect user action time. The other time is not calculated for Dynatrace Performance Network data.
Operation/transaction time percentage breakdown The breakdown of the average value of operation time into percentages of the server time, the network time, the idle time, and the other time. For RUM sequence transactions, the other time is a sum of the client time, the client response time, the application processing time, and the idle time. For synthetic transactions, the other time is equal to the client time and the idle time is not calculated. For RUM Browser data, the other time is equal to client time and the redirect and idle times are not calculated. Note that for RUM Browser data, the sum of client, network, and server time does not have to reflect user action time. The other time is not calculated for Dynatrace Performance Network data.
Operation/transaction time with breakdown The time it took to complete an operation, with an operation time breakdown into the server time, the network time, the idle time, and the other time. For RUM sequence transactions, the other time is a sum of the client time, the client response time, the application processing time, and the idle time. For synthetic transactions, the other time is equal to the client time. For RUM Browser data, the other time is equal to client time and the redirect time is not calculated. Note that for RUM Browser data, the sum of client, network, and server

time does not have to reflect user action time. The other time is not calculated for Dynatrace Performance Network data.
Operation attributes The number of operation attributes of all types (type 1 to 5), observed for the given software service.
Operations/Transactions Depending on the tier definition and on the traffic analyzer used, this metric shows the number of:
- HTTP(S) operations
- SQL database queries
- XML (transactional services) operations
- e-mail messages
- DNS requests
- Oracle Forms submissions
- MQ operations
- MS Exchange operations
- SAP operations
- Cerner transactions
- Transactions (for RUM data)
Operations with breakdown The number of operations with operation breakdown into numbers of slow and fast operations.
Other time For RUM sequence transactions, the other time is a sum of the client time, the client response time, and the application processing time. For synthetic transactions, the other time is equal to the client time. For RUM Browser data, the other time is equal to the client time if provided by the Application Monitoring server. The other time is not calculated for Dynatrace Performance Network data.
Percentage of affected users (availability) The percentage of users that were affected by availability problems. For Client optimized network, Client network, and Network tiers, this metric is not calculated.
Percentage of affected users (availability) breakdown A percentage breakdown of users into how many were affected by availability problems and how many were not.
Percentage of affected users (network) The percentage of unique users that experienced network performance problems.
Percentage of affected users (network) breakdown A percentage breakdown of users into how many were affected by network performance problems and how many were not.
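The operation/transaction time breakdown metrics above split the measured operation time into server, network, idle, and "other" time, where the other time is whatever remains. A minimal sketch of that decomposition (function name and units are illustrative assumptions):

```python
def operation_time_breakdown(operation_time: float, server: float,
                             network: float, idle: float) -> dict:
    """Split an operation time into server, network, idle, and 'other'
    components, per the breakdown definitions above (all in seconds)."""
    other = operation_time - (server + network + idle)
    return {"server": server, "network": network, "idle": idle, "other": other}

# e.g. a 3.0 s operation: 1.2 s server, 0.9 s network, 0.4 s idle -> ~0.5 s other
print(operation_time_breakdown(3.0, 1.2, 0.9, 0.4)["other"])
```

For the percentage-breakdown variant, each component is simply divided by the total operation time.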

Percentage of affected users (performance) The percentage of users that experienced application performance problems or network performance problems. For the Client optimized network tier, this metric is not calculated.
Percentage of affected users (performance) breakdown A percentage breakdown of users into how many were affected by application performance problems and how many were not.
Percentage of slow operations The percentage of operations for which the operation time was above a predefined threshold value. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.
Performance Depending on the particular tier, the term performance can mean:
For Client tiers: the percentage of transactions completed in a time shorter than the defined time threshold, calculated as 100% - 100% * (slow transactions / all transactions).
For the Client optimized network tier: the percentage of compressed bytes.
For other Network tiers: the percentage of total traffic that did not experience network-related problems.
For Data center tiers: for transactional protocols, this is the percentage of software service operations completed in a time shorter than the performance threshold. For transactionless, TCP-based protocols, this is the percentage of monitoring intervals in which user wait time per kB of data was shorter than the threshold value.
Primary Reason for Slowness Primary reason for slowness is one of the general categories causing operations to be slow. The categories include data center, network, application design, and client/3rd party.
Primary Reason for Slowness details The details of the primary reason for slowness.
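The Client-tier performance calculation above is a simple complement of the slow-transaction ratio. A minimal sketch, with hypothetical counts as inputs:

```python
def performance_pct(slow_transactions: int, all_transactions: int) -> float:
    """Client-tier performance, per the definition above:
    100% - 100% * (slow transactions / all transactions)."""
    if all_transactions == 0:
        raise ValueError("no transactions observed")
    return 100.0 - 100.0 * slow_transactions / all_transactions

# 40 slow transactions out of 800 -> 95.0% performance
print(performance_pct(40, 800))
```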
Realized bandwidth The actual transfer rate of server data when the transfer attempt occurred. This metric takes into account factors such as loss rate (retransmissions).
Redirect time The average amount of time that was spent between the time when a user went to a particular URL and the time this user was redirected to another URL and issued a request to that new URL. The difference between Redirect Time and HTTP Redirect Time is that the former counts all operations, while the latter refers only to those operations for which redirection actually took place.
Requests (without aborts) The number of requests, excluding aborts.
Requests breakdown The number of operation or transaction requests with operation or transaction breakdown into numbers of slow, fast, aborted, and failed operations or transactions.

Requests breakdown (with tooltip) Operation or transaction breakdown into numbers of slow, fast, aborted, and failed operations or transactions.
Requests percentage breakdown (with tooltip) Operation or transaction percentage breakdown into slow, fast, aborted, and failed operations or transactions.
Response messages The total number of protocol-specific server responses. This includes both errors and other identifiable response strings, as configured in monitoring.
RMI/Simple parser errors The total number of RMI/Simple parser errors.
RTT measurements The number of RTT measurements. An RTT measurement occurs during every TCP handshake, so it provides some insight into the number of attempted TCP sessions, and the potential accuracy of the RTT measurements that are reported. This metric is calculated only for the following tiers: RUM sequence transactions, Citrix/WTS (presentation), Client optimized network (for LAN only), and tiers based on TCP-based analyzers.
SAP errors The number of errors detected on the protocol level in communication between the SAP application server and the SAP GUI client, as well as between the SAP application server and third-party clients using Remote Function Calls (RFC).
Server loss rate The percentage of total packets sent by a server that were lost - between the AMD and the server - and needed to be retransmitted. This metric is calculated only for the following tiers: RUM sequence transactions, Citrix/WTS (presentation), Client optimized network (for WAN and Pass-through deployment only), and tiers based on TCP-based analyzers.
Server RTT The time it takes for a SYN packet to travel from the AMD to a monitored server and back again. This metric is calculated only for the following tiers: RUM sequence transactions, Citrix/WTS (presentation), Client optimized network (for LAN only), and tiers based on TCP-based analyzers.
[Figure: Server RTT measurement timeline between client, AMD, and server, marking packet events T1 through T9.]
Server TCP data packets The total number of TCP packets sent by the servers, excluding the traffic control packets. This metric is calculated only for the following tiers: RUM sequence transactions, Citrix/WTS (presentation), Client optimized network (for LAN only), and tiers based on TCP-based analyzers.

Server time The time it took the server to produce a response for the given request.
Server Volume The number of server transmitted bytes.
Short aborts The number of transactions stopped before timeout. For HTTP, this is the number of page loads manually stopped by the user by either clicking the Stop or Refresh button or selecting another URL before 8 seconds of waiting for the page download (8 seconds is the default). For XML, this is the number of transactions stopped before a threshold number of seconds of waiting (8 seconds is the default).
Slow operations/transactions The number of operations or transactions for which the execution time was above a predefined threshold value. These include HTTP/HTTPS page loads, SQL database queries, XML (transactional services) operations, e-mails, DNS requests, Oracle Forms submissions, MQ operations, MS Exchange operations, SAP operations, and transactions (for RUM data).
Slow operations (application design) The number of slow operations caused by the application design category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (application design - # of components) The number of slow operations caused by the number of components, which is one of the detailed reasons in the application design category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (application design - redirect time) The number of slow operations caused by redirect time, which is one of the detailed reasons in the application design category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (application design - request size) The number of slow operations caused by request size, which is one of the detailed reasons in the application design category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (application design - response size) The number of slow operations caused by response size, which is one of the detailed reasons in the application design category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (client/3rd party) The number of slow operations caused by the client/3rd party category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (data center) The number of slow operations caused by the data center category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (multiple reasons) The number of slow operations caused by multiple reasons, that is, when the algorithm was not able to determine one primary reason for slowness.
Slow operations (network) The number of slow operations caused by the network category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (network - latency) The number of slow operations caused by latency, which is one of the detailed reasons in the network category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (network - loss rate) The number of slow operations caused by loss rate, which is one of the detailed reasons in the network category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (network - other) The number of slow operations caused by factors other than latency or loss rate, which is one of the detailed reasons in the network category as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
SMTP errors The total number of SMTP errors.
SSL errors The number of all SSL alerts. This metric is the sum of SSL errors 1, SSL errors 2, and Other SSL errors.
TCP errors The total number of TCP errors.
These errors may indicate server or application problems, and measuring them is therefore critical to understanding issues that may affect end-user experience. AMDs measure and report the following types of TCP errors:
Connection refused errors - The client attempts to open a TCP session with a server, which rejects the request. The SYN packet from the client is followed by a RESET packet from the server, with matching TCP sequence numbers. This error is typically caused by resource exhaustion on the server, which is unable to accept more concurrent TCP sessions. This may be either a configuration issue (too few resources allocated in the kernel) or a lack of memory. SYN flood attacks typically result in servers being unable to accept new connections.

Server session termination error - The server unexpectedly terminates a connection that was successfully opened, sending a RESET packet to the client. Such an error originates at an application using the monitored TCP session. It does not necessarily mean application failure; usually it means that the application encountered a condition in which it decided to immediately terminate the session with the client, for example, because of an application security policy violation by the client.

Session abort - The client unexpectedly terminates a connection that was successfully opened, sending a RESET packet to the server. These errors are inspected in the context of the client application and may or may not be reported. For example, a browser running HTTP may terminate the load of a GIF file if it is older than the one it had previously cached, and this is normal behavior. However, if all connections to the server are terminated because the user hits the STOP button, this is abnormal session termination and is reported as "Aborted operation" or "Stopped Page".

Client not responding errors (server timeout errors) - The server networking stack assumes that the network connection to the client exists, but the client remains idle and does not respond. In such a case, the server closes the TCP session with a RESET packet. This condition may occur when the client has been silently disconnected from the network, for example, due to a link failure, or when the client has crashed. Note that this error will not occur if the client has ended the session gracefully, e.g. by closing the client application.

Server not responding errors (client timeout errors) - The client networking stack assumes that the network connection to the server exists, but the server remains idle and does not respond. In such a case, the client closes the TCP session with a RESET packet.
This may occur either during the session setup phase (no response to the SYN packet) or during a normal data exchange process. Such a situation may result from intermittent network problems between the client and the server. When traffic is routed through asymmetric paths across the Internet, which is often the case, the path from the server to the client may be broken.

Total bandwidth usage
The number of all transmitted bits (client + server) per second.

Total network time
The difference between Total transaction time and the sum of Total server time and Total redirect time. This metric is calculated only for Data center tiers and for the following dimension combinations: Application-Tier and Application-Transaction-Tier.

Total redirect time
The sum of the averages of redirect time of all operations assigned to a transaction. This metric indicates the redirect time used to achieve the result of multi-step transactions. It is calculated only for Data center tiers and for the following dimension combinations: Application-Tier and Application-Transaction-Tier.

Total server time
The sum of the averages of server time of all operations assigned to a transaction. This metric indicates the server time used to achieve the result of multi-step

transactions. It is calculated only for Data center tiers and for the following dimension combinations: Application-Tier and Application-Transaction-Tier.

Total transaction time
The sum of the averages of operation time of all operations assigned to a transaction. This metric indicates the total time used to achieve the result of multi-step transactions. It is calculated only for Data center tiers and for the following dimension combinations: Application-Tier and Application-Transaction-Tier.

Transactional service errors
The total number of transactional service errors.

Transaction errors
The number of errors that originate from Synthetic Monitoring transactions or RUM sequence transactions.

Two-way loss rate
The average loss rate calculated for both directions: the sum of client and server retransmitted packets divided by the sum of total client and server packets.

Unique and affected users (availability)
The number of unique users, with a breakdown into how many were affected by availability problems and how many were not. Note that for RUM Browser the notion of users refers to visits.

Unique and affected users (network)
The number of unique users, with a breakdown into how many were affected by network performance problems and how many were not.

Unique and affected users (performance)
The number of unique users, with a breakdown into how many were affected by application performance problems and how many were not. Note that for RUM Browser the notion of users refers to visits.

Unique users
The number of unique users detected in monitored traffic. Note that for RUM Browser the notion of users refers to visits.

Volume
The number of all transmitted bytes (client + server).

Application, transaction, and tier baselines
This data view provides dimensions and metrics to analyze the baselines of the monitored traffic for different application layers (tiers).
However, it does not allow data aggregation for different tiers on a report: the "Tier" dimension must always be selected when defining the report. The only exception to this rule is when you want to display data for front-end tiers. You do not have to use the "Tier" dimension if you set the "Tier type" to "Data center tier" and the "Is front-end tier" dimension to "Yes".

Application, transaction, and tier baselines dimensions

Application
A universal container that can accommodate transactions.
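The "Tier" dimension rule above can be expressed as a simple validity check. The sketch below is illustrative only; the function and field names are assumptions, not part of the DCRUM API.

```python
# Hypothetical sketch of the reporting rule described above: the "Tier"
# dimension is required unless the report is restricted to front-end
# data center tiers. Names are illustrative, not product identifiers.
def dimensions_valid(selected_dims, filters):
    """Return True if a baseline report definition satisfies the rule."""
    if "Tier" in selected_dims:
        return True
    # The only exception: data restricted to front-end data center tiers.
    return (filters.get("Tier type") == "Data center tier"
            and filters.get("Is front-end tier") == "Yes")

print(dimensions_valid(["Application", "Tier"], {}))   # True
print(dimensions_valid(["Application"], {}))           # False
print(dimensions_valid(["Application"],
                       {"Tier type": "Data center tier",
                        "Is front-end tier": "Yes"}))  # True
```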

Baseline source
The source of the baseline. Two possible values: pinned or average.

Business hour
The classification of hours as business and non-business, as defined in the Business Hours Configuration tool. Possible values are Business and Off-business.

Is front-end tier?
Indicates whether a given tier is a front-end tier for a selected application.

Tier
A specific point of the application where data is measured. It can be a specific traffic type or a server.

Tier sequence number
The sequence number of a tier is determined by the order in which you define your tiers, and these numbers in turn determine the order in which data is displayed on the report.

Tier type
The type of a tier can be one of the following: client, network, or data center.

Time
The time stamp of the data presented on the report.

Transaction
A universal container that can accommodate operations. This dimension refers only to transactions without errors.

Transaction step
The step as configured in a transaction definition. Step configuration is built on DCRUM data using operations, tasks, modules, or services. Steps are contained within transactions and carry the entire transaction configuration.

Transaction step sequence number
The sequence number of a step is used for presentation purposes. It marks the order of a particular step in a transaction configuration. You can order steps within each transaction if such an ordering makes sense for the overall monitored application paradigm. The transaction step sequence does not affect data aggregation.

Application, transaction, and tier baselines metrics

Aborted transactions
The number of transactions aborted due to HTTP timeout. This metric is calculated only for Client tiers.

Aborts
The number of aborted operations. This metric is calculated only for the Network tier, the Client network tier, the Client optimized network tier, and Data center tiers.
Application health index
The percentage of fast operations, calculated as "Fast operations / (Failures + Operations) * 100%".

Application Monitoring Operations
The number of Application Monitoring operations.

Availability (application)
Availability limited to the application context, calculated using the following formula:

Availability (application) = 100% * (All attempts - Failures (application)) / All attempts

where All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure.

Availability (TCP)
Availability limited to the network context, calculated using the following formula:

Availability (TCP) = 100% * (All attempts - Failures (TCP)) / All attempts

where All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure.

Availability (total)
Depending on the particular tier, this term may mean:

For Client tiers: the percentage of successful attempts, calculated as 100% - 100% * (failures / attempts).

For the Citrix/WTS (presentation) tier: the percentage of successful TCP connection attempts, calculated as 100% - 100% * (failures / attempts).

For other Network tiers: the percentage of successfully sent packets, calculated as 100% - 100% * (sent packets that were lost / total number of sent packets).

For other Data center tiers: the percentage of successful attempts, calculated using the following formula:

Availability (total) = 100% * (All attempts - All failures) / All attempts

where All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts, and All failures = all failures (transport) + all failures (TCP) + all failures (application).

Availability (transport)
Availability limited to the transport context, calculated using the following formula:

Availability (transport) = 100% * (All attempts - Failures (transport)) / All attempts

where All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure.

Average CPU utilization
The percentage of elapsed time that the processor spent executing non-idle threads.
This counter is the primary indicator of processor activity and shows the average percentage of busy time.

Average memory utilization
The average percentage of used physical memory (RAM).

Client loss rate
The percentage of total packets sent by a client that were lost and needed to be retransmitted. This metric is calculated only for the following tiers: RUM sequence transactions, Citrix/WTS (presentation), Client optimized network (for WAN and Pass-through deployments only), and tiers based on TCP-based analyzers.
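The availability formulas above all share one shape: the share of attempts that did not fail in the given context. A minimal sketch, with illustrative names only (not a DCRUM API):

```python
def availability(failures_in_context, all_failures, successes,
                 unclassified_hits, unclassified_aborts):
    """Availability (application/TCP/transport), per the formulas above:
    100% * (All attempts - Failures (context)) / All attempts."""
    all_attempts = (all_failures + successes
                    + unclassified_hits + unclassified_aborts)
    if all_attempts == 0:
        return None  # no traffic observed; availability is undefined
    return 100.0 * (all_attempts - failures_in_context) / all_attempts

# 2 TCP failures out of 100 attempts -> Availability (TCP) = 98%
print(availability(2, 5, 90, 3, 2))  # 98.0
```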

Client RTT
Client RTT is the time it takes for a SYN ACK packet (sent by the server) to travel from the AMD to the client and for the client's ACK to return.

(Diagram: SYN, SYN ACK, and ACK packets exchanged between Client, AMD, and Server, with time points T1 through T9.)

A client RTT measurement begins when the SYN ACK packet from the server to the client passes by the AMD (T5). The packet reaches the client machine (T6) and is processed, while an acknowledgment is sent back to the server (T7). The client processing time impact (T7-T6) is again very low. The client RTT measurement ends when the ACK packet reaches the AMD (T8). Therefore, the client round-trip time is calculated as T8-T5.

Depending on the actual setup, Client RTT measurements may vary dramatically. In corporate environments, it may be a few milliseconds for LAN-connected clients or a couple dozen milliseconds for WAN-connected clients. When the client is coming from the Internet, the end-to-end Client RTT measurement is a compound of transit time through the Internet backbone as well as through the "last mile" access network. The impact of the last mile can be easily calculated, based on the connection speed and the packet size (56 B in the case of a TCP SYN packet). For a 28 kbps dial-up connection, this amounts to 16 milliseconds one way, or 32 milliseconds for a complete round-trip measurement. For a 1.6 Mbps DSL line, this contributes about 280 microseconds one way to the complete client RTT measurement.

Client Volume
The number of bytes transmitted by the client.

Database errors
The number of database errors in the database analyzer:

For TDS, which includes Sybase and MS SQL Server, any value from the following table is considered an error. An error with a severity level of 19 or higher stops the execution of the current SQL batch, and the error message is written to the error log.

For MySQL, if an ERR_Packet is returned, the error count is incremented.
Errors that can be corrected by the user:

11: The given object or entity does not exist.
12: SQL statements that do not use locking because of special options. In some cases, read operations performed by these SQL statements could result in inconsistent data, because locks do not guarantee consistency.
13: Transaction deadlock errors.
14: Security-related errors, such as permission denied.
15: Syntax errors in the SQL statement.

16: General errors that can be corrected by the user.

Software errors that cannot be corrected by the user and that require system administrator action:

17: The SQL statement caused the database server to run out of resources (such as memory, locks, or disk space for the database) or to exceed some limit set by the system administrator.
18: There is a problem in the database engine software, but the SQL statement completes execution, and the connection to the instance of the database engine is maintained. System administrator action is required.
19: A non-configurable database engine limit has been exceeded and the current SQL batch has been terminated.

System problems:

20-25: Fatal errors, meaning that the database engine task that was executing a SQL batch is no longer running. The task records information about what occurred and then terminates. In most cases, the application connection to the instance of the database engine also terminates. If this happens, depending on the problem, the application might not be able to reconnect.

Database warnings
The number of database warnings in the database analyzer:

For TDS, which includes Sybase and MS SQL Server, this count will always be zero. TDS does not track anything as a warning.

For MySQL, if an OK_Packet is returned, the warning count value in that packet is checked and the total warning field is updated with the returned number.

DNS errors
The number of DNS errors.

End-to-end RTT
The time it takes for a SYN packet to travel from the client to a monitored server and back again.

Error indicator
The number of error indicators.

Failures (application)
The number of operation attributes of all types set to be reported as an application failure.

Failures (TCP)
The number of operations that failed due to one of the TCP errors.
Failures (total)
The total number of failures, that is, all Failures (transport) + all Failures (TCP) + all Failures (application).

Failures (transport)
The number of operations that failed due to problems in the transport layer. You can configure Failures (transport) to include the following: protocol errors, SSL alerts, aborts, and incomplete responses.
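The last-mile arithmetic in the Client RTT description can be checked directly: a packet's one-way serialization delay is its size in bits divided by the link speed. The helper name below is illustrative.

```python
def serialization_delay_ms(packet_bytes, link_bps):
    """One-way serialization delay of a packet, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

syn = 56  # bytes, the TCP SYN packet size cited above
print(serialization_delay_ms(syn, 28_000))     # about 16 ms (28 kbps dial-up)
print(serialization_delay_ms(syn, 1_600_000))  # about 0.28 ms (1.6 Mbps DSL)
```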

Fast operations/transactions
The number of operations or transactions for which the execution time was below a predefined threshold value. These include HTTP/HTTPS page loads, SQL database queries, XML (transactional services) operations, e-mails, DNS requests, Oracle Forms submissions, MQ operations, MS Exchange operations, SAP operations, and transactions (for RUM data).

Fatal error
The number of fatal errors.

HTTP errors
The number of observed HTTP client errors (4xx) and server errors (5xx).

Idle time
The part of the operation time spent between receiving a part of the response and requesting a subsequent part. It enables you to isolate the time taken by the client from the time when the data was still being transmitted on the network.

Incomplete responses
The number of incomplete responses, that is, partial and server-aborted responses, as well as situations when a server did not respond to the request at all or responded in an unrecognizable way.

LDAP errors
The number of LDAP errors, reported in the following categories: LDAP critical errors, LDAP server errors, LDAP security errors, LDAP syntax errors, and LDAP client errors.

Long aborts
For HTTP, this is the number of operations manually stopped by the user, by clicking the Stop or Refresh button or selecting another URL, after at least 8 seconds of waiting for the page download (8 seconds is the default). For XML, this is the number of transactions stopped after at least a threshold number of seconds of waiting (8 seconds is the default).

MQ appl. errors
The number of operation attributes of all types set to be reported as MQ application errors for software services based on an MQ analyzer.

MQ errors
The total number of IBM WebSphere Message Queue errors, including client errors, server errors, protocol errors, and security errors.

MS Exchange errors
The total number of RPC server and RPC protocol errors.
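The long-abort definition above is a threshold comparison: user-stopped operations waiting at least the threshold count as long aborts, and those stopped earlier count as short aborts. A sketch, assuming the default 8-second threshold (function name illustrative):

```python
DEFAULT_ABORT_THRESHOLD_S = 8.0  # default threshold per the description above

def classify_abort(wait_seconds, threshold=DEFAULT_ABORT_THRESHOLD_S):
    """Classify a user-stopped operation as a long or short abort."""
    return "long abort" if wait_seconds >= threshold else "short abort"

print(classify_abort(2.5))   # short abort
print(classify_abort(11.0))  # long abort
```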
Network performance
The percentage of total traffic that did not experience network-related problems (traffic in which the values of loss rate and RTT did not exceed configured thresholds).
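As an illustration of the definition above, network performance can be modeled as the share of traffic samples whose loss rate and RTT both stayed within thresholds. This is a simplified per-sample sketch; the actual product aggregation and threshold names are not specified here.

```python
def network_performance(samples, max_loss_rate, max_rtt_ms):
    """Percentage of traffic samples whose loss rate and RTT both stayed
    within the configured thresholds (simplified per-sample model)."""
    if not samples:
        return None
    ok = sum(1 for loss, rtt in samples
             if loss <= max_loss_rate and rtt <= max_rtt_ms)
    return 100.0 * ok / len(samples)

samples = [(0.1, 40), (0.0, 35), (2.5, 300), (0.2, 90)]  # (loss %, RTT ms)
print(network_performance(samples, max_loss_rate=1.0, max_rtt_ms=200))  # 75.0
```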

Network time
The time the network takes to deliver the request to the server and to deliver the resulting response back to the user. In other words, network time is the portion of the operation time that is spent on transferring data over the network.

Operation/transaction percentage breakdown
A percentage breakdown into slow and fast transactions or operations.

Operation/transaction requests
The number of all operation requests, both requests that became successful operations and requests that were aborted by the client.

Operation/transaction time
The average value of operation or transaction time for all operations or transactions performed on the particular tier.

Operation/transaction time breakdown
Operation time breakdown into the server time, the network time, the idle time, and the other time. For RUM sequence transactions, the other time is a sum of the client time, the client response time, the application processing time, and the idle time. For synthetic transactions, the other time is equal to the client time, and the idle time is not calculated. For RUM Browser data, the other time is equal to the client time, and the redirect and idle times are not calculated. Note that for RUM Browser data, the sum of client, network, and server time does not have to reflect user action time. The other time is not calculated for Dynatrace Performance Network data.

Operation/transaction time percentage breakdown
The breakdown of the average value of operation time into percentages of the server time, the network time, the idle time, and the other time. For RUM sequence transactions, the other time is a sum of the client time, the client response time, the application processing time, and the idle time. For synthetic transactions, the other time is equal to the client time, and the idle time is not calculated. For RUM Browser data, the other time is equal to the client time, and the redirect and idle times are not calculated.
Note that for RUM Browser data, the sum of client, network, and server time does not have to reflect user action time. The other time is not calculated for Dynatrace Performance Network data.

Operation/transaction time with breakdown
The time it took to complete an operation, with an operation time breakdown into the server time, the network time, the idle time, and the other time. For RUM sequence transactions, the other time is a sum of the client time, the client response time, the application processing time, and the idle time. For synthetic transactions, the other time is equal to the client time. For RUM Browser data, the other time is equal to the client time, and the redirect time is not calculated. Note that for RUM Browser data, the sum of client, network, and server time does not have to reflect user action time. The other time is not calculated for Dynatrace Performance Network data.

Operation attributes
The number of operation attributes of all types (type 1 to 5) observed for the given software service.

Operations/Transactions
Depending on the tier definition and on the traffic analyzer used, this metric shows the number of:

HTTP(S) operations
SQL database queries
XML (transactional services) operations
E-mail messages
DNS requests
Oracle Forms submissions
MQ operations
MS Exchange operations
SAP operations
Cerner transactions
Transactions (for RUM data)

Operations with breakdown
The number of operations with an operation breakdown into numbers of slow and fast operations.

Other time
For RUM sequence transactions, the other time is a sum of the client time, the client response time, and the application processing time. For synthetic transactions, the other time is equal to the client time. For RUM Browser data, the other time is equal to the client time, if provided by the Application Monitoring server. The other time is not calculated for Dynatrace Performance Network data.

Percentage of slow operations
The percentage of operations for which the operation time was above a predefined threshold value. The term "operations" refers to operations in the context of the particular protocol, and can mean HTTP/HTTPS page loads, database queries, XML (transactional services) operations, Jolt transactions on a Tuxedo server, e-mails, DNS requests, Oracle Forms submissions, MQ operations, VoIP calls, MS Exchange operations, or SAP operations.

Performance
Depending on the particular tier, the term performance can mean:

For Client tiers: the percentage of transactions completed in a time shorter than the defined time threshold, calculated as 100% - 100% * (slow transactions / all transactions).

For the Client optimized network tier: the percentage of compressed bytes.

For other Network tiers: the percentage of total traffic that did not experience network-related problems.

For Data center tiers: for transactional protocols, this is the percentage of software service operations completed in a time shorter than the performance threshold.
For transactionless, TCP-based protocols, this is the percentage of monitoring intervals in which the user wait time per kilobyte of data was shorter than the threshold value.

Primary Reason for Slowness
The primary reason for slowness is one of the general categories causing operations to be slow. The categories include data center, network, application design, and client/3rd party.
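As an illustration only, the relationship between the detailed reasons (listed under the "Slow operations (...)" metrics in this appendix) and the primary categories can be sketched as a lookup. This is not the product's algorithm; the mapping keys and the "multiple reasons" fallback are a simplified model.

```python
# Detailed slowness reasons -> primary categories, as listed under the
# "Slow operations (...)" metrics in this appendix (illustrative names).
REASON_CATEGORY = {
    "latency": "network",
    "loss rate": "network",
    "network - other": "network",
    "# of components": "application design",
    "redirect time": "application design",
    "request size": "application design",
    "response size": "application design",
}

def primary_category(detailed_reasons):
    """Return the single primary category, or 'multiple reasons' when the
    detailed reasons span more than one category (simplified model)."""
    cats = {REASON_CATEGORY.get(r, r) for r in detailed_reasons}
    return cats.pop() if len(cats) == 1 else "multiple reasons"

print(primary_category(["latency"]))                   # network
print(primary_category(["latency", "response size"]))  # multiple reasons
```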

Primary Reason for Slowness details
The details of the primary reason for slowness.

Realized bandwidth
The actual transfer rate of server data when the transfer attempt occurred. This metric takes into account factors such as loss rate (retransmissions).

Redirect time
The average amount of time spent between the time when a user went to a particular URL and the time this user was redirected to another URL and issued a request to that new URL. The difference between Redirect time and HTTP redirect time is that the former counts all operations, while the latter refers only to those operations for which redirection actually took place.

Requests (without aborts)
The number of requests, excluding aborts.

Requests breakdown
The number of operation or transaction requests, with a breakdown into numbers of slow, fast, aborted, and failed operations or transactions.

Requests breakdown (with tooltip)
Operation or transaction breakdown into numbers of slow, fast, aborted, and failed operations or transactions.

Requests percentage breakdown (with tooltip)
Operation or transaction percentage breakdown into slow, fast, aborted, and failed operations or transactions.

Response messages
The total number of protocol-specific server responses. This includes both errors and other identifiable response strings, as configured in monitoring.

RMI/Simple parser errors
The total number of RMI/Simple parser errors.

RTT measurements
The number of RTT measurements. An RTT measurement occurs during every TCP handshake, so it provides some insight into the number of attempted TCP sessions and the potential accuracy of the RTT measurements that are reported. This metric is calculated only for the following tiers: RUM sequence transactions, Citrix/WTS (presentation), Client optimized network (for LAN only), and tiers based on TCP-based analyzers.
SAP errors
The number of errors detected at the protocol level in communication between the SAP application server and the SAP GUI client, as well as between the SAP application server and third-party clients using Remote Function Calls (RFC).

Server loss rate
The percentage of total packets sent by a server that were lost - between the AMD and the server - and needed to be retransmitted. This metric is calculated only for the following tiers: RUM sequence transactions, Citrix/WTS (presentation), Client optimized network (for WAN and Pass-through deployments only), and tiers based on TCP-based analyzers.

Server RTT
The time it takes for a SYN packet to travel from the AMD to a monitored server and back again. This metric is calculated only for the following tiers: RUM sequence transactions, Citrix/WTS (presentation), Client optimized network (for LAN only), and tiers based on TCP-based analyzers.

(Diagram: SYN, SYN ACK, and ACK packets exchanged between Client, AMD, and Server, with time points T1 through T9.)

Server TCP data packets
The total number of TCP packets sent by the servers, excluding the traffic control packets. This metric is calculated only for the following tiers: RUM sequence transactions, Citrix/WTS (presentation), Client optimized network (for LAN only), and tiers based on TCP-based analyzers.

Server time
The time it took the server to produce a response for the given request.

Server Volume
The number of bytes transmitted by the server.

Short aborts
The number of transactions stopped before timeout. For HTTP, this is the number of page loads manually stopped by the user, by clicking the Stop or Refresh button or selecting another URL, before 8 seconds of waiting for the page download (8 seconds is the default). For XML, this is the number of transactions stopped before a threshold number of seconds of waiting (8 seconds is the default).

Slow operations/transactions
The number of operations or transactions for which the execution time was above a predefined threshold value. These include HTTP/HTTPS page loads, SQL database queries, XML (transactional services) operations, e-mails, DNS requests, Oracle Forms submissions, MQ operations, MS Exchange operations, SAP operations, and transactions (for RUM data).

Slow operations (application design)
The number of slow operations caused by the application design category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (application design - # of components)
The number of slow operations caused by the number of components, one of the detailed reasons in the application design category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (application design - redirect time)
The number of slow operations caused by redirect time, one of the detailed reasons in the application design category, as calculated using the primary reason for slowness

algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (application design - request size)
The number of slow operations caused by request size, one of the detailed reasons in the application design category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (application design - response size)
The number of slow operations caused by response size, one of the detailed reasons in the application design category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (client/3rd party)
The number of slow operations caused by the client/3rd party category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (data center)
The number of slow operations caused by the data center category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (multiple reasons)
The number of slow operations caused by multiple reasons, that is, when the algorithm was not able to determine one primary reason for slowness.

Slow operations (network)
The number of slow operations caused by the network category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.
Slow operations (network - latency)
The number of slow operations caused by latency, one of the detailed reasons in the network category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (network - loss rate)
The number of slow operations caused by loss rate, one of the detailed reasons in the network category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

Slow operations (network - other)
The number of slow operations caused by factors other than latency or loss rate, one of the detailed reasons in the network category, as calculated using the primary reason for slowness algorithm. Note that this includes only successful operations. Failures and aborted operations are not taken into account.

SMTP errors
The total number of SMTP errors.

SSL errors
The number of all SSL alerts. This metric is the sum of SSL errors 1, SSL errors 2, and Other SSL errors.

TCP errors
The total number of TCP errors. Those errors may indicate server or application problems, so measuring them is critical to understanding the issues that may affect end-user experience. AMDs measure and report on the following types of TCP errors:

Connection refused errors - The client attempts to open a TCP session with a server, which rejects the request: the SYN packet from the client is followed by a RESET packet from the server, with matching TCP sequence numbers. This error is typically caused by resource exhaustion on the server, which is unable to accept more concurrent TCP sessions. This may be either a configuration issue (too few resources allocated in the kernel) or lack of memory. SYN flood attacks typically result in servers being unable to accept new connections.

Server session termination error - The server unexpectedly terminates a connection that was successfully opened, sending a RESET packet to the client. Such an error originates at an application using the monitored TCP session. It does not necessarily mean application failure; usually it means that the application encountered a condition in which it decided to immediately terminate the session with the client, for example, because of an application security policy violation by the client.

Session abort - The client unexpectedly terminates a connection that was successfully opened, sending a RESET packet to the server. These errors are inspected in the context of the client application and may or may not be reported. For example, a browser running HTTP may terminate the load of a GIF file if it is older than the one it had previously cached, and this is normal behavior.
However, if all connections to the server are terminated because the user hits the STOP button, then this is abnormal session termination and is reported as "Aborted operation" or "Stopped Page".
Client not responding errors (server timeout errors) - The server networking stack assumes that the network connection to the client exists, but the client remains idle and does not respond. In such a case, the server closes the TCP session with a RESET packet. Such a condition may occur when the client has been silently disconnected from the network, for example due to a link failure, or when the client has crashed. Note that this error will not occur if the client has ended the session gracefully, e.g. by closing the client application.
Server not responding errors (client timeout errors) - The client networking stack assumes that the network connection to the server exists, but the server remains idle and does not respond. In such a case, the client closes the TCP session with a RESET packet. This may occur either during the session setup phase (no response to the SYN packet) or during a normal data exchange. Such a situation may result from intermittent network problems between the client and the server. When traffic is routed through asymmetric paths across the Internet, which is often the case, the path from the server to the client may be broken.
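As a hypothetical sketch (the function name and rule set are illustrative, not the AMD's actual logic), the classification above comes down to who sends the RESET and in what state the session is:

```python
def classify_tcp_reset(sender, session_established, peer_idle):
    """Classify a TCP RESET into one of the error types described above.

    sender: "client" or "server" -- the side that sent the RESET packet
    session_established: True if the TCP handshake had completed
    peer_idle: True if the RESET sender's peer had stopped responding
    """
    if sender == "server":
        if not session_established:
            return "Connection refused"        # RESET answers the client's SYN
        if peer_idle:
            return "Client not responding"     # server times out an idle client
        return "Server session termination"    # server aborts an open session
    if peer_idle:
        return "Server not responding"         # client times out an idle server
    return "Session abort"                     # client aborts an open session
```

For example, a RESET sent by the server in direct response to a SYN falls into the connection-refused bucket, while the same packet sent after the handshake completed is a server session termination.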

Total bandwidth usage The number of all transmitted bits (client + server) per second. Total network time The difference between Total transaction time and the sum of Total server time and Total redirect time. This metric is calculated only for the Data center tiers and for the following dimension combinations: Application-Tier and Application-Transaction-Tier. Total redirect time The sum of the averages of redirect time of all operations assigned to a transaction. This metric indicates the redirect time used to achieve the result of multi-step transactions. It is calculated only for Data center tiers and for the following dimension combinations: Application-Tier and Application-Transaction-Tier. Total server time The sum of the averages of server time of all operations assigned to a transaction. This metric indicates the server time used to achieve the result of multi-step transactions. It is calculated only for Data center tiers and for the following dimension combinations: Application-Tier and Application-Transaction-Tier. Total transaction time The sum of the averages of operation time of all operations assigned to a transaction. This metric indicates the total time used to achieve the result of multi-step transactions. It is calculated only for Data center tiers and for the following dimension combinations: Application-Tier and Application-Transaction-Tier. Transactional service errors The total number of transactional service errors. Transaction errors The number of errors that originate from Synthetic Monitoring transactions or RUM sequence transactions. Two-way loss rate The average loss rate calculated for both directions: the sum of client and server retransmitted packets divided by the sum of total client and server packets. Volume The number of all transmitted bytes (client + server).
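The Total network time and Two-way loss rate definitions above are plain arithmetic; a minimal sketch, with illustrative variable names:

```python
def total_network_time(total_transaction_time, total_server_time, total_redirect_time):
    # Total network time = Total transaction time - (Total server time + Total redirect time)
    return total_transaction_time - (total_server_time + total_redirect_time)

def two_way_loss_rate(client_retransmitted, server_retransmitted,
                      client_packets, server_packets):
    # Retransmitted packets in both directions as a percentage of
    # all packets transmitted in both directions
    return 100.0 * (client_retransmitted + server_retransmitted) \
           / (client_packets + server_packets)
```

For instance, a transaction with 10 s total time, 6 s server time, and 1 s redirect time yields 3 s of network time.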
Synthetic and sequence transaction data

This data view provides dimensions and metrics to analyze the monitored transactions.

Synthetic and sequence transaction data dimensions

AMD UUID UUID (Universal Unique Identifier) of the AMD that produced the data. Analyzer group The logical group of analyzers based on the type of the analyzed traffic. For more information, see Concept of Protocol Analyzers. Application A universal container that can accommodate transactions.

Application Monitoring Correlation ID Java and .NET Monitoring GUID. Application Monitoring Server ID The identifier of the Application Monitoring server. Application Monitoring System Profile The name of the Application Monitoring System profile used. Business day The classification of days as business or non-business, as defined in the Business Hours Configuration tool. Business hour The classification of hours as business or non-business, as defined in the Business Hours Configuration tool. Possible values are Business and Off-business. Citrix server IP The IP address of a Citrix server. Client area Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas. Client city Geographical data about the client site. Client country Geographical data about the client country. Client geographical region Geographical data about the client region. Client group The client's group, as manually defined in Central Analysis Server. Client internal IP address The client IP address as seen in the client's local network. Client IP address The IP address of the client. Client region Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas. Client site Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions, clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client site description The optional description of the client site.

Client site ID When sites are ASes, Client Site ID contains the AS number, which is also given in Client ASN. For manual sites, Client Site ID is identical to Client site and contains the site name as defined in your site configuration. Sites based on CIDR blocks or subnets are identified by IP addresses. Client site type One of the site types: AS, Active, CIDR Block, Default, External, Manual, Network or Predefined. An External site is defined by a user in external configuration files. A Manual site is defined by a user via the configuration interface on the report server. Predefined sites are based on a mapping contained in a special configuration file. Client site UDL A dimension designed to filter only the User Defined Links. By default it is set to true (Yes) for the WAN Optimization Sites report. Client site WAN Optimized Link Indicates whether the site to which the client belongs is selected as both a UDL and a WAN optimized link. Client VPN The name of the VPN in which the user registered. Client WINS name The client's computer name resolved by a WINS server. Conference call id The identifier of the VoIP conference call. Data source The name of the data source, in case you have configured a number of associated report servers to be used as data sources on the DMI screen. Day of the week The textual representation of the day of the week. Dynatrace Network Analyzer GUID The number identifying a Synthetic Monitoring transaction. Dynatrace Network Analyzer link status An indicator of Transaction Trace report availability. When a problem occurs, for example, a transaction is too slow, and the Transaction Trace file contains such information, a full Transaction Trace report may be requested directly from the CAS.
The indicator has three states: the report can be generated (Click to generate report), the report has been requested (Report requested), and the report is ready for viewing (Click to view report). Dynatrace Network Analyzer request time The time when a Transaction Trace report was requested from the CAS. Hour of the day The numerical representation of the hour of the day, that is, numbers from 0 to 23. Link to Dynatrace Network Analyzer file A link to the Transaction Trace file.

Link to Dynatrace Network Analyzer report A link to the Transaction Trace report. Process ID The identifier of a process running on a Citrix server on which Cerner applications are running. Screenshot The Transaction Trace screenshot. Screenshot thumbnail The Transaction Trace screenshot thumbnail. Tier A specific point of the application where data is measured. It can be a specific traffic type or a server. Tier sequence number The sequence number of a tier is determined by the order in which you define your tiers, and these numbers in turn determine the order in which data is displayed on the report. Time The time stamp of the data presented on the report. Transaction A universal container that can accommodate operations. This dimension refers only to transactions without errors. Transaction GUID The identification number of a transaction. Transaction source Indicates whether the transaction comes from Synthetic Monitoring probes, an Agentless Monitoring Device, Cerner RTMS, or is user-defined. Transaction timestamp The exact time when the transaction occurred. User name The client's name, determined from an HTTP cookie (requires configuration on the AMD), an HTTP authentication header, or static mapping.

Synthetic and sequence transaction data metrics

Aborted transactions The number of aborted transactions (transaction error code: -3). An aborted transaction is reported when one or more consecutive URLs detected in the traffic match the defined transaction steps, but the next URL detected does not match the transaction definition. Affected users (availability) The number of unique users that were affected by availability related problems. Affected users (performance) The number of users that experienced application performance problems. For transactional protocols, a problem is noted if at least one operation is completed in a time longer than the performance threshold.
For transactionless TCP-based protocols, a problem is noted if user wait per kb of data is longer than the threshold value.
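For transactional protocols, such a count could be derived as in the following sketch (the function name and data layout are assumptions for illustration, not the CAS implementation):

```python
def affected_users_performance(operation_times_by_user, threshold):
    """Count users with at least one operation slower than the performance threshold.

    operation_times_by_user: mapping of user name -> list of operation times (seconds)
    threshold: performance threshold in seconds
    """
    return sum(
        1
        for times in operation_times_by_user.values()
        if any(t > threshold for t in times)
    )
```

With a 2-second threshold, a user whose operations took 0.4 s and 3.1 s counts as affected, while a user whose single operation took 0.2 s does not.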

Application Delivery Channel Delay In a WAN optimized scenario, Application Delivery Channel Delay (ADCD) is a quality metric expressed in milliseconds. The ADCD is determined by initial observation of the traffic between a client and a server. ADCD is a derivative of RTT measured on a WAN link, expressed in time, and as such it can be understood as latency, where a larger ADCD indicates a higher network latency. ADCD also includes time spent in the data center WOC for traffic buffering and processing. A change of ADCD from its initial value reflects a change of quality in the WAN optimization service. For example, a sudden increase of ADCD would suggest that the quality of the service has worsened; conversely, a sudden decrease of the ADCD value could suggest an improvement in WAN optimization. Application Monitoring Visibility Application Monitoring visibility. Application performance For transactional protocols, this is the percentage of software service transactions completed in a time shorter than the performance threshold. For transactionless TCP-based protocols, this is the percentage of monitoring intervals in which user wait time per kb of data was shorter than the threshold value. Application processing time The average time spent by the software service on operation processing. Attempts (transport) The total number of transactions, including transactions with errors. Availability (total) The percentage of successful attempts, calculated using the following formula: Availability (total) = 100% * (All attempts - All failures) / All attempts, where All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure, and All failures = all failures (transport) + all failures (TCP) + all failures (application). Client bandwidth usage The number of client bits per second. Client bytes The number of bytes sent by the clients.
Note that this includes headers. Client packets The number of packets sent by the client. Client packets/sec The number of packets per second, sent by the clients. Client response time The average time spent by the client side on transaction processing. Client RTT Client RTT is the time it takes for a SYN ACK packet (sent by the server) to travel from the AMD to the client and back again, as shown in the following picture.

[Diagram: TCP three-way handshake between Client, AMD, and Server, with timestamps T1-T9 marking the SYN, SYN ACK, and ACK packets; Client RTT spans T5 to T8.]
A client RTT measurement begins when the SYN ACK packet from the server to the client passes by the AMD (T5). The packet reaches the client machine (T6) and is processed, while an acknowledgment is sent back to the server (T7). The client processing time impact (T7-T6) is again very low. The client RTT measurement ends when the ACK packet reaches the AMD (T8). Therefore, the Client Round Trip Time is calculated as T8-T5. Depending on the actual setup, Client RTT measurements may vary dramatically. In corporate environments, it may be a few milliseconds for LAN-connected clients or a few dozen milliseconds for WAN-connected clients. In the case where the client is coming from the Internet, the end-to-end Client RTT measurement is a compound of the transit time through the Internet backbone as well as through the "last mile" access network. The impact of the last mile can be easily calculated, based on the connection speed and the packet size (56 B in the case of a TCP SYN packet). For a 28 kbps dial-up connection, this amounts to 16 milliseconds one way, or 32 milliseconds for a complete round-trip measurement. For a 1.6 Mbps DSL line, this adds roughly 280 microseconds one way, or 560 microseconds to the complete client RTT measurement. Client time Client time is the interval between the last data packet of the transaction response message sent by the server and the first acknowledgment packet sent by the client. Client time is similar to server time, but is measured in the context of the transaction response message. Client time (failed transactions) The client time for all failed transactions (transactions with a -2 status code). This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source.
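The last-mile arithmetic above can be checked directly; a small sketch using the dial-up example from the text:

```python
SYN_SIZE_BITS = 56 * 8  # a 56-byte TCP SYN packet

def one_way_delay_ms(link_bps):
    # Serialization delay of the SYN packet on the access link, in milliseconds
    return 1000.0 * SYN_SIZE_BITS / link_bps

dialup_one_way = one_way_delay_ms(28_000)   # 28 kbps dial-up: 16 ms one way
dialup_round_trip = 2 * dialup_one_way      # 32 ms for the round-trip measurement
dsl_one_way = one_way_delay_ms(1_600_000)   # same calculation for a 1.6 Mbps DSL line
```

This models only the serialization delay on the access link; propagation and queuing delays are not included.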
Client time (requests) The client time for all transaction requests (both requests that became successful transactions and requests that ended as transactions with errors). This metric is valid only for 'Transactions (Synthetic Monitoring)' transaction source. Custom metric (1)(avg) The average value of user-defined metrics in category 1 observed in the HTTP or XML traffic. Custom metric (1)(cnt) The number of occurrences of user-defined metrics in category 1 observed in the HTTP or XML traffic. Custom metric (1)(sum) The sum of all values of user-defined metrics in category 1 observed in the HTTP or XML traffic.

Custom metric (2)(avg) The average value of user-defined metrics in category 2 observed in the HTTP or XML traffic. Custom metric (2)(cnt) The number of occurrences of user-defined metrics in category 2 observed in the HTTP or XML traffic. Custom metric (2)(sum) The sum of all values of user-defined metrics in category 2 observed in the HTTP or XML traffic. Custom metric (3)(avg) The average value of user-defined metrics in category 3 observed in the HTTP or XML traffic. Custom metric (3)(cnt) The number of occurrences of user-defined metrics in category 3 observed in the HTTP or XML traffic. Custom metric (3)(sum) The sum of all values of user-defined metrics in category 3 observed in the HTTP or XML traffic. Custom metric (4)(avg) The average value of user-defined metrics in category 4 observed in the HTTP or XML traffic. Custom metric (4)(cnt) The number of occurrences of user-defined metrics in category 4 observed in the HTTP or XML traffic. Custom metric (4)(sum) The sum of all values of user-defined metrics in category 4 observed in the HTTP or XML traffic. Custom metric (5)(avg) The average value of user-defined metrics in category 5 observed in the HTTP or XML traffic. Custom metric (5)(cnt) The number of occurrences of user-defined metrics in category 5 observed in the HTTP or XML traffic. Custom metric (5)(sum) The sum of all values of user-defined metrics in category 5 observed in the HTTP or XML traffic. End-to-end RTT The time it takes for a SYN packet to travel from the client to a monitored server and back again. Errors The total number of TCP and SSL errors.

Failed transactions For Synthetic Monitoring transactions, this is the number of transactions for which the give-up threshold was exceeded. For RUM transactions, failed transactions are all transactions with an error status other than -3 (aborted). Failed transactions time breakdown (Active Monitoring) The breakdown of trace time into client time, network time, and server time, calculated only for unavailable transactions. Failures (total) The total number of failures, that is, all Failures (transport) + all Failures (TCP) + all Failures (application). Failures (transport) The number of operations that failed due to problems in the transport layer. These include protocol errors, SSL alerts classified as a failure, and incomplete responses selected to be classified as failures. Fast transactions The number of transactions for which the transaction time was below a predefined threshold value. HTTP abort error This error is reported when one of the URLs in a transaction detected in monitored traffic does not match the transaction definition. This refers to any URL in a sequence of URLs, except the first one. HTTP client errors (4xx) The sum of all HTTP client errors (4xx). This includes four categories of errors (4xx): by default, HTTP Unauthorized (401, 407) errors, HTTP Not Found (404) errors, custom client (4xx) errors, and Other HTTP (4xx) errors. The contents of the first three categories can be configured by users. HTTP client errors - category 3 (default name) The number of HTTP custom client errors (4xx). By default, there is no specific error type assigned here. HTTP errors The number of observed HTTP client errors (4xx) and server errors (5xx). HTTP not found errors 404 (default name) The number of observed custom HTTP 404 Not found errors. HTTP other client errors (4xx) The number of HTTP other client errors (4xx). There are four categories of HTTP client errors (4xx), of which three can be configured by users.
By default, the first category includes HTTP Unauthorized (401, 407) errors, and the second category HTTP Not Found (404) errors. The third category contains no default error types and can be configured by a user. Finally, the group of Other HTTP (4xx) errors contains all errors that do not fall into any other client errors category. The number is calculated based on the formula: [HTTP errors 4xx] - [HTTP Not Found errors 404] - [HTTP Not Authorized (401, 407)] - [HTTP errors configured by user].
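The subtraction formula above can be written out as code (the category names are shortened into illustrative variable names):

```python
def http_other_client_errors(all_4xx, not_found_404, unauthorized_401_407,
                             user_configured_4xx):
    # Other HTTP (4xx) errors = all 4xx errors minus the three
    # configurable categories (404, 401/407, user-configured)
    return all_4xx - not_found_404 - unauthorized_401_407 - user_configured_4xx
```

For example, 100 total 4xx errors with 40 Not Found, 10 Unauthorized, and 5 user-configured errors leave 45 in the "other" bucket.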

HTTP other server errors (5xx) The number of HTTP server errors (5xx) that do not fall into categories 1 or 2 of custom HTTP server errors (5xx). HTTP server errors (5xx) The number of all observed HTTP server errors (5xx). HTTP server errors category 1 (default name) The number of custom HTTP server errors (5xx), category 1. By default, there are no specific error types assigned to this category. HTTP server errors category 2 (default name) The number of custom HTTP server errors (5xx), category 2. By default, there are no specific error types assigned to this category. HTTP timeout error This type of error is reported if the time between the occurrence of consecutive URLs constituting a transaction exceeds the predefined timeout value. HTTP unauthorized errors 401, 407 (default name) The number of observed custom HTTP authentication related errors. These include "HTTP 401 Unauthorized" and "HTTP 407 Proxy authentication required" errors. HTTP servers generate "401 Unauthorized" errors when anonymous clients are not authorized to view the requested content and must provide authentication information in the WWW-Authenticate request header. The 401 errors are similar to "403 Forbidden" errors, but are used when authentication is possible and has failed or has not yet been provided. The 407 error is similar to 401, but indicates that the client should first authenticate with a proxy server. The AMD will report these errors only if server-level authentication has been configured. Simple and basic user access policies are common in Web sites that do not store user-sensitive and/or business critical information. Most commercial-grade applications based on HTTP, such as home banking applications or online shopping sites, rely on application-level authentication rather than server-level authentication.
Such applications are designed in such a way that even if user authentication fails, the HTTP server usually sends the 200 OK response code and an authentication error message in the page content. Therefore, the 401 Unauthorized and 407 Proxy authentication required error codes are quite rare in commercial environments. Incomplete transaction error This error indicates that a transaction was reported although the monitored traffic did not match the first steps in the transaction definition. Max giveup time threshold The maximum time after which a Synthetic Monitoring transaction is considered incomplete. Max slow transaction threshold The maximum slow transaction threshold. If the transaction time is longer than the threshold, the transaction is considered to be slow. Min giveup time threshold The minimum time after which a Synthetic Monitoring transaction is considered incomplete.

Min slow transaction threshold The minimum slow transaction threshold. Network time The time the network takes to deliver the request to the server and to deliver the resulting response back to the user. In other words, network time is the portion of the operation time that is spent on transferring data over the network. Network time (failed transactions) The network time for all failed transactions (transactions with a -2 status code). This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source. Network time (requests) The network time for all transaction requests (both requests that became successful transactions and requests that ended as transactions with errors). This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source. No response error The number of errors of the category No response. These errors are reported when a request is detected in the monitored traffic, but the actual operation following this request is not observed. Number of results Depending on the transaction source, the number of results may mean: for a RUM transaction, the number of subcomponents of an error-free transaction (note that this metric is recorded at the time when the monitored transaction is closed); for a Private Enterprise transaction, the number of transaction requests; for an RTMS transaction, the number of records returned by a particular RTMS timer. Operation attributes The number of operation attributes of all types (type 1 to 5), observed for the given software service. Operation attributes (1) The number of operation attributes of type 1, observed for the given software service. Operation attributes (2) The number of operation attributes of type 2, observed for the given software service. Operation attributes (3) The number of operation attributes of type 3, observed for the given software service.
Operation attributes (4) The number of operation attributes of type 4, observed for the given software service. Operation attributes (5) The number of operation attributes of type 5, observed for the given software service. Percentage of affected users (availability) The percentage of users that were affected by availability related problems. Percentage of affected users (performance) The percentage of users that experienced application performance problems. Percentage of slow transactions The percentage of transactions for which the transaction time was above a predefined threshold value.

RTT measurements The number of RTT measurements. Server bandwidth usage The number of server bits per second. Server bytes The number of bytes sent by servers. The number includes headers. Server loss rate The percentage of total packets sent from a server that were lost and needed to be retransmitted. Server packets The number of packets sent by the servers. Server packets/sec The number of packets per second, sent by the servers. Server RTT The time it takes for a SYN packet to travel from the AMD to a monitored server and back again.
[Diagram: TCP three-way handshake between Client, AMD, and Server, with timestamps T1-T9; Server RTT spans the interval between the client's SYN passing the AMD and the server's SYN ACK returning to the AMD.]
Server time The time it took the server to produce a response for the given request. Server time (failed transactions) The server time for all failed transactions (transactions with a -2 status code). This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source. Server time (requests) The server time for all transaction requests (both requests that became successful transactions and requests that ended as transactions with errors). This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source. Slow transactions The number of transactions for which the transaction time was above a predefined threshold value. Time resolution Time resolution. Total bandwidth usage The number of all transmitted bits (client + server) per second. Total bandwidth usage with breakdown Total bandwidth usage (client + server) with breakdown into client and server bandwidth usage. Total bytes The number of all transmitted bytes (client + server).

Total packets The number of all transmitted packets (client + server). Total packets/sec The maximum value of Total packets/sec over the time covered by the report. Transaction requests The number of all transaction requests, both requests that became successful transactions and requests that ended as transactions with errors. Transaction requests breakdown In the case of Synthetic Monitoring transactions, the breakdown of all transaction requests into requests that became slow, fast, and unavailable transactions. In the case of RUM transactions, the breakdown of all transaction requests into requests that ended as slow, fast, aborted, and failed transactions. Transaction requests include both requests that became successful transactions and requests that ended as transactions with errors. For RTMS transactions, use the "Transactions breakdown" metric. Transaction requests time breakdown The breakdown of trace time into client time, network time, and server time, calculated for transactions coming from Synthetic Monitoring agents. Transactions The number of transactions. Transactions/min The number of transactions per minute. Transactions/sec The number of transactions per second. Transactions breakdown The transaction breakdown into numbers of slow and fast transactions. Transaction time The time it took to complete a transaction. Transaction time breakdown The transaction time breakdown into client time, client response time, server time, network time, and application processing time. Unavailability (total) Unavailability, calculated as 100% - Availability (total). Unique users The number of unique users detected in the monitored traffic.

Synthetic and sequence transaction baselines

This data view provides dimensions and metrics to analyze the baselines of the monitored transactions.

Synthetic and sequence transaction baselines dimensions

Analyzer group The logical group of analyzers based on the type of the analyzed traffic.
For more information, see Concept of Protocol Analyzers.

Application A universal container that can accommodate transactions. Baseline Source The baseline source. Two possible values: pinned or average. Business hour The classification of hours as business or non-business, as defined in the Business Hours Configuration tool. Possible values are Business and Off-business. Client area Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas. Client city Geographical data about the client site. Client country Geographical data about the client country. Client geographical region Geographical data about the client region. Client group The client's group, as manually defined in Central Analysis Server. Client internal IP address The client IP address as seen in the client's local network. Client IP address The IP address of the client. Client region Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas. Client site Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions, clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas. Client site description The optional description of the client site. Client site ID When sites are ASes, Client Site ID contains the AS number, which is also given in Client ASN.
For manual sites, Client Site ID is identical to Client site and contains the site name as defined in your site configuration. Sites based on CIDR blocks or subnets are identified by IP addresses. Client site type One of the site types: AS, Active, CIDR Block, Default, External, Manual, Network or Predefined. An External site is defined by a user in external configuration files. A Manual site is defined by a user via the configuration interface on the report server. Predefined sites are based on a mapping contained in a special configuration file. Client site UDL A dimension designed to filter only the User Defined Links. By default it is set to true (Yes) for the WAN Optimization Sites report. Client site WAN Optimized Link Indicates whether the site to which the client belongs is selected as both a UDL and a WAN optimized link. Client VPN The name of the VPN in which the user registered. Client WINS name The client's computer name resolved by a WINS server. Conference call id The identifier of the VoIP conference call. Data source The name of the data source, in case you have configured a number of associated report servers to be used as data sources on the DMI screen. Tier A specific point of the application where data is measured. It can be a specific traffic type or a server. Tier sequence number The sequence number of a tier is determined by the order in which you define your tiers, and these numbers in turn determine the order in which data is displayed on the report. Time The time stamp of the data presented on the report. Transaction A universal container that can accommodate operations. This dimension refers only to transactions without errors. Transaction source Indicates whether the transaction comes from Synthetic Monitoring probes, an Agentless Monitoring Device, Cerner RTMS, or is user-defined. User name The client's name, determined from an HTTP cookie (requires configuration on the AMD), an HTTP authentication header, or static mapping.

Synthetic and sequence transaction baselines metrics

Aborted transactions The number of aborted transactions (transaction error code: -3). An aborted transaction is reported when one or more consecutive URLs detected in the traffic match the defined transaction steps, but the next URL detected does not match the transaction definition.
Application Delivery Channel Delay
In a WAN optimized scenario, Application Delivery Channel Delay (ADCD) is a quality metric expressed in milliseconds. ADCD is determined by initial observation of the traffic between a client and a server. It is a derivative of RTT measured on a WAN link, expressed in time, and as such it can be understood as latency, where a larger ADCD indicates a higher network latency. ADCD also includes time spent in the data center WOC for traffic buffering and processing. A change of ADCD from its initial value reflects a change of quality in the WAN optimization service. For example, a sudden increase of ADCD would suggest that the quality of the service has worsened; conversely, a sudden decrease of the ADCD value could suggest an improvement in WAN optimization.

Application performance
For transactional protocols, this is the percentage of software service transactions completed in a time shorter than the performance threshold. For transactionless TCP-based protocols, this is the percentage of monitoring intervals in which user wait time per kilobyte of data was shorter than the threshold value.

Application processing time
The average time spent by the software service on operation processing.

Attempts (transport)
The total number of transactions, including transactions with errors.

Availability (total)
The percentage of successful attempts, calculated using the following formula:

Availability (total) = 100% * (All attempts - All failures) / All attempts

where

All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure
All failures = all failures (transport) + all failures (TCP) + all failures (application)

Client bandwidth usage
The number of client bits per second.

Client bytes
The number of bytes sent by the clients. Note that this includes headers.

Client packets
The number of packets sent by the clients.

Client packets/sec
The number of packets per second sent by the clients.

Client response time
The average time spent by the client side on transaction processing.
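The Availability (total) formula above maps directly onto code. A minimal sketch; the parameter names are illustrative, not DC RUM API names.

```python
def availability_total(failures_transport, failures_tcp, failures_application,
                       successful_operations, standalone_hits=0, aborts=0):
    """Availability (total) = 100% * (All attempts - All failures) / All attempts."""
    all_failures = failures_transport + failures_tcp + failures_application
    # Standalone hits and aborts counted here are those NOT classified as failures.
    all_attempts = (all_failures + successful_operations
                    + standalone_hits + aborts)
    if all_attempts == 0:
        return None  # no attempts observed in the interval
    return 100.0 * (all_attempts - all_failures) / all_attempts
```

For example, 8 successful operations alongside one transport failure and one TCP failure give 10 attempts, 2 failures, and 80% availability.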
Client RTT
Client RTT is the time it takes for a SYN ACK packet (sent by a server) to travel from the AMD to the client and back again, as shown in the following figure.

(Figure: TCP handshake timeline between Client, AMD, and Server, with time stamps T1 through T9 marking the SYN, SYN ACK, and ACK packets; Client RTT spans T5 to T8.)

A client RTT measurement begins when the SYN ACK packet from the server to the client passes the AMD (T5). The packet reaches the client machine (T6) and is processed, and an acknowledgment is sent back to the server (T7). The client processing time impact (T7-T6) is again very low. The client RTT measurement ends when the ACK packet reaches the AMD (T8). Therefore, the Client Round Trip Time is calculated as T8-T5. Depending on the actual setup, Client RTT measurements may vary dramatically. In corporate environments, it may be a few milliseconds for LAN-connected clients or a few dozen milliseconds for WAN-connected clients. Where the client is coming from the Internet, the end-to-end Client RTT measurement is a compound of transit time through the Internet backbone as well as through the "last mile" access network. The impact of the last mile can be easily calculated, based on the connection speed and the packet size (56 bytes in the case of a TCP SYN packet). For a 28 kbps dial-up connection, this amounts to 16 milliseconds one way, or 32 milliseconds for a complete round-trip measurement. For a 1.6 Mbps DSL line, this contributes about 560 microseconds toward the complete client RTT measurement.

Client time
The time interval between the last data packet of the transaction response message sent by the server and the first packet of the acknowledgment sent back by the client. Client time is similar to server time, but measured in the context of the transaction response message.

Client time (failed transactions)
The client time for all failed transactions (transactions with a -2 status code). This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source.

Client time (requests)
The client time for all transaction requests (both requests that became successful transactions and requests that ended as transactions with errors).
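The last-mile arithmetic above generalizes to any access-link speed: the round-trip contribution is twice the one-way serialization time of the 56-byte SYN packet. A quick sketch with an illustrative helper name, not part of the product:

```python
def last_mile_rtt_ms(link_bps, packet_bytes=56):
    """Round-trip serialization delay of one packet over the access link, in ms.

    56 bytes is the TCP SYN packet size used in the example above.
    """
    one_way_seconds = packet_bytes * 8 / link_bps
    return 2 * one_way_seconds * 1000.0

# 28 kbps dial-up: 16 ms one way, 32 ms round trip
# 1.6 Mbps DSL:   about 0.56 ms (560 microseconds) round trip
```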
This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source.

Custom metric (1)(avg)
The average value of user-defined metrics in category 1 observed in the HTTP or XML traffic.

Custom metric (1)(cnt)
The number of occurrences of user-defined metrics in category 1 observed in the HTTP or XML traffic.

Custom metric (1)(sum)
The sum of all values of user-defined metrics in category 1 observed in the HTTP or XML traffic.

Custom metric (2)(avg)
The average value of user-defined metrics in category 2 observed in the HTTP or XML traffic.

Custom metric (2)(cnt)
The number of occurrences of user-defined metrics in category 2 observed in the HTTP or XML traffic.

Custom metric (2)(sum)
The sum of all values of user-defined metrics in category 2 observed in the HTTP or XML traffic.

Custom metric (3)(avg)
The average value of user-defined metrics in category 3 observed in the HTTP or XML traffic.

Custom metric (3)(cnt)
The number of occurrences of user-defined metrics in category 3 observed in the HTTP or XML traffic.

Custom metric (3)(sum)
The sum of all values of user-defined metrics in category 3 observed in the HTTP or XML traffic.

Custom metric (4)(avg)
The average value of user-defined metrics in category 4 observed in the HTTP or XML traffic.

Custom metric (4)(cnt)
The number of occurrences of user-defined metrics in category 4 observed in the HTTP or XML traffic.

Custom metric (4)(sum)
The sum of all values of user-defined metrics in category 4 observed in the HTTP or XML traffic.

Custom metric (5)(avg)
The average value of user-defined metrics in category 5 observed in the HTTP or XML traffic.

Custom metric (5)(cnt)
The number of occurrences of user-defined metrics in category 5 observed in the HTTP or XML traffic.

Custom metric (5)(sum)
The sum of all values of user-defined metrics in category 5 observed in the HTTP or XML traffic.

End-to-end RTT
The time it takes for a SYN packet to travel from the client to a monitored server and back again.

Errors
The total number of TCP and SSL errors.

Failed transactions
For Synthetic Monitoring transactions, this is the number of transactions for which the give-up threshold was exceeded. For RUM transactions, failed transactions are all transactions with a status other than -3 (aborted).

Failed transactions time breakdown (Active Monitoring)
The breakdown of trace time into client time, network time, and server time, calculated only for unavailable transactions.

Failures (total)
The total number of failures, that is, all Failures (transport) + all Failures (TCP) + all Failures (application).

Failures (transport)
The number of operations that failed due to problems in the transport layer. These include protocol errors, SSL alerts classified as failures, and incomplete responses selected to be classified as failures.

Fast transactions
The number of transactions for which the transaction time was below a predefined threshold value.

HTTP abort error
This error is reported when one of the URLs in a transaction detected in the monitored traffic does not match the transaction definition. This refers to any URL in a sequence of URLs, except the first one.

HTTP client errors (4xx)
The sum of all HTTP client errors (4xx). This includes four categories of (4xx) errors: by default, HTTP Unauthorized (401, 407) errors, HTTP Not Found (404) errors, custom client (4xx) errors, and Other HTTP (4xx) errors. The contents of the first three categories can be configured by users.

HTTP client errors - category 3 (default name)
The number of HTTP custom client errors (4xx). By default, there is no specific error type assigned here.

HTTP errors
The number of observed HTTP client errors (4xx) and server errors (5xx).

HTTP not found errors 404 (default name)
The number of observed custom HTTP 404 Not Found errors.

HTTP other client errors (4xx)
The number of HTTP other client errors (4xx). There are four categories of HTTP client errors (4xx), of which three can be configured by users. By default, the first category includes HTTP Unauthorized (401, 407) errors and the second category HTTP Not Found (404) errors. The third category contains no default error types and can be configured by a user. Finally, the group of Other HTTP (4xx) errors contains all errors that do not fall into any other client error category. The number is calculated based on the formula:

[HTTP errors 4xx] - [HTTP Not Found errors 404] - [HTTP Unauthorized errors 401, 407] - [HTTP errors configured by user]
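The subtraction rule for Other HTTP (4xx) errors maps directly onto code. A minimal sketch with illustrative counter names:

```python
def other_client_errors_4xx(all_4xx, not_found_404, unauthorized_401_407,
                            custom_category_3):
    """Other HTTP (4xx) errors, per the formula above:
    [HTTP errors 4xx] - [404] - [401, 407] - [user-configured category].
    """
    return all_4xx - not_found_404 - unauthorized_401_407 - custom_category_3
```

So with 100 observed 4xx responses, of which 60 were 404s, 10 were 401/407s, and 5 matched the user-configured category, 25 fall into the "other" bucket.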
HTTP other server errors (5xx)
The number of HTTP server errors (5xx) that do not fall into categories 1 or 2 of custom HTTP server errors (5xx).

HTTP server errors (5xx)
The number of all observed HTTP server errors (5xx).

HTTP server errors category 1 (default name)
The number of custom HTTP server errors (5xx), category 1. By default, there are no specific error types assigned to this category.

HTTP server errors category 2 (default name)
The number of custom HTTP server errors (5xx), category 2. By default, there are no specific error types assigned to this category.

HTTP timeout error
This type of error is reported if the time between the occurrence of consecutive URLs constituting a transaction exceeds the predefined timeout value.

HTTP unauthorized errors 401, 407 (default name)
The number of observed custom HTTP authentication-related errors. These include "HTTP 401 Unauthorized" and "HTTP 407 Proxy authentication required" errors. HTTP servers generate "401 Unauthorized" errors when anonymous clients are not authorized to view the requested content and must supply the authentication information requested in the WWW-Authenticate header. The 401 errors are similar to "403 Forbidden" errors, but are used when authentication is possible yet has failed or has not yet been provided. The 407 error is basically similar to 401, but it indicates that the client should first authenticate with a proxy server. The AMD reports these errors only if server-level authentication has been configured. Simple and basic user access policies are common on Web sites that do not store user-sensitive or business-critical information. Most commercial-grade HTTP-based applications, such as home banking applications or online shopping sites, rely on application-level rather than server-level authentication. Such applications are designed in such a way that even if the user authentication fails, the HTTP server usually sends the 200 OK response code and reports the authentication error in the page content. Therefore, the 401 Unauthorized and 407 Proxy authentication required error codes are quite rare in commercial environments.

Incomplete transaction error
This error is reported when a transaction was detected although the monitored traffic did not match the first steps of the transaction definition.

Max giveup time threshold
The maximum time after which a Synthetic Monitoring transaction is considered incomplete.
Min giveup time threshold
The minimum time after which a Synthetic Monitoring transaction is considered incomplete.

Network time
The time the network takes to deliver the request to the server and to deliver the resulting response back to the user. In other words, network time is the portion of the operation time that is spent on transferring data over the network.

Network time (failed transactions)
The network time for all failed transactions (transactions with a -2 status code). This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source.

Network time (requests)
The network time for all transaction requests (both requests that became successful transactions and requests that ended as transactions with errors). This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source.

No response error
The number of errors of the category No response. These errors are reported when a request is detected in the monitored traffic, but the actual operation following this request is not observed.

Number of results
Depending on the transaction source, the number of results may mean:
For a RUM transaction: the number of subcomponents of an error-free transaction. Note that this metric is recorded at the time when the monitored transaction is closed.
For a Private Enterprise transaction: the number of transaction requests.
For an RTMS transaction: the number of records returned by a particular RTMS timer.

Operation attributes
The number of operation attributes of all types (type 1 to 5), observed for the given software service.

Operation attributes (1)
The number of operation attributes of type 1, observed for the given software service.

Operation attributes (2)
The number of operation attributes of type 2, observed for the given software service.

Operation attributes (3)
The number of operation attributes of type 3, observed for the given software service.

Operation attributes (4)
The number of operation attributes of type 4, observed for the given software service.

Operation attributes (5)
The number of operation attributes of type 5, observed for the given software service.

Percentage of slow transactions
The percentage of transactions for which the transaction time was above a predefined threshold value.

RTT measurements
The number of RTT measurements.

Server bandwidth usage
The number of server bits per second.

Server bytes
The number of bytes sent by the servers. The number includes headers.

Server loss rate
The percentage of total packets sent from a server that were lost and needed to be retransmitted.

Server packets
The number of packets sent by the servers.

Server packets/sec
The number of packets per second sent by the servers.

Server RTT
The time it takes for a SYN packet to travel from the AMD to a monitored server and back again.

(Figure: TCP handshake timeline between Client, AMD, and Server, with time stamps T1 through T9; Server RTT is measured at the AMD between the SYN and SYN ACK packets.)

Server time
The time it took the server to produce a response for the given request.

Server time (failed transactions)
The server time for all failed transactions (transactions with a -2 status code). This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source.

Server time (requests)
The server time for all transaction requests (both requests that became successful transactions and requests that ended as transactions with errors). This metric is valid only for the 'Transactions (Synthetic Monitoring)' transaction source.

Slow transactions
The number of transactions for which the transaction time was above a predefined threshold value.

Total bandwidth usage
The number of all transmitted bits (client + server) per second.

Total bandwidth usage with breakdown
Total bandwidth usage (client + server) with a breakdown into client and server bandwidth usage.

Total bytes
The number of all transmitted bytes (client + server).

Total packets
The number of all transmitted packets (client + server).

Total packets/sec
The maximum value of total packets per second over the time covered by the report.

Transaction requests
The number of all transaction requests, both requests that became successful transactions and requests that ended as transactions with errors.

Transaction requests breakdown
For Synthetic Monitoring transactions, the breakdown of all transaction requests into requests that became slow, fast, and unavailable transactions. For RUM transactions, the breakdown of all transaction requests into requests that ended as slow, fast, aborted, and failed transactions. Transaction requests include both requests that became successful transactions and requests that ended as transactions with errors. For RTMS transactions, use the "Transactions breakdown" metric.
Transaction requests time breakdown
The breakdown of trace time into client time, network time, and server time, calculated for transactions coming from Synthetic Monitoring agents.

Transactions
The number of transactions.

Transactions/min
The number of transactions per minute.

Transactions/sec
The number of transactions per second.

Transactions breakdown
The transaction breakdown into the numbers of slow and fast transactions.

Transaction time
The time it took to complete a transaction.

Transaction time breakdown
The transaction time breakdown into client time, client response time, server time, network time, and application processing time.

Unavailability (total)
Unavailability, calculated as 100% - Availability (total).

Internetwork traffic data

This data view provides dimensions and metrics to analyze the internetwork type of traffic.

Internetwork traffic data dimensions

Agent
The name of the synthetic agent that loaded the HTTP pages, for example, Keynote, Gomez, or Mercury. The name of the agent is determined from the User-agent field of the HTTP request and/or from agent user names or IP addresses configured on the server.

Analysis type
Takes one of two values: Non-transaction or Transaction. This determines whether transactional (TCP-based) or non-transactional traffic is considered.

Analyzer
The name of the traffic analyzer. For more information, see Concept of Protocol Analyzers.

Analyzer group
The logical group of analyzers, based on the type of the analyzed traffic. For more information, see Concept of Protocol Analyzers.

Call initiator
The IP address of the party initiating the call.

Call manager IP
The IP address of the call manager used in a VoIP call.

Class of service
The name identifying a Type of Service value. The mapping of Class of Service names to different values of Type of Service is defined in the Central Analysis Server configuration.

Conference call id
The identifier of the VoIP conference call.

Data source
The name of the data source, in case you have configured a number of associated report servers to be used as data sources on the DMI screen.

Link alias
A custom name created by a user for a selected link.

Link group
An element of the links hierarchy tree. May contain separate links or other link groups.

Link group level
The hierarchy level on which a link group resides. The dimension differentiates only between two states: a link or group can either be on Level 1, or on any level other than Level 1.

Link monitor
The link information source (Network Monitoring Probe, Flow Collector, AMD).

Link name
A link name, as reported by the information source (Network Monitoring Probe, Flow Collector, AMD).

Link type
The type of a monitored link, for example Ethernet or Frame Relay.

Local area
Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks, or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.

Local host IP address
The IP address of the local host computer.

Local host name
The name of the local host computer.

Local region
Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks, or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.

Local site
Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks, or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Local site ID
If the site is an AS, its ID is the AS number; for all other site types it is the site name.

Local site optimized
Indicates whether the local site is selected as a WAN optimized site.

Physical link name
The first of two segments in a link name.

Protocol
The IP protocol name.

Remote area
The name of the area to which the remote host's site belongs.

Remote host IP address
The IP address of the remote host computer.

Remote host name
The name of the remote host computer.

Remote region
The name of the region to which the remote host's area belongs.

Remote site
The name of the remote site, that is, the site in which the remote host is placed.

Remote site ID
If a site is an AS, its ID is the AS number; for all other site types it is the site name.

Remote site optimized
Indicates whether the remote site is selected as a WAN optimized site.

Server port
The TCP port number on a server that hosts a software service.

Software service
The software service name, where by a software service we understand a service implemented by a specific piece of software, offered on a TCP or UDP port of one or more servers and identified by a particular TCP port number.

Time
The time stamp of the data presented on the report.

TOS-binary
A traffic identifier contained in an 8-bit field in the IP packet header. The contents of this field can be detected by the AMD and displayed in reports. The use of this field is software service specific; it is used by software services to denote special types of traffic.

TOS-decimal
A traffic identifier contained in an 8-bit field in the IP packet header. The contents of this field can be detected by the AMD and displayed in reports. The use of this field is software service specific; it is used by software services to denote special types of traffic.

Traffic type
The type of client traffic: real or synthetic, that is, generated by a synthetic agent.

VC link name
The second of two segments in a link name.

Internetwork traffic data metrics

% of bad delay calls
The percentage of VoIP calls with delay above the acceptable level.

% of bad jitter calls
The percentage of VoIP calls with jitter exceeding the acceptable level.

% of bad lost packets calls
The percentage of VoIP calls with loss rate above the acceptable level.

% of bad MOS calls
The percentage of VoIP calls with the Mean Opinion Score (MOS) rating below the acceptable threshold.

% of bad R-factor calls
The percentage of VoIP calls with the R-factor value below the acceptable value.

Affected hosts (availability)
The number of unique hosts that were affected by availability problems.

Affected hosts (network)
The number of unique hosts that experienced network performance problems.

Attempts
The number of monitoring intervals during which attempts were made to connect to a server. Note that this is counted separately for each server, client, and software service. Thus, if in a given monitoring interval there are attempts to connect to three different servers, the Attempts metric is incremented by three for that one monitoring interval. The actual value shown on the report is the sum total of all the attempts, for all the monitoring intervals, in the period covered by the report.

Availability (total)
The percentage of successful attempts, calculated using the following formula:

Availability (total) = 100% * (All attempts - All failures) / All attempts

where

All attempts = all failures + all successful operations + all standalone hits not classified as a failure + all aborts not classified as a failure
All failures = all failures (transport) + all failures (TCP) + all failures (application)

Bad delay calls
The number of VoIP calls with delay above the acceptable level.

Bad jitter calls
The number of VoIP calls with jitter exceeding the acceptable level.

Bad lost packets calls
The number of VoIP calls with loss rate above the acceptable level.

Bad MOS calls
The number of VoIP calls with the Mean Opinion Score (MOS) rating below the acceptable threshold.

Bad R-factor calls
The number of VoIP calls with the R-factor value below the acceptable value.
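The Attempts counting rule above (one attempt per server, client, and software service combination per monitoring interval) can be sketched as deduplication over tuples. The record format here is hypothetical, not the product's internal representation:

```python
def count_attempts(records):
    """records: iterable of (interval, server, client, software_service) tuples.

    Per the definition above, three different servers contacted in one
    monitoring interval contribute three attempts; repeated connections
    within the same combination do not add more.
    """
    return len({(interval, server, client, service)
                for interval, server, client, service in records})
```

For instance, two connections to srv1 and one to srv2 within interval 1, plus one to srv1 in interval 2, count as three attempts.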
Call attempts
The number of call attempts, including successful and failed ones.

Call duration
The average call duration.

Calls
The total number of VoIP calls. Note that for a selected software service, the number of calls as seen from the sites' perspective may differ from the number seen from the endpoints' perspective. This is because in one site there may be two users taking part in the same call.

Calls finished with termination error
The number of calls that finished with a termination error.

Calls not started due to remote peer
The number of calls that could not start due to a remote peer.

Calls with error during begin phase
The number of calls affected by errors occurring during the begin phase.

Client not responding errors
The number of errors of the category Client not responding. Errors of this category occur when the server closes the TCP session with a RESET packet after the client has been idle for too long. Such a situation happens when the server TCP/IP stack detects that the network connection to the client exists, but the client remains idle and does not respond. In such a case, the server closes the TCP session with a RESET packet. This may occur when the client has been silently disconnected from the network, for example due to a link failure, or when the client has crashed. Note that this error will not occur if the client session has ended gracefully, that is, by closing the client application.

Closed TCP connections
The total number of successful or failed TCP connections.

Connection establishment timeout errors
The number of TCP errors of the category Connection establishment timeout errors. This category of errors applies when there was no response from the server to the SYN packets transmitted by the client.

Connection refused errors
The number of TCP errors of the category Connection refused errors, also referred to as Session establishment errors. This category of errors applies when a server rejects a request from a client to open a TCP session. Such a situation usually happens when the server runs out of resources, either due to operating system kernel configuration or lack of memory.

Data samples
The number of lines in the traffic performance data packages received from the AMDs.
When clients are aggregated into so-called aggregation blocks, this is the number of software service-server-site triplets. This metric is not calculated in PVU mode.

Downstream bandwidth usage
Downstream traffic bandwidth per data resolution (hour/day/week/month).

Downstream bytes
The number of bytes transferred in the downstream direction (to the subscriber).

Downstream loss rate
The percentage of total packets sent in the downstream direction that were lost (due to network congestion, low router queue capacity, or other reasons) and needed to be retransmitted.

Downstream loss rate (AMD->local)
The percentage of total packets sent in the downstream direction that were lost - between the AMD and the local site - and needed to be retransmitted.

Downstream loss rate (remote->AMD)
The percentage of total packets sent in the downstream direction that were lost - between the remote site and the AMD - and needed to be retransmitted.

Downstream packets
The number of packets transmitted in the downstream direction.

Downstream packet size
The average size of the downstream packets (in bytes), including headers.

Downstream packets lost
The number of lost TCP data packets sent in the downstream direction, excluding the traffic control packets.

Downstream realized bandwidth
Realized bandwidth in the downstream direction, to a site.

Downstream TCP packets
The total number of TCP packets sent in the downstream direction, excluding the traffic control packets.

Downstream VoIP delay
VoIP weighted average networking delay in the downstream direction, that is, from a remote endpoint to the local VoIP endpoint. The delay for one call is calculated as follows:

Delay = Latency + LookAheadDelay + JitterBufferDelay + PLSize / BaseFrameSize * BaseFrameDuration

Latency in this formula is not the Delay from a Report Block of an RTCP packet. It is calculated on the basis of time stamps (measured by the probe) of RTCP packets and the Delays extracted from Report Blocks of RTCP packets. The other parameters, apart from PLSize, are codec specific. PLSize is the current RTP payload size.

Downstream VoIP Jitter
VoIP average jitter measured by the probe in the downstream traffic, that is, from a remote VoIP phone to the local endpoint. Jitter is a variation in voice data transit delay, in milliseconds. The jitter is a running mean of the deviation D:

Jitter = LastJitter * 15/16 + D * 1/16

In RTP packets, a creation time stamp is written. To calculate the deviation for the last packet, the creation time stamp of the previous packet (LastTRTP) is subtracted from the creation time stamp of the last packet (TRTP). This difference is compared with the difference between the arrival time stamp of the last packet (TM) and the arrival time stamp of the previous packet (LastTM):

D = absolute value ((TM - LastTM) - (TRTP - LastTRTP))

Downstream VoIP loss rate
The percentage of VoIP packets lost or discarded that needed to be retransmitted, measured for downstream traffic.

Downstream VoIP MOS
VoIP average Mean Opinion Score (MOS) measured in the downstream direction, that is, from a remote VoIP phone to the subscriber. It is within a range from 1 to 5. MOS is calculated based on statically configured parameters and dynamically measured call variables. The statically configured parameters are codec parameters and MOS constants. The dynamically measured call variables are latency, frame size, and loss rate. MOS may be unavailable if there is no RTCP traffic in the call.

Downstream VoIP R-factor
VoIP average R-factor value in the downstream direction, that is, from a remote VoIP phone to the subscriber. A value derived from metrics such as latency, jitter, and packet loss, the R-factor value helps quickly assess the quality of experience for VoIP calls on the network. Typical scores range from 50 (bad) to 90 (excellent).

Downstream VoIP RTCP Jitter
VoIP average jitter as reported by the Real Time Transport Control Protocol (RTCP) for the downstream traffic, that is, from a remote VoIP endpoint to the local endpoint. Jitter reflects a variation in voice data transit delay, in milliseconds. It is calculated in the same way as Downstream VoIP Jitter, as a running mean of the deviation D:

Jitter = LastJitter * 15/16 + D * 1/16
D = absolute value ((TM - LastTM) - (TRTP - LastTRTP))

End-to-end RTT
The time it takes for a SYN packet to travel from the client to a monitored server and back again.

Failures (total)
The total number of failures, that is, all Failures (transport) + all Failures (TCP) + all Failures (application).

Local ACK RTT
An RTT measurement performed during ACK packet transmission, from the local site side of the transaction.

Local ACK RTT measurements
This metric keeps track of how many RTT measurements of the local site's ACK packets were made. An ACK measurement is performed during ACK packet transmission, from either the server or the client side of the transaction.

Local RTT
The round-trip time measured for the local site.

Network performance
The percentage of total traffic that did not experience network-related problems (traffic in which the values of loss rate and RTT did not exceed configured thresholds).

Out of contract bytes
The number of bytes marked as Out-of-contract in the TOS field of the IP header. This setting can signify that the data was sent over and above a certain preset limit.
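The deviation D and the running mean described above correspond to the RFC 3550 interarrival jitter estimator, Jitter = LastJitter * 15/16 + D * 1/16. A minimal sketch, with time stamps in milliseconds and variable names following the text:

```python
def update_jitter(last_jitter, tm, last_tm, trtp, last_trtp):
    """One jitter update from a pair of consecutive RTP packets.

    tm / last_tm:     arrival time stamps of the last / previous packet
    trtp / last_trtp: RTP creation time stamps of the last / previous packet
    """
    # Deviation: difference between arrival spacing and creation spacing.
    d = abs((tm - last_tm) - (trtp - last_trtp))
    # Smoothed running mean with gain 1/16 (RFC 3550).
    return last_jitter * 15.0 / 16.0 + d / 16.0
```

The 1/16 gain means each new deviation moves the estimate only slightly, which damps measurement noise while still tracking sustained changes in transit delay.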
Out of contract packets
The number of packets marked as Out-of-contract in the TOS field in the IP header. This can signify that the data was sent over and above a certain preset limit.

Percentage of affected hosts (availability)
The percentage of hosts that were affected by availability-related problems.

Percentage of affected hosts (network)
The percentage of unique hosts that experienced network performance problems.

Remote ACK RTT
The time it takes for an ACK packet with no payload to travel from the remote site to the AMD and back again.

Remote ACK RTT measurements
This metric keeps track of how many remote ACK RTT measurements were made. An ACK measurement is performed during ACK packet transmission from either the server or the client side of the transaction.

Remote RTT
The round-trip time measured for the remote site.

Reports on in- and external traffic
This switch is a result of differences that might occur while reporting peer-to-peer traffic. It takes into account the internal and external traffic in relation to the site.

RTT measurements
The number of RTT measurements. An RTT measurement occurs during every TCP handshake, so it provides some insight into the number of attempted TCP sessions and the potential accuracy of the RTT measurements that are reported.

Server not responding errors
The number of Server Not Responding errors. This category of errors applies when the client closes the TCP session with a RESET packet after the server has failed to respond for too long.

Server session termination errors
The number of Server Session Termination errors. This category of errors applies when the server detects an error on the software service level and closes the TCP session with a RESET packet.

Successful attempts
The number of monitoring intervals during which successful attempts were made to connect to a server. Note that this is counted separately for each server: if in a given monitoring interval there are attempts to connect to three different servers, the Successful attempts metric is incremented by three for that one monitoring interval. Note also that even if TCP errors occur, as long as the connection is established during the given monitoring interval, that interval is counted as a success (for that server).

TCP errors
The total number of TCP errors. These errors may indicate server or application problems, so measuring them is critical to understanding the issues that may affect end-user experience.
AMDs measure and report on the following types of TCP errors:

Connection refused errors - The client attempts to open a TCP session with a server, which rejects the request: a SYN packet from the client is followed by a RESET packet from the server, with matching TCP sequence numbers. This error is typically caused by resource exhaustion on the server, which is unable to accept more concurrent TCP sessions. This may be either a configuration issue (too few resources allocated in the kernel) or a lack of memory. SYN flood attacks typically result in servers being unable to accept new connections.

Server session termination errors - The server unexpectedly terminates a connection that was successfully opened, sending a RESET packet to the client. Such an error originates at an application using the monitored TCP session. It does not necessarily mean application failure; usually it means that the application encountered a condition in which it decided to immediately terminate the session with the client, for example, because of an application security policy violation by the client.

Session aborts - The client unexpectedly terminates a connection that was successfully opened, sending a RESET packet to the server. These errors are inspected in the context of the client application and may or may not be reported. For example, a browser running HTTP may terminate the load of a GIF file if it is older than the one it had previously cached, and this is normal behavior. However, if all connections to the server are terminated because the user hits the STOP button, this is abnormal session termination and is reported as "Aborted operation" or "Stopped Page".

Client not responding errors (server timeout errors) - The server networking stack assumes that the network connection to the client exists, but the client remains idle and does not respond. In such a case, the server closes the TCP session with a RESET packet. This condition may occur when the client has been silently disconnected from the network, for example due to a link failure, or when the client has crashed. Note that this error will not occur if the client has ended the session gracefully, for example by closing the client application.

Server not responding errors (client timeout errors) - The client networking stack assumes that the network connection to the server exists, but the server remains idle and does not respond. In such a case, the client closes the TCP session with a RESET packet. This may occur either during the session setup phase (no response to the SYN packet) or during a normal data exchange, and may be the result of intermittent network problems between the client and the server.
When traffic is routed through asymmetric paths across the Internet, which is often the case, the path from the server to the client may be broken.

Total bandwidth usage
The number of all transmitted bits (client + server) per second.

Total bandwidth usage with breakdown
Total bandwidth usage with a breakdown.

Total bytes
The number of all transmitted bytes (client + server).

Two-way loss rate
The percentage of total packets (client and server) that were lost (due to network congestion, low router queue capacity, or other reasons) and needed to be retransmitted.

Two-way loss rate with breakdown
Two-way loss rate with a breakdown.

Unique applications
The number of applications detected, that is, the number of unique application names.

Unique local hosts
The number of hosts detected in the local site.

Unique remote hosts
The number of hosts detected in the remote site.
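The loss-rate metrics above all follow the same shape: retransmitted packets as a percentage of packets sent in the relevant direction, with the two-way rate combining both directions. A minimal sketch with illustrative counts:

```python
def loss_rate_percent(lost_packets, total_packets):
    # Lost (retransmitted) packets as a percentage of all packets sent;
    # guard against an empty interval.
    return 100.0 * lost_packets / total_packets if total_packets else 0.0

# Illustrative one-interval counts; the two-way rate pools both directions.
client_sent, client_lost = 800, 8      # upstream
server_sent, server_lost = 1200, 12    # downstream
two_way = loss_rate_percent(client_lost + server_lost,
                            client_sent + server_sent)
print(two_way)
```

Note that the pooled two-way rate is a packet-weighted combination, not the average of the per-direction percentages.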

Upstream bandwidth usage
The number of upstream bits per second.

Upstream bytes
The number of bytes transmitted in the upstream direction.

Upstream loss rate
The percentage of total packets sent in the upstream direction that were lost and needed to be retransmitted.

Upstream loss rate (AMD->remote)
The percentage of total packets sent in the upstream direction, between the AMD and the remote site, that were lost and needed to be retransmitted.

Upstream loss rate (local->AMD)
The percentage of total packets sent in the upstream direction, between the local site and the AMD, that were lost and needed to be retransmitted.

Upstream packets
The number of packets transmitted in the upstream direction.

Upstream packet size
The average size of the upstream packets (in bytes), including headers.

Upstream packets lost
The number of lost TCP data packets sent in the upstream direction, excluding the traffic control packets.

Upstream realized bandwidth
The realized bandwidth in the upstream direction, from a site, area, or region.

Upstream TCP packets
The total number of TCP packets sent in the upstream direction, excluding the traffic control packets.

Upstream VoIP delay
The VoIP average networking delay in the upstream direction, that is, from a local to a remote VoIP endpoint. The delay for one call is calculated as follows:

Delay = Latency + LookAheadDelay + JitterBufferDelay + PLSize / BaseFrameSize * BaseFrameDuration

Latency in this formula is not the delay from a Report Block of an RTCP packet; it is calculated on the basis of time stamps (measured by the probe) of RTCP packets and delays extracted from Report Blocks of RTCP packets. The other parameters, apart from PLSize, are codec specific. PLSize is the current RTP payload size.

Upstream VoIP Jitter
The average jitter measured by the probe in the upstream traffic, that is, from a local to a remote VoIP endpoint. Jitter reflects variation in voice data transit delay, in milliseconds, and is the smoothed mean value of the deviation:

Jitter = LastJitter * 15/16 + D * 1/16

Every RTP packet carries a creation time stamp. To calculate the deviation for the last packet, the creation time stamp written in the previous packet (LastTRTP) is subtracted from the creation time stamp of the last packet (TRTP), and the result is compared with the difference between the arrival time stamps of the last packet (TM) and the previous packet (LastTM):

D = |(TM - LastTM) - (TRTP - LastTRTP)|
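The per-call delay formula for Upstream VoIP delay can be worked through numerically. This sketch uses the document's own formula; the codec parameters and the input values (frame size, frame duration, buffer delay) are hypothetical, chosen only to make the arithmetic easy to follow:

```python
def voip_delay(latency, look_ahead_delay, jitter_buffer_delay,
               pl_size, base_frame_size, base_frame_duration):
    # Delay = Latency + LookAheadDelay + JitterBufferDelay
    #         + PLSize / BaseFrameSize * BaseFrameDuration
    # The final term converts the RTP payload size (bytes) into play-out
    # time: a payload of N base frames contributes N frame durations.
    return (latency + look_ahead_delay + jitter_buffer_delay
            + pl_size / base_frame_size * base_frame_duration)

# Hypothetical call: 50 ms latency, no look-ahead, 60 ms jitter buffer,
# 320-byte payload on a codec with 160-byte frames of 20 ms each
# (payload term: 320 / 160 * 20 = 40 ms).
d = voip_delay(50.0, 0.0, 60.0,
               pl_size=320, base_frame_size=160, base_frame_duration=20.0)
print(d)
```

With these inputs the payload term adds 40 ms, so the total one-way delay is 150 ms.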

Upstream VoIP loss rate
The percentage of VoIP packets lost or discarded that needed to be retransmitted, measured for upstream traffic.

Upstream VoIP MOS
The VoIP average Mean Opinion Score (MOS) measured in the upstream direction, that is, from a subscriber to a remote VoIP phone. It is within a range from 1 to 5. MOS is calculated based on statically configured parameters and dynamically measured call variables. The statically configured parameters are codec parameters and MOS constants; the dynamically measured call variables are latency, frame size, and loss rate. MOS may be unavailable if there is no RTCP traffic in the call.

Upstream VoIP R-factor
The VoIP average R-factor value in the upstream direction, that is, from a subscriber to a remote VoIP phone. Derived from metrics such as latency, jitter, and packet loss, the R-factor value helps quickly assess the quality of experience for VoIP calls on the network. Typical scores range from 50 (bad) to 90 (excellent).

Upstream VoIP RTCP Jitter
The VoIP average jitter as reported by Real Time Transport Protocol (RTCP) for the upstream traffic, that is, from a local VoIP endpoint to a remote one. Jitter reflects a variation in voice data transit delay, in milliseconds.

VoIP delay
The VoIP average networking delay, as reported by Real Time Transport Protocol (RTCP), measured for both downstream and upstream traffic.

VoIP Jitter
The VoIP average jitter measured by the probe, for both downstream and upstream traffic. Jitter is a variation in voice data transit delay, in milliseconds. In general, higher levels of jitter are more likely to occur on either slow or heavily congested links.

VoIP loss rate
The percentage of VoIP packets lost or discarded that needed to be retransmitted, measured for both upstream and downstream traffic.

VoIP MOS
The VoIP average Mean Opinion Score (MOS) rating of the call quality, for both downstream and upstream traffic.
VoIP R-factor
The VoIP average R-factor value, for both downstream and upstream traffic. It is a transmission quality rating, with a typical range of 50 to 90. An R-factor score is derived from multiple VoIP metrics, including latency, jitter, and loss.

VoIP RTCP Jitter
The VoIP average jitter as reported by Real Time Transport Protocol (RTCP), for both downstream and upstream traffic. Jitter is a variation in voice data transit delay, in milliseconds. Higher levels of jitter are more likely to occur on either slow or heavily congested links.

Zero window size events
The client sets a zero window size in the TCP header when it wants the other side to pause data transmission because it cannot keep up with the transmission speed. This indicates that the receiving machine is busy with other tasks.
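The relationship between R-factor and MOS can be illustrated with the standard ITU-T G.107 E-model conversion. This is an assumption for illustration only: the guide does not specify the product's exact MOS constants, which may differ from this textbook mapping:

```python
def mos_from_r(r):
    # Standard E-model (ITU-T G.107) mapping from R-factor to MOS.
    # Assumption: the product's configured MOS constants may differ.
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# The endpoints of the "typical range" quoted above:
print(mos_from_r(50))  # poor call quality
print(mos_from_r(90))  # excellent call quality
```

Under this mapping, an R-factor of 50 corresponds to a MOS of roughly 2.6 and an R-factor of 90 to roughly 4.3, consistent with the 1-to-5 MOS scale described for the VoIP MOS metrics.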

Network link data

This data view provides dimensions and metrics to analyze the monitored links.

Network link data dimensions

Agent
The name of the synthetic agent that loaded the HTTP pages, for example, Keynote, Gomez, or Mercury. The name of the agent is determined from the User-agent field of the HTTP request and/or from agent user names or IP addresses configured on the server.

Analysis type
Assumes two values: Non-transaction and Transaction. This determines whether transactional (TCP-based) or non-transactional traffic will be considered.

Analyzer
The name of the traffic analyzer. For more information, see Concept of Protocol Analyzers.

Analyzer group
The logical group of analyzers based on the type of the analyzed traffic. For more information, see Concept of Protocol Analyzers.

Business day
The classification of days as business or non-business, as defined in the Business Hours Configuration tool.

Business hour
The classification of hours as business and non-business, as defined in the Business Hours Configuration tool. Possible values are Business and Off-business.

Class of service
The name identifying a Type of Service value. The mapping of Class of Service names to different values of Type of Service is defined in the Central Analysis Server configuration.

Data source
The name of the data source, in case you have configured a number of associated report servers to be used as data sources on the DMI screen.

Day of the week
The textual representation of the day of the week.

Hour of the day
The numerical representation of the hour of the day, that is, numbers from 0 to 23.

Link alias
A custom name created by a user for a selected link.

Link group
An element of the links hierarchy tree. May contain separate links or other link groups.

Link group level
The hierarchy level on which a link group resides. The dimension differentiates only between two states: a link or group can either be on Level 1 or on any level other than Level 1.

Link monitor
The link information source (Network Monitoring Probe, Flow Collector, AMD).

Link name
A link name, as reported by the information source (Network Monitoring Probe, Flow Collector, AMD).

Link type
The type of a monitored link, for example Ethernet or Frame Relay.

Physical link name
The first of two segments in a link name.

Protocol
The IP protocol name.

Software service
The software service name. A software service is a service implemented by a specific piece of software, offered on a TCP or UDP port of one or more servers and identified by a particular TCP port number.

Time
The time stamp of the data presented on the report.

TOS-binary
A traffic identifier contained in an 8-bit field in the IP packet header. The contents of this field can be detected by the AMD and displayed in reports. The use of this field is software service specific; it is used by software services to denote special types of traffic.

TOS-decimal
A traffic identifier contained in an 8-bit field in the IP packet header. The contents of this field can be detected by the AMD and displayed in reports. The use of this field is software service specific; it is used by software services to denote special types of traffic.

Traffic type
The type of client traffic: real or synthetic, that is, generated by a synthetic agent.

VC link name
The second of two segments in a link name.

Network link data metrics

Attempts
The number of monitoring intervals during which attempts were made to connect to a server. Note that this is counted separately for each server, client, and software service: if in a given monitoring interval there are attempts to connect to three different servers, the Attempts metric is incremented by three for that one monitoring interval. The actual value shown on the report is the sum of all the attempts, for all the monitoring intervals, in the period covered by the report.

End-to-end RTT
The time it takes for a SYN packet to travel from the client to a monitored server and back again.
Far-end RTT
The round-trip time measured for the remote site.

Incoming bandwidth usage
Traffic bandwidth to hosts on the near-end side, per resolution (hour/day/week/month).

Incoming bandwidth usage (third party)
Traffic bandwidth to hosts on the near-end side, per resolution (hour/day/week/month), as reported for this particular link. Source: SNMP polling.

Incoming bytes
The number of bytes transferred in the downstream direction (to the subscriber).

Incoming bytes (third party)
The number of bytes transferred in the downstream direction, as reported for this particular link. Source: SNMP polling.

Incoming loss rate
The percentage of total packets sent in the direction of hosts on the near-end side that were lost and needed to be retransmitted.

Incoming packets
The number of packets transmitted in the direction of hosts on the far-end side of the interface.

Incoming packets/sec
The number of packets per second sent by the clients.

Incoming packets (third party)
The number of packets transmitted in the direction of hosts on the far-end side of the interface. Source: SNMP polling.

Incoming speed
The speed at which the data flows in the direction of hosts on the near side, reported by a monitoring device such as a Network Monitoring Probe.

Incoming utilization
The ratio showing how network bandwidth is actually used in relation to nominal link speed, for incoming traffic, that is, for traffic to the near-end hosts.

Incoming utilization (third party)
The ratio showing how network bandwidth is actually used on this particular link in relation to nominal link speed, for incoming traffic, that is, for traffic to the near-end hosts. Source: SNMP polling.

LAN-WAN byte ratio
The amount of compression performed, expressed as a percentage: 100% for pass-through; greater than 100% if there are more bytes on the WAN side, including both pass-through and optimized traffic; less than 100% if there are fewer bytes on the WAN side, including both pass-through and optimized traffic.

Near-end RTT
The round-trip time measured for the local site.
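The utilization metrics above all relate an observed volume of traffic to the nominal link speed. A minimal sketch of that ratio, with illustrative numbers (the function name and the 100 Mbit/s link are assumptions, not values from the guide):

```python
def utilization_percent(bytes_transferred, interval_seconds, link_speed_bps):
    # Average observed bit rate over the interval, as a percentage of
    # the nominal link speed.
    bits_per_second = bytes_transferred * 8 / interval_seconds
    return 100.0 * bits_per_second / link_speed_bps

# Illustrative: 450 MB received during a 5-minute monitoring interval
# on a link with a nominal speed of 100 Mbit/s.
u = utilization_percent(450e6, 300, 100e6)
print(u)
```

Here 450 MB over 300 seconds is an average of 12 Mbit/s, so the reported utilization would be 12% of the 100 Mbit/s link.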
Outgoing bandwidth usage
Traffic bandwidth to hosts on the far-end side, per resolution (hour/day/week/month).

Outgoing bandwidth usage (third party)
Traffic bandwidth to hosts on the far-end side, per resolution (hour/day/week/month), as reported for this particular link. Source: SNMP polling.

Outgoing bytes
The total volume of data transferred from hosts on the near-end side to hosts on the far-end side.

Outgoing bytes (third party)
The total volume of data transferred from hosts on the near-end side to hosts on the far-end side, as reported for this particular link. Source: SNMP polling.

Outgoing loss rate
The percentage of total packets sent in the direction of hosts on the far-end side that were lost and needed to be retransmitted.

Outgoing packets
The number of packets transmitted in the direction of hosts on the near-end side of the interface.

Outgoing packets/sec
The number of packets per second sent by the servers.

Outgoing packets (third party)
The number of packets transmitted in the direction of hosts on the near-end side of the interface. Source: SNMP polling.

Outgoing speed
The speed at which the data flows in the direction of hosts on the far side, reported by a monitoring device such as a Network Monitoring Probe.

Outgoing utilization
The ratio showing how network bandwidth is used in relation to nominal link speed, for outgoing traffic, that is, for traffic to hosts on the far-end side of the interface or segment.

Outgoing utilization (third party)
The ratio showing how network bandwidth is used on this particular link in relation to nominal link speed, for outgoing traffic, that is, for traffic to hosts on the far-end side of the interface. Source: SNMP polling.

Percentage of optimized traffic (bytes)
Indicates the traffic distribution between two separate branches: optimized traffic and passed-through traffic. The higher the value, the more bytes are optimized. Low values may indicate poorly configured optimization or optimization device overload.

RTT measurements
The number of RTT measurements. An RTT measurement occurs during every TCP handshake, so it provides some insight into the number of attempted TCP sessions and the potential accuracy of the RTT measurements that are reported.
Successful attempts
The number of monitoring intervals during which successful attempts were made to connect to a server. Note that this is counted separately for each server: if in a given monitoring interval there are attempts to connect to three different servers, the Successful attempts metric is incremented by three for that one monitoring interval. Note also that even if TCP errors occur, as long as the connection is established during the given monitoring interval, that interval is counted as a success (for that server).

Total bandwidth usage
The number of all transmitted bits (client + server) per second.
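The counting rules for Attempts and Successful attempts can be made concrete with a small example. The connection log and field names below are hypothetical, invented only to illustrate the per-interval, per-server semantics described above:

```python
from collections import namedtuple

# Hypothetical connection log: one record per TCP connection attempt.
Conn = namedtuple("Conn", "interval server established")
log = [
    Conn(1, "srv-a", True), Conn(1, "srv-a", False),  # errors don't undo success
    Conn(1, "srv-b", True),
    Conn(1, "srv-c", False),
    Conn(2, "srv-a", False),
]

# Attempts counts distinct (interval, server) pairs with any attempt;
# Successful attempts counts pairs where at least one connection succeeded.
attempted = {(c.interval, c.server) for c in log}
succeeded = {(c.interval, c.server) for c in log if c.established}
attempts, successful_attempts = len(attempted), len(succeeded)
print(attempts, successful_attempts)
```

In interval 1, three servers were attempted (incrementing Attempts by three) but only two were reached, and srv-a counts as a success despite one failed connection in the same interval.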

Total bandwidth usage with breakdown
Total bandwidth usage with a breakdown.

Total bytes
The number of all transmitted bytes (client + server).

Total bytes (third party)
The number of all transmitted bytes, as reported for this particular link. Source: SNMP polling.

Total bytes compression
The data optimization observed, expressed as a byte reduction and a percentage, where a lower byte count on the WAN side means a higher reduction: 0% for pass-through; less than 0% if more bytes were observed on the WAN side, including both pass-through and optimized traffic; greater than 0% if fewer bytes were observed on the WAN side, including both pass-through and optimized traffic. This metric should not exceed 100%.

Total bytes on LAN side
The sum of bytes (client's and server's) observed on the LAN side before network traffic is directed into the WAN Optimization Controller (WOC).

Total bytes on WAN side
The sum of bytes (client's and server's) observed on the WAN side after network traffic leaves the WAN Optimization Controller (WOC), including bytes that have been passed through and those that have been marked as optimized.

Total packets
The number of all transmitted packets (client + server).

Total packets/sec
The maximum value of Total packets/sec over the time covered by the report.

Total packets (third party)
The number of packets transmitted in the direction of hosts on the far-end side and hosts on the near-end side of the interface. Source: SNMP polling.

Total utilization
The ratio showing how network bandwidth is actually used in relation to nominal link speed, for both incoming and outgoing traffic, that is, for traffic to the far-end hosts and back to the near-end side.

Two-way loss rate
The percentage of total packets (client and server) that were lost (due to network congestion, low router queue capacity, or other reasons) and needed to be retransmitted.
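The two WAN-optimization percentages defined above (LAN-WAN byte ratio and Total bytes compression) are complementary views of the same LAN-side and WAN-side byte counts. A minimal sketch with illustrative WOC byte counts:

```python
def lan_wan_byte_ratio(wan_bytes, lan_bytes):
    # 100% for pass-through; above 100% when the WAN side carries more bytes.
    return 100.0 * wan_bytes / lan_bytes

def total_bytes_compression(wan_bytes, lan_bytes):
    # Byte reduction: 0% for pass-through, negative when the WAN side grew,
    # positive when fewer bytes were observed on the WAN side.
    return 100.0 * (lan_bytes - wan_bytes) / lan_bytes

# Illustrative interval: 1,000,000 bytes on the LAN side shrink to
# 400,000 bytes on the WAN side after the WAN Optimization Controller.
lan_side, wan_side = 1_000_000, 400_000
print(lan_wan_byte_ratio(wan_side, lan_side))       # ratio below 100%
print(total_bytes_compression(wan_side, lan_side))  # positive reduction
```

For these numbers the ratio is 40% and the compression is 60%; the two values always sum to 100% of the LAN-side bytes, which is why the ratio reads above 100% exactly when the compression reads below 0%.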
Citrix/WTS hardware data

This data view provides dimensions and metrics helpful in Citrix statistics analysis.

Citrix/WTS hardware data dimensions

Data source
The name of the data source, in case you have configured a number of associated report servers to be used as data sources on the DMI screen.

Server area
Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and optionally on clients' BGP Autonomous System names, CIDR blocks, or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.

Server city
Geographical data about the server site.

Server country
Geographical data about the server site.

Server geographical region
Geographical data about the server site.

Server IP address
The IP address of the server.

Server name
The name of the server, resolved by a DNS server.

Server region
Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.

Server site
Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' Autonomous System names. Sites are the smallest logical structures comprising clients and servers. Areas are composed of sites, and regions are composed of areas.

Server site description
An optional description of the server site.

Server site ID
In cases when sites are ASes, Server site ID contains the AS number, which is also given in Server ASN. For manual sites, Server site ID is identical to Server site and contains the site name as defined in your site configuration. Sites based on CIDR blocks or subnets are identified by IP addresses.
Software service
The software service name. A software service is a service implemented by a specific piece of software, offered on a TCP or UDP port of one or more servers and identified by a particular TCP port number.

Time
The time stamp of the data presented on the report.

Citrix/WTS hardware data metrics

Average CPU utilization
The percentage of elapsed time that the processor spent executing non-idle threads. This counter is the primary indicator of processor activity, and shows the average percentage of busy time.

Average disk utilization
The percentage of elapsed time that disk storage was busy servicing read or write requests. This counter shows the average percentage of busy time.

Average maximum CPU utilization
The percentage of elapsed time that the processor spent executing non-idle threads. This counter is the primary indicator of processor activity, and shows the average maximum percentage of busy time.

Average maximum disk utilization
The percentage of elapsed time that disk storage was busy servicing read or write requests. This counter shows the average maximum percentage of busy time.

Average maximum memory utilization
The percentage of used physical memory. This counter shows the average maximum percentage of used RAM.

Average maximum number of active sessions
The average maximum number of active Terminal Services sessions.

Average maximum number of open sessions
The average maximum number of open Terminal Services sessions.

Average memory utilization
The average percentage of used physical memory (RAM).

Average minimum CPU utilization
The percentage of elapsed time that the processor spent executing non-idle threads. This counter is the primary indicator of processor activity, and shows the average minimum percentage of busy time.

Average minimum disk utilization
The percentage of elapsed time that disk storage was busy servicing read or write requests. This counter shows the average minimum percentage of busy time.

Average minimum memory utilization
The percentage of used physical memory. This counter shows the average minimum percentage of used RAM.

Average minimum number of active sessions
The average minimum number of active Terminal Services sessions.
Average minimum number of open sessions
The average minimum number of open Terminal Services sessions.

Average number of active sessions
The average number of active Windows Terminal Services sessions.

Average number of open sessions
The average number of open Windows Terminal Services sessions.

Maximum CPU utilization
The percentage of elapsed time that the processor spent executing non-idle threads. This counter is the primary indicator of processor activity, and shows the maximum percentage of busy time.

Maximum disk utilization
The percentage of elapsed time that disk storage was busy servicing read or write requests. This counter shows the maximum percentage of busy time.

Maximum memory utilization
The percentage of used physical memory. This counter shows the maximum percentage of used RAM.

Maximum number of active sessions
The maximum number of active Terminal Services sessions.

Maximum number of open sessions
The maximum number of open Terminal Services sessions.

Minimum CPU utilization
The percentage of elapsed time that the processor spent executing non-idle threads. This counter is the primary indicator of processor activity, and shows the minimum percentage of busy time.

Minimum disk utilization
The percentage of elapsed time that disk storage was busy servicing read or write requests. This counter shows the minimum percentage of busy time.

Minimum memory utilization
The percentage of used physical memory. This counter shows the minimum percentage of used RAM.

Minimum number of active sessions
The minimum number of active Terminal Services sessions.

Minimum number of open sessions
The minimum number of open Terminal Services sessions.

Low-significance traffic

The Low-significance traffic data view shows information on traffic that is not relevant to regular performance analysis and reporting. This view should be used to see destination IP addresses for aggregated traffic and any breakdown of metrics by destination address. Traffic considered in this data view includes, for example, data concerning virus activity and network scanning, which are usually attempts to connect to non-existent hosts or services.
It does not make sense to include such traffic in performance measurements, so this data is aggregated. Aggregating it does not affect the typical roles of the CAS, which still cover even the inter-location traffic flow affected by unwanted packets, but it simplifies and shortens CAS tasks. The only important determinations that can be made from such data are the identification of infected workstations or misconfigured applications. Other traffic that is aggregated is non-TCP and non-UDP traffic. As with virus and port-scanning traffic, the CAS records only the sender's IP address and aggregates destinations to locations for non-TCP and non-UDP traffic.

Low-significance traffic dimensions

Agent
The name of the synthetic agent that loaded the HTTP pages, for example, Keynote, Gomez, or Mercury. The name of the agent is determined from the User-agent field of the HTTP request and/or from agent user names or IP addresses configured on the server.

Analysis type
Assumes two values: Non-transaction and Transaction. This determines whether transactional (TCP-based) or non-transactional traffic will be considered.

Analyzer
The name of the traffic analyzer. For more information, see Concept of Protocol Analyzers.

Analyzer group
The logical group of analyzers based on the type of the analyzed traffic. For more information, see Concept of Protocol Analyzers.

Class of service
The name identifying a Type of Service value. The mapping of Class of Service names to different values of Type of Service is defined in the Central Analysis Server configuration.

Data source
The name of the data source, in case you have configured a number of associated report servers to be used as data sources on the DMI screen.

Protocol
The IP protocol name.

Script name
The name of the simple parser script.

Server IP address
The IP address of the server.

Server name
The name of the server, resolved by a DNS server.

Server port
The TCP port number on a server that hosts a software service.

Software service
The software service name. A software service is a service implemented by a specific piece of software, offered on a TCP or UDP port of one or more servers and identified by a particular TCP port number.

Software service type
The type of the software service: autodiscovered or user-defined.

Time
The time stamp of the data presented on the report.

TOS-binary
A traffic identifier contained in an 8-bit field in the IP packet header. The contents of this field can be detected by the AMD and displayed in reports.
The use of this field is software service specific; it is used by software services to denote special types of traffic.
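The TOS-binary and TOS-decimal dimensions are two renderings of the same 8-bit byte, which a small conversion makes concrete. The DiffServ remark in the comment is an assumption for context, not something this guide discusses:

```python
def tos_views(tos_decimal):
    # Render the 8-bit TOS byte the two ways the reports show it.
    if not 0 <= tos_decimal <= 255:
        raise ValueError("TOS is an 8-bit field")
    tos_binary = format(tos_decimal, "08b")
    # Assumption: under the DiffServ reinterpretation of this byte (RFC 2474,
    # not covered in this guide), the upper six bits form the DSCP.
    dscp = tos_decimal >> 2
    return tos_binary, dscp

# TOS decimal 184 renders as binary 10111000.
binary, dscp = tos_views(184)
print(binary, dscp)
```

So a report showing TOS-decimal 184 and a report showing TOS-binary 10111000 are describing the same traffic marking.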

TOS-decimal
A traffic identifier contained in an 8-bit field in the IP packet header. The contents of this field can be detected by the AMD and displayed in reports. The use of this field is software service specific; it is used by software services to denote special types of traffic.

Traffic type
The type of client traffic: real or synthetic, that is, generated by a synthetic agent.

Low-significance traffic metrics

Auto-discovery flags
If a software service contains any autodiscovered traffic, the metric displays a green tick sign.

Client bytes
The number of bytes sent by the clients. Note that this includes headers.

Server bytes
The number of bytes sent by servers. The number includes headers.

Total bytes
The number of all transmitted bytes (client + server).

Synthetic Backbone

Synthetic Monitoring page data

Synthetic Monitoring page data dimensions

Account
The name of the Dynatrace Performance Network account.

Application
A universal container that can accommodate transactions.

Backbone node
The name of the Dynatrace Performance Network node.

Business day
The classification of days as business or non-business, as defined in the Business Hours Configuration tool.

Business hour
The classification of hours as business and non-business, as defined in the Business Hours Configuration tool. Possible values are Business and Off-business.

Client area
Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in the case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks, or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.

Client city Geographical data about the client site, or the Backbone node in the case of Synthetic Backbone reports.
Client country Geographical data about the client country, or the Backbone node country in the case of Synthetic Backbone reports.
Client geographical region Geographical data about the client region, or the Backbone node region in the case of Synthetic Backbone reports.
Client group The client's group, or a group of Backbone nodes in the case of Synthetic Backbone reports, as manually defined in the Central Analysis Server.
Client IP address The IP address of the client, or the Backbone node in the case of Internet synthetic reports.
Client region Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in the case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client site Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in the case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions, clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client site description The optional description of the client site, or a Backbone node in the case of Synthetic Backbone reports.
Client site ID When sites are ASes, Client Site ID contains the AS number, which is also given in Client ASN (or Backbone node ASN in the case of Synthetic Backbone reports). For manual sites, Client Site ID is identical to Client site, and contains the site name as defined in your site configuration. Sites based on CIDR blocks or subnets are identified by IP addresses.
Client site type One of the site types: AS, Active, CIDR Block, Default, External, Manual, Network or Predefined. An External site is defined by a user in external configuration files. A Manual site is defined by a user by means of the configuration interface on the report server. Predefined sites are based on a mapping contained in a special configuration file.
Client site UDL A dimension designed to filter only the User Defined Links. By default it is set to true (Yes) for the WAN Optimization Sites report.

Client site WAN Optimized Link Indicates whether a site to which the client belongs is selected as both a UDL and a WAN optimized link.
Data source The name of the data source, used when you have configured a number of associated report servers as data sources on the DMI screen.
Day of the week The textual representation of the day of the week.
Hour of the day The numerical representation of the hour of the day, that is, numbers from 0 to 23.
Page The page name accessed within a test.
Page sequence number Sequence number of the page.
Test Name of the Dynatrace Performance Network test.
Test time The exact time of the test execution.
Time The time stamp of the data presented on the report.
Transaction A universal container that can accommodate operations. This dimension refers only to transactions without errors.
Transaction step The step as configured in a transaction definition. Step configuration is built on DCRUM data using operations, tasks, modules or services. Steps are contained within transactions and carry the entire transaction configuration.
Transaction step sequence number The sequence number of a step is used for presentation purposes. It marks the order of a particular step in a transaction configuration. You can order steps within each transaction if such an ordering makes sense for the overall monitored application paradigm. The transaction step sequence does not affect data aggregation.

Synthetic Monitoring page data metrics

1st byte time Time between the completion of the TCP connection with the destination server that will provide the displayed page's HTML, graphic, or other component and the reception of the first packet (also known as first byte) for that object. Overloaded web servers often have a long first byte time.
Application performance For transactional protocols, this is the percentage of software service operations completed in a time shorter than the performance threshold. For SMTP and transactionless TCP-based protocols, this is the percentage of monitoring intervals in which user wait time per kB of data was shorter than the threshold value.

Byte limit exceeded errors The number of byte limit errors. A byte limit error occurs when the attempt to download a page or object was blocked because the reported size of the object was greater than the current limit.
Bytes (average) The average number of bytes downloaded for the pages per test.
Bytes (sum) The total number of bytes downloaded for pages.
Connections (average) The average number of connections established for pages per test.
Connections (sum) Total number of connections established for pages.
Connect time The time (in seconds) that it takes to connect to a Web server across a network. After obtaining the target IP address by using the DNS lookup, the Dynatrace Performance Network Agent establishes a TCP connection with the device at that IP address. TCP connections are initiated by the agent transmitting a special "SYN" packet and then receiving an "ACK" packet from the server. The elapsed time between transmitting the SYN to the server and receiving the ACK response is the initial connection time.
Content match errors The number of errors caused by the content not matching the condition defined for the page or object.
Content time Time required to receive the content of a page or page component, starting with the receipt of the first content and ending with the last packet received.
DNS time The time it takes to translate the host name into its IP address. The Dynatrace Performance Network Agent performs this translation by using the Internet's standard Domain Name Service (DNS).
Hosts (average) The average number of hosts associated with the page.
Hosts (sum) Total number of hosts associated with the page(s).
HTTP client errors The number of HTTP client errors.
HTTP server errors The number of HTTP server errors.
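HTTP client errors correspond to 4xx responses and HTTP server errors to 5xx responses, so these counts, like the per-class object counts that follow, can be derived by bucketing response codes by their hundreds class. A hypothetical sketch, not the product's actual implementation:

```python
from collections import Counter

def count_by_status_class(status_codes):
    """Group HTTP response codes into their hundreds classes (200, 300, 400, 500)."""
    return Counter((code // 100) * 100 for code in status_codes)

# Example: three successful objects, one redirect, one client error, one server error
counts = count_by_status_class([200, 200, 204, 301, 404, 503])
# counts[400] gives the HTTP client errors, counts[500] the HTTP server errors
```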
Number of 200 objects The total number of page objects with a 200-class return code.
Number of 300 objects The total number of page objects with a 300-class return code.
Number of 400 objects The total number of page objects with a 400-class return code.

Number of 500 objects The total number of page objects with a 500-class return code.
Objects (average) The average number of objects downloaded per page.
Objects (sum) The total number of downloaded page objects.
Objects breakdown The breakdown into successful and failing objects.
Objects with network errors The total number of page objects with network related errors.
Objects with server errors The total number of page objects with server related errors.
Page availability The percentage of successful pages vs all pages.
Page error breakdown The breakdown of page errors, that is, socket time-out errors, byte limit exceeded errors, user script errors, and content match failures.
Page response time The average time it took the page to produce a response for the given request.
Pages executed The number of pages tested.
Pages executed (failed) The number of tested pages reported as failed.
Pages executed (fast) The number of tested pages reported as fast.
Pages executed (slow) The number of tested pages reported as slow.
Pages executed breakdown Breakdown into failed, slow and fast tested pages.
Page time breakdown The breakdown of the components of page load (that is, DNS time, connect time, SSL time, 1st byte time, and content time) for all objects associated with the page.
Percentage of pages executed (slow) The percentage of tested pages reported as slow.
Processing time The average client-side processing time.
Slow pages threshold (max) The threshold above which the page is reported as failed.
Slow pages threshold (min) The threshold below which the page is reported as fast.
Socket time-out errors The number of errors caused by no response to the attempts to open a TCP connection to the server.
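One plausible reading of the two slow-page thresholds is that they bound the "slow" band: pages faster than the minimum threshold are fast, pages slower than the maximum are failed, and everything in between is slow. A hypothetical classifier under that assumption (not confirmed by the product documentation):

```python
def classify_page(response_time: float, slow_min: float, slow_max: float) -> str:
    """Classify a page as fast, slow, or failed using the two slow-page thresholds.

    Assumption: times below the minimum threshold count as fast, times above
    the maximum threshold count as failed, and everything in between is slow.
    """
    if response_time < slow_min:
        return "fast"
    if response_time > slow_max:
        return "failed"
    return "slow"
```

With thresholds of 2 s and 8 s, a 1 s page is fast, a 5 s page is slow, and a 9 s page is failed.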

SSL time Time it takes to establish a Secure Socket Layer (SSL) connection and exchange SSL keys.
User script errors The number of user script errors. A user script error occurs when JavaScript on the page did not or was not able to execute properly.

Synthetic Monitoring test data

Synthetic Monitoring test data dimensions

Account Name of the Dynatrace Performance Network account.
Backbone node Name of the Dynatrace Performance Network node.
Business day The classification of days as business or non-business, as defined in the Business Hours Configuration tool.
Business hour The classification of hours as business and non-business, as defined in the Business Hours Configuration tool. Possible values are Business and Off-business.
Client area Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in the case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client city Geographical data about the client site, or the Backbone node in the case of Synthetic Backbone reports.
Client country Geographical data about the client country, or the Backbone node country in the case of Synthetic Backbone reports.
Client geographical region Geographical data about the client region, or the Backbone node region in the case of Synthetic Backbone reports.
Client group The client's group, or a group of Backbone nodes in the case of Synthetic Backbone reports, as manually defined in the Central Analysis Server.
Client IP address The IP address of the client, or the Backbone node in the case of Internet synthetic reports.
Client region Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in the case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client site Sites, areas, and regions define a logical grouping of clients and servers, or Backbone nodes in the case of Synthetic Backbone reports, into a hierarchy. They are based on manual definitions, clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client site description The optional description of the client site, or a Backbone node in the case of Synthetic Backbone reports.
Client site ID When sites are ASes, Client Site ID contains the AS number, which is also given in Client ASN (or Backbone node ASN in the case of Synthetic Backbone reports). For manual sites, Client Site ID is identical to Client site, and contains the site name as defined in your site configuration. Sites based on CIDR blocks or subnets are identified by IP addresses.
Client site type One of the site types: AS, Active, CIDR Block, Default, External, Manual, Network or Predefined. An External site is defined by a user in external configuration files. A Manual site is defined by a user by means of the configuration interface on the report server. Predefined sites are based on a mapping contained in a special configuration file.
Client site UDL A dimension designed to filter only the User Defined Links. By default it is set to true (Yes) for the WAN Optimization Sites report.
Client site WAN Optimized Link Indicates whether a site to which the client belongs is selected as both a UDL and a WAN optimized link.
Data source The name of the data source, used when you have configured a number of associated report servers as data sources on the DMI screen.
Day of the week The textual representation of the day of the week.
Hour of the day The numerical representation of the hour of the day, that is, numbers from 0 to 23.
Test Name of the Dynatrace Performance Network test.
Test time The exact time of the test execution.
Time The time stamp of the data presented on the report.

Synthetic Monitoring test data metrics

Bytes downloaded The total number of downloaded bytes.
Bytes downloaded per test The average number of bytes downloaded per test.
Failing objects The number of failing page objects within the tests.
Failing pages The number of unsuccessful attempts to access a page within a test.
Number of objects The number of tested page objects, including successful and failing objects.
Number of pages The number of tested pages.
Number of tests The number of test executions.
Objects breakdown The breakdown into successful and failed page objects, accompanied by the number of all objects.
Pages breakdown The breakdown into successful and failed pages, accompanied by the number of all pages.
Pages tested per test The average number of tested pages, including successful and failed attempts, per test.
Percentage of failed objects The percentage of failed page objects vs all objects.
Percentage of failed pages The percentage of failed pages vs all pages.
Percentage of successful objects The percentage of successfully downloaded page objects vs all objects.
Percentage of successful pages The percentage of successfully accessed pages vs all pages.
Response time Total time required to download a complete web page and its objects. For full object tests, response time is the time, measured in seconds, from when a user clicks a link to the time the Web page is fully downloaded. For HTML-only tests, it is the time, measured in seconds, from when a user clicks the link to the time the root object is downloaded. Response time encompasses the collection of all objects that make up a page, including third-party content on off-site servers, graphics, frames, redirections, and so on. For an operation flow (a series of interactive operations on several web pages), the response time is measured from the start of the operation flow (the moment a user clicks a link) to the end of the operation flow (when the content from the last web page is downloaded).
Successful objects The number of successfully tested page objects.
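The percentage metrics above are straightforward ratios of the corresponding counts. A minimal sketch (the helper name is illustrative, not from the product):

```python
def percentage(part: int, total: int) -> float:
    """Return part as a percentage of total, 0.0 when there is no traffic."""
    return 100.0 * part / total if total else 0.0

# Percentage of failed pages vs all pages, and its complement
failed_pages, all_pages = 3, 60
assert abs(percentage(failed_pages, all_pages) - 5.0) < 1e-9
assert abs(percentage(all_pages - failed_pages, all_pages) - 95.0) < 1e-9
```

The zero-total guard matters in practice: an interval with no tests should report 0 rather than raise a division error.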

Successful pages The number of successful attempts to access a page within a test.
Test availability The percentage of successful tests vs all tests.

RUM Browser

RUM Browser data

RUM Browser data dimensions

Application A universal container that can accommodate transactions, as reported by the Application Monitoring server. An application reported by the CAS can comprise one or more Application Monitoring applications, as set in the Business Units configuration.
Application Monitoring Application The name of the Application Monitoring application used to perform a user action.
Application Monitoring Server ID Identifier of the Application Monitoring server.
Application Monitoring System Profile Name of the Application Monitoring System profile used.
Browser/client version The version of the Web browser. Reported only for the Desktop client type.
Browser name The name of the Web browser.
Browser OS The name of the OS hosting the Web browser. Example: Windows 5.0.
Business day The classification of days as business or non-business, as defined in the Business Hours Configuration tool.
Business hour The classification of hours as business and non-business, as defined in the Business Hours Configuration tool. Possible values are Business and Off-business.
Client area Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client city The city of the client.
Client country The country of the client.
Client geographical region The region of the client.

Client IP address The IP address of the client.
Client name User Name / Visit Tag as reported by the Application Monitoring server.
Client region Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client site Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions, clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client type Type of the client: desktop, mobile, synthetic or SAP GUI client.
Data source The name of the data source, used when you have configured a number of associated report servers as data sources on the DMI screen.
Day of the week The textual representation of the day of the week.
Destination URL Target URL as reported by the Application Monitoring server and normalized by the CAS.
Device The name of the manufacturer and the mobile device used to perform a user action, for example Samsung Galaxy S Plus. Reported only for the Mobile client type.
Hour of the day The numerical representation of the hour of the day, that is, numbers from 0 to 23.
Page title Page title as reported by the Application Monitoring server and normalized by the CAS.
Record type Type of the RUM Browser record. UEM is available for web and mobile native applications. This dimension specifies which type of application, web page or mobile, was used to execute the operation.
Source URL The source URL as reported by the Application Monitoring server and normalized by the CAS.
Time The time stamp of the data presented on the report.
Transaction A universal container that can accommodate operations. This dimension refers only to transactions without errors.

Transaction step The step as configured in a transaction definition. Step configuration is built on DCRUM data using operations, tasks, modules or services. Steps are contained within transactions and carry the entire transaction configuration.
Transaction step sequence number The sequence number of a step is used for presentation purposes. It marks the order of a particular step in a transaction configuration. You can order steps within each transaction if such an ordering makes sense for the overall monitored application paradigm. The transaction step sequence does not affect data aggregation.
User action name The name of the user action, for example "Loading of a Page" or "Click on Book Now", as reported by the Application Monitoring server and normalized by the CAS.

RUM Browser data metrics

Affected visits The number of visits affected by errors.
Affected visits (availability) The number of visits affected by availability related errors.
Affected visits (performance) The number of RUM Browser visits that experienced application performance problems. A problem is noted if at least one operation is completed in a time longer than the performance threshold.
Affected visits breakdown The breakdown of affected visits into visits affected by performance and availability related errors.
Availability (total) The percentage of user actions that did not experience availability related problems.
CDN time The time it takes to load content from external domains marked as CDN (Content Delivery Network) in the Application Monitoring server configuration.
Client time The client portion of the time it took to complete the user action.
DNS lookup time Time spent resolving domain names.
Document fetch done In addition to everything until "Request start", this includes the amount of time spent until the document (HTML) was completely downloaded from the HTTP server.
Document interactive Represents the amount of time spent until the document became interactive. This includes all steps done for "Document fetch done", including "Request start".
Document load time The total time to load the document. This includes all steps done for "Document interactive", including "Request start" and "Document fetch done".

Document request time Time spent waiting for the first byte of the document response.
Document response time Time spent downloading the document response.
Download size The size of the downloaded content.
Failures (total) The number of failed user action attempts.
Hits within user actions The number of hits performed to complete the user action.
Navigation timing steps The user action time broken into the navigation timing steps:
- DNS: Time spent resolving domain names.
- Connect: Time spent establishing a socket connection from the browser to the web server.
- SSL: Time spent establishing a secure socket connection from the browser to the web server.
- URL redirection: Time spent following HTTP redirects.
- Document request: Time spent waiting for the first byte of the document response.
- Document response: Time spent downloading the document response.
- Document processing: Time between the response being delivered and the OnLoad event.
Network latency The network latency between the browser and the Web or application server, calculated by the RUM Browser JavaScript Agent (JavaScript Agent signal round trip).
Network time The network portion of the time it took to complete the user action.
Not-affected visits The number of visits without any errors or performance degradations.
Perceived render time The time it takes to render the visible part of a Web page. This time can be an indicator of the time users have to wait until a Web page is operable. Application Monitoring can calculate this metric with the JavaScript Agent.
Percentage of affected visits (availability) The percentage of visits affected by availability related problems.
Percentage of affected visits (performance) The percentage of visits affected by performance related errors.
Performance The percentage of user actions that have not experienced application performance problems.
Processing time Time between the response being delivered and the OnLoad event.
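The navigation timing steps listed above are deltas between consecutive timestamps of the kind exposed by the W3C Navigation Timing API. A hypothetical sketch that derives a few step durations from raw millisecond timestamps (the field names are illustrative, not the product's actual attributes):

```python
def navigation_timing_steps(t: dict) -> dict:
    """Break a page load into navigation timing step durations (all values in ms).

    The input is assumed to hold monotonically increasing timestamps, loosely
    modeled on the W3C Navigation Timing attributes.
    """
    return {
        "dns": t["dns_end"] - t["dns_start"],
        "connect": t["connect_end"] - t["connect_start"],
        "request": t["response_start"] - t["request_start"],  # document request
        "response": t["response_end"] - t["response_start"],  # document response
        "processing": t["onload"] - t["response_end"],        # document processing
    }

timestamps = {
    "dns_start": 0, "dns_end": 12,
    "connect_start": 12, "connect_end": 47,
    "request_start": 47, "response_start": 230,
    "response_end": 410, "onload": 900,
}
steps = navigation_timing_steps(timestamps)
# For contiguous timestamps like these, the step durations sum to the
# total document load time (here 900 ms).
```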

Request start Represents the amount of time spent until the document request can be sent. This includes DNS, Connect, SSL and URL redirects.
Server time The server portion of the time it took to complete the user action.
SSL setup time Time spent establishing a secure socket connection from the browser to the web server.
TCP connection time Time spent establishing a socket connection from the browser to the web server.
Third party time The time it takes to load content from external domains marked as Third Party in the Application Monitoring server configuration.
Unique visits The number of unique visits reported by the Application Monitoring server.
URL redirection time Time spent following HTTP redirects.
User actions The number of user actions, excluding failed attempts.
User actions (fast) The number of fast user actions.
User actions (slow) The number of slow user actions.
User actions (slow and failed) The total number of slow and/or failed user actions.
User actions attempts The number of user action attempts during the reported period, including failed attempts.
User actions attempts breakdown User action attempts broken into failed, slow and fast.
User actions breakdown User actions broken into slow and fast.
User action time The time it takes to complete a user action.
User action time with CNS breakdown Time it took to complete the user action, broken down into Client, Network and Server time.

RUM Browser baselines

RUM Browser baselines dimensions

Application A universal container that can accommodate transactions, as reported by the Application Monitoring server. An application reported by the CAS can comprise one or more Application Monitoring applications, as set in the Business Units configuration.

Application Monitoring Application The name of the Application Monitoring application used to perform a user action.
Application Monitoring Server ID Identifier of the Application Monitoring server.
Application Monitoring System Profile Name of the Application Monitoring System profile used.
Baseline Source The baseline source. Two possible values: pinned or average.
Browser/client version The version of the Web browser. Reported only for the Desktop client type.
Browser name The name of the Web browser.
Browser OS The name of the OS hosting the Web browser. Example: Windows 5.0.
Client area Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client city The city of the client.
Client country The country of the client.
Client geographical region The region of the client.
Client name User Name / Visit Tag as reported by the Application Monitoring server.
Client region Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions and/or on clients' BGP Autonomous System names. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client site Sites, areas, and regions define a logical grouping of clients and servers into a hierarchy. They are based on manual definitions, clients' BGP Autonomous System names, CIDR blocks or subnets. Sites are the smallest groupings of clients and servers. Areas are composed of sites. Regions are composed of areas.
Client type Type of the client: desktop, mobile, synthetic or SAP GUI client.
Data source The name of the data source, used when you have configured a number of associated report servers as data sources on the DMI screen.
Destination URL Target URL as reported by the Application Monitoring server and normalized by the CAS.

Device The name of the manufacturer and the mobile device used to perform a user action, for example Samsung Galaxy S Plus. Reported only for the Mobile client type.
Page title Page title as reported by the Application Monitoring server and normalized by the CAS.
Record type Type of the RUM Browser record. UEM is available for web and mobile native applications. This dimension specifies which type of application, web page or mobile, was used to execute the operation.
Source URL The source URL as reported by the Application Monitoring server and normalized by the CAS.
Time The time stamp of the data presented on the report.
Transaction A universal container that can accommodate operations. This dimension refers only to transactions without errors.
Transaction step The step as configured in a transaction definition. Step configuration is built on DCRUM data using operations, tasks, modules or services. Steps are contained within transactions and carry the entire transaction configuration.
Transaction step sequence number The sequence number of a step is used for presentation purposes. It marks the order of a particular step in a transaction configuration. You can order steps within each transaction if such an ordering makes sense for the overall monitored application paradigm. The transaction step sequence does not affect data aggregation.
User action name The name of the user action, for example "Loading of a Page" or "Click on Book Now", as reported by the Application Monitoring server and normalized by the CAS.

RUM Browser baselines metrics

Availability (total) The percentage of user actions that did not experience availability related problems.
CDN time The time it takes to load content from external domains marked as CDN (Content Delivery Network) in the Application Monitoring server configuration.
Client time The client portion of the time it took to complete the user action.
DNS lookup time Time spent resolving domain names.
Document fetch done In addition to everything until "Request start", this includes the amount of time spent until the document (HTML) was completely downloaded from the HTTP server.

Document interactive Represents the amount of time spent until the document became interactive. This includes all steps done for "Document fetch done", including "Request start".
Document load time The total time to load the document. This includes all steps done for "Document interactive", including "Request start" and "Document fetch done".
Document request time Time spent waiting for the first byte of the document response.
Document response time Time spent downloading the document response.
Download size The size of the downloaded content.
Failures (total) The number of failed user action attempts.
Hits within user actions The number of hits performed to complete the user action.
Navigation timing steps The user action time broken into the navigation timing steps:
- DNS: Time spent resolving domain names.
- Connect: Time spent establishing a socket connection from the browser to the web server.
- SSL: Time spent establishing a secure socket connection from the browser to the web server.
- URL redirection: Time spent following HTTP redirects.
- Document request: Time spent waiting for the first byte of the document response.
- Document response: Time spent downloading the document response.
- Document processing: Time between the response being delivered and the OnLoad event.
Network latency The network latency between the browser and the Web or application server, calculated by the RUM Browser JavaScript Agent (JavaScript Agent signal round trip).
Network time The network portion of the time it took to complete the user action.
Perceived render time The time it takes to render the visible part of a Web page. This time can be an indicator of the time users have to wait until a Web page is operable. Application Monitoring can calculate this metric with the JavaScript Agent.
Performance The percentage of user actions that have not experienced application performance problems.
Processing time Time between the response being delivered and the OnLoad event.

Request start Represents the amount of time spent until the document request can be sent. This includes DNS, Connect, SSL and URL redirects.
Server time The server portion of the time it took to complete the user action.
SSL setup time Time spent establishing a secure socket connection from the browser to the web server.
TCP connection time Time spent establishing a socket connection from the browser to the web server.
Third party time The time it takes to load content from external domains marked as Third Party in the Application Monitoring server configuration.
URL redirection time Time spent following HTTP redirects.
User actions The number of user actions, excluding failed attempts.
User actions (fast) The number of fast user actions.
User actions (slow) The number of slow user actions.
User actions (slow and failed) The total number of slow and/or failed user actions.
User actions attempts The number of user action attempts during the reported period, including failed attempts.
User actions attempts breakdown User action attempts broken into failed, slow and fast.
User actions breakdown User actions broken into slow and fast.
User action time The time it takes to complete a user action.
User action time with CNS breakdown Time it took to complete the user action, broken down into Client, Network and Server time.

Tools

Alert Log

Alert Log dimensions

Data source The name of the data source, used when you have configured a number of associated report servers as data sources on the DMI screen.

Description
The actual alert message that was issued when the alert was triggered. All parameter values are real values (not parameter variables).

Id
The alert identifier shown in the ID column of the Defined alerts list in Alert Definitions.

Name
The name of the alert definition.

Parameter (1) through Parameter (25)
The parameter you set up as the detector setting for the given alert. This is usually one of the threshold names you set up for the alert detector. The parameters are numbered sequentially in the order you add them when editing an alert definition.

Recipient
For alerts configured to be sent by e-mail, this is the e-mail address of the user defined as the alert recipient. Note that when you choose this dimension to be included in the report, the values for Number of generated alerts and Number of sent e-mails will show data only for the given alert recipient.

Severity
The state of the alert. It helps you determine whether the conditions for triggering the alert still occur or no longer occur. There are four possible alert states:
Alert started - conditions for triggering the alert were recorded within at least one monitoring interval.
Alert finished - after Alert started was issued, the conditions for triggering the alert were not recorded for at least one monitoring interval.
Alert repeated - conditions for triggering the alert persist for more than one monitoring interval.
Timeout - conditions for triggering the alert have persisted for so long that it is impossible to determine the actual alert state.

Time
The time at which generation of the alert started.

Type
The type of the alert.

Alert Log metrics

Maximal value of parameter (1) through Maximal value of parameter (25)
The maximum value of the corresponding parameter defined in the detector settings.

Number of generated alerts
The total number of alerts generated.

Number of sent e-mails
The number of notifications delivered by e-mail. Note that a single e-mail message may contain more than one notification.

Value of parameter (1) through Value of parameter (25)
The mean value of the corresponding parameter defined in the detector settings. The parameters are numbered sequentially in the order you add them when editing an alert definition.
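The Severity states follow a simple pattern over successive monitoring intervals. The sketch below is a minimal illustration of that pattern (Python), not product code; the Timeout state is omitted for simplicity.

```python
# Sketch: deriving the Severity values described above from a sequence of
# monitoring intervals, where each boolean records whether the alert's
# triggering conditions occurred in that interval.
# The Timeout state is omitted for simplicity.
def severities(intervals):
    states, active = [], False
    for triggered in intervals:
        if triggered and not active:
            states.append("Alert started")   # conditions first recorded
            active = True
        elif triggered:
            states.append("Alert repeated")  # conditions persist
        elif active:
            states.append("Alert finished")  # conditions no longer recorded
            active = False
    return states

# Conditions hold for two intervals, then clear:
print(severities([True, True, False]))
# -> ['Alert started', 'Alert repeated', 'Alert finished']
```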

379 APPENDIX B Tools Data Views

Alert Log

Tools data views provide access to alert data stored in the CAS database and to additional report parameters. Users can build custom reports based on these data views using the Data Mining Interface (DMI).

Alert Log dimensions

Data source
The name of the data source, applicable when you have configured a number of associated report servers to be used as data sources on the DMI screen.

Description
The actual alert message that was issued when the alert was triggered. All parameter values are real values (not parameter variables).

Id
The alert identifier shown in the ID column of the Defined alerts list in Alert Definitions.

Name
The name of the alert definition.

Parameter (1) through Parameter (25)
The parameter you set up as the detector setting for the given alert. This is usually one of the threshold names you set up for the alert detector. The parameters are numbered sequentially in the order you add them when editing an alert definition.

Recipient
For alerts configured to be sent by e-mail, this is the e-mail address of the user defined as the alert recipient. Note that when you choose this dimension to be included in the report, the values for Number of generated alerts and Number of sent e-mails will show data only for the given alert recipient.

Severity
The state of the alert. It helps you determine whether the conditions for triggering the alert still occur or no longer occur. There are four possible alert states:
Alert started - conditions for triggering the alert were recorded within at least one monitoring interval.
Alert finished - after Alert started was issued, the conditions for triggering the alert were not recorded for at least one monitoring interval.
Alert repeated - conditions for triggering the alert persist for more than one monitoring interval.
Timeout - conditions for triggering the alert have persisted for so long that it is impossible to determine the actual alert state.

Time
The time at which generation of the alert started.

Type
The type of the alert.

Alert Log metrics

Maximal value of parameter (1) through Maximal value of parameter (25)
The maximum value of the corresponding parameter defined in the detector settings.

Number of generated alerts
The total number of alerts generated.

Number of sent e-mails
The number of notifications delivered by e-mail. Note that a single e-mail message may contain more than one notification.

Value of parameter (1) through Value of parameter (25)
The mean value of the corresponding parameter defined in the detector settings. The parameters are numbered sequentially in the order you add them when editing an alert definition.

Report Parameters

This data view provides dimensions that store the state of the report. These dimensions can be used to selectively show report data (for example, metric values or baseline values) without the need to define several separate reports.

Report Parameters dimensions

Data source
The name of the data source, applicable when you have configured a number of associated report servers to be used as data sources on the DMI screen.

Parameter 1 through Parameter 19
One of the parameters that store the state of the report. You can change its name on the Result Display tab.

387 APPENDIX C Dynatrace Network Analyzer On-Demand Trace-To-Report Conversion

Some CAS detail reports that include Synthetic Monitoring data may have links for DNA if this was configured in the Enterprise Synthetic Console. You can access the DNA reports or trace files directly from the CAS using the Transaction Trace report or the Link to Transaction Trace file dimensions. Links to DNA are provided for transactions after you have configured the CAS to receive data from the Enterprise Synthetic Agent Manager.

Links to DNA File

When the Enterprise Synthetic Agent Manager sends transaction information to the CAS, you can access either the trace file or just baseline data (not actual measurements). The type of data is indicated by an icon shown in the Link to Transaction Trace file column: one icon links to a DNA trace file, and another links to DNA baseline data.

Central Analysis Server Dynatrace Network Analyzer Trace-to-Report Conversion States

Before user input, DNA link status contains a Click to Request Report link, and the cells in the DNA request time and Link to DNA report columns are empty. If you hover over the trace file icon, a tooltip shows a link to the trace file. After you click the Click to Request Report link, an auxiliary window displays conversion messages. At this stage, the message Please wait, report conversion started is displayed, the report reloads, DNA link status changes to Report Requested, and the date of the conversion request appears in DNA request time.

When report conversion is complete, the generated report is displayed inside the auxiliary window. The report reloads again and DNA link status shows Click to View Report. Link to DNA report contains an icon that you can click to display the DNA report. After the report is generated and displayed, DNA request time shows the exact conversion date.

Request Timeout Alerts

If conversion takes more than five minutes, an alert window informs you that the request has timed out. The report reloads, the DNA link status content is reset, and Click to Request Report appears again. An additional warning icon is shown in the DNA request time column, informing you that report conversion failed. If conversion from a trace file to a report fails and an error message is stored within the trace file, the auxiliary window displays the error message and the report reloads.

Custom DMI Reports

DMI provides all metrics and dimensions that you can add to your transactions report. You can create a custom report showing metrics available in the Sequence transaction data data view. For a list of metrics and dimensions available in the Sequence transaction data data view, see Synthetic and sequence transaction data [p. 315]. You can also customize existing reports generated in DMI by clicking Edit report in the Actions list. For more information on how to use DMI, refer to the Data Mining Interface (DMI) User Guide, which you can access from Help > Books > Data Mining Interface User Guide.

APPENDIX D

Network Performance Calculations

Assuming that the network's role is to transfer data to the end user effectively, network performance is the percentage of total traffic that is transferred effectively. Low network performance does not always mean a poor end-user experience; some applications are less dependent than others on network conditions.

The measured total traffic includes both directions of data transfer, to and from the client (downstream and upstream), including bytes transferred internally within the site. The criteria for effective data transfer can be customized with regard to the Key Performance Indicators (KPIs) used and their thresholds.

Key Performance Indicators (KPIs)

- Retransmissions (loss rate), in both directions.
- Latency (round-trip time), in both directions, measured for:
  - TCP handshakes only (the End-to-end RTT metric).
  - Any ACK packet (the Client ACK RTT metric). This is disabled by default. In some situations, however, end-to-end RTT may prove insufficient and you may need to add ACK RTT as a performance criterion. This can occur with long session durations, when RTT measurements are infrequent.
- Effective throughput (realized bandwidth). This is disabled by default.

Thresholds

- Normals per site. In this case, you set the KPI cut-off thresholds (the maximum values for which the metric is treated as good).
- Constants, which are fixed, customizable thresholds for simpler calculations.

For more information, see Configuring Thresholds for Network Performance Calculations in the Data Center Real User Monitoring Administration Guide.

Measurement Logic

In every reporting interval, the configured KPIs are evaluated against their thresholds (see Figure 32. Comparing KPI Values with the Configured Threshold [p. 390]). For the CAS, these thresholds are equal to the network site's average values of RTT and loss rate for all of the business days within the past 10 elapsed calendar days, increased by a predefined factor. By default, this factor is equal to 100%, so that if the loss rate or RTT exceeds twice the average value, bytes transferred during that time are considered to have been transferred while network-related problems were present.

Figure 32. Comparing KPI Values with the Configured Threshold
[Chart: KPI value over time, compared against the configured threshold line.]

Figure 33. Calculating Network Performance
[Chart: traffic over time, divided into portions of 3 MB, 1 MB, and 1 MB.]

The analysis of network performance is done on the lowest level: a single user using a single service.
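The adaptive threshold described above can be sketched as follows. This is an illustrative reconstruction, not the actual CAS implementation; the function name and data layout are assumptions.

```python
def kpi_threshold(business_day_averages, factor=1.0):
    """Compute a KPI threshold from per-business-day averages of the
    metric (RTT or loss rate) over the past 10 elapsed calendar days,
    increased by a predefined factor. The default factor of 100%
    (factor=1.0) makes the threshold twice the average value."""
    avg = sum(business_day_averages) / len(business_day_averages)
    return avg * (1.0 + factor)

# Example: daily average RTTs of 40, 50 and 60 ms give an average of
# 50 ms, so with the default factor the threshold is 100 ms.
threshold = kpi_threshold([40.0, 50.0, 60.0])
```

Bytes transferred while the measured KPI exceeds this threshold are flagged as transferred during network-related problems.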

Each portion of data transferred between the user and a service is flagged as affected or not affected, depending on the KPI values. The percentage of unaffected traffic is reported as network performance (see Figure 33. Calculating Network Performance [p. 390]). In this example, network performance = 4/5 * 100% = 80%.

NOTE: The average values of RTT and loss rate that are used in network performance measurements are by default calculated only for recognized software services, so not monitored or unknown traffic is not taken into account. If you want the calculation to be based on all traffic, you can change the default configuration of the AMD.

Analyzing Network Performance Problems

Understanding the metric logic is not required to find areas for performance improvement. Follow these rules to analyze network-related problems:

1. Problems are more obvious at high time resolution.
2. Problems are more obvious at lower levels (at the site level rather than at the network level, and at the client level rather than at the site level).
3. Find the underperforming sites first.
4. After you find the underperforming sites, determine whether all or only some users are having problems.
5. Display metric charts and analyze KPI values and transferred traffic during periods of low network performance.

To learn about site performance, see Sites Report [p. 123]. To learn about verifying user-related problems, see All Users Report [p. 161].
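The calculation above can be sketched directly: flag each transferred portion as affected or not, then report the unaffected share of the total. This is a simplified illustration, not the report server's actual code; the function name and input format are assumptions.

```python
def network_performance(portions):
    """portions: list of (bytes_transferred, affected) pairs, where
    'affected' means a KPI exceeded its threshold while the bytes
    were transferred. Returns the percentage of unaffected traffic,
    which is reported as network performance."""
    total = sum(size for size, _ in portions)
    unaffected = sum(size for size, affected in portions if not affected)
    return 100.0 * unaffected / total

# The example from Figure 33: portions of 3 MB and 1 MB are
# unaffected, and one 1 MB portion is affected.
portions = [(3_000_000, False), (1_000_000, False), (1_000_000, True)]
# network_performance(portions) -> 80.0
```

This reproduces the worked example: 4 MB of 5 MB unaffected gives 80% network performance.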


APPENDIX E

Graphical Explanation of Network Performance Metrics

Three figures illustrate network performance metrics from the perspective of the client, the AMD, and the server.

Figure 34. Graphical Representation of Client RTT
[Sequence diagram: SYN, SYN ACK, and ACK packets exchanged between Client, AMD, and Server, with timestamps T1 through T9 marking the Client RTT interval.]

Figure 35. Graphical Representation of Server RTT
[Sequence diagram: SYN, SYN ACK, and ACK packets exchanged between Client, AMD, and Server, with timestamps T1 through T9 marking the Server RTT interval.]

Figure 36. Graphical Representation of Client/Server Metrics, Including ACK RTT
[Sequence diagram of a complete page load between client and server: DNS lookup, TCP handshake (SYN, SYN ACK, ACK), SSL connection setup, HTTP requests and redirects, and image downloads. Marked intervals include TCP SYN time, RTT, SSL connection setup time, request time, redirect time, Server ACK RTT and Client ACK RTT (each counted as one half), HTTP server time, server think time, server time, network time (parts 1 and 2), image server response times, and the overall operation time or page load time.]

XML and SOAP transactions are depicted differently. A single XML transaction consists of two parts: request and response. Each part contains the data counted from the opening XML tag to the closing XML tag and as such consists of one or more packets carrying this data. The TCP handshake, SSL handshake, and HTTP header are not treated as part of the transaction and are therefore skipped in XML-related measurements.


Glossary

The following glossary contains definitions of terms used across the DC RUM documentation. For definitions of metrics provided by DC RUM in DMI data views, see Central Analysis Server Data Views [p. 197].

alert
An event notification generated by the report server when certain predefined events occur or when selected parameters related to user sessions, applications, and server activity reach predefined threshold levels.

All other
The object classification assigned to all clients who have not been assigned to an explicit site.

analyzer
A software component provided by Dynatrace to perform monitoring and traffic analysis. The report server uses analyzers to monitor operations for specific software services based on popular protocols, such as HTTP, provided that the underlying transport protocol is TCP, or UDP only in the case of DNS-based software services. The report server can also analyze and report statistical information on non-transactional UDP-based or IP-based protocols. For more information, see Concept of Protocol Analyzers in the Data Center Real User Monitoring Administration Guide.
Synonym: decode.

application
In DC RUM reports, a universal container that can accommodate transactions. Each application can contain one or more transactions. For more information, see Managing Business Units in the Data Center Real User Monitoring Administration Guide.

area
In the context of the DC RUM report server, a collection of sites. An area has the same properties as a site, but refers to a larger entity. Areas cannot overlap. Any given site can belong to one and only one area. See also site and region.

bandwidth usage
A measurement calculated as the number of bits transferred during a specific time interval divided by the time interval. This measurement does not take into account factors such as inactive periods when the application was not attempting to transfer data, or the transmission loss rate.

baseline data
Data from the last several days (usually nine days) aggregated into one average or typical day. Baselines are necessary for considering the variations in traffic on different days of the week or random anomalies in traffic load, and for comparing traffic with a known baseline from a specific point in time.
Baseline data is generated once a day, in the background, after the arrival of data from the first monitoring interval after 00:10 am. Baseline data is not averaged over the day and therefore may vary rapidly depending on the time of day, just as monitored data would: each monitoring interval is assigned the value averaged over the nine-day period for that specific monitoring interval.
Requesting baseline data for Yesterday yields the same results as requesting baseline data for Today, because baseline data for yesterday is still calculated over the last nine days counting from today.

Class of Service (CoS)
The name identifying a Type of Service value. The mapping of Class of Service names to different values of Type of Service is defined in the report server configuration. See also Type of Service.

client
In the context of the DC RUM report server, the IP address of a user. Users can be identified by their IP addresses or in a number of other ways, such as by HTTP cookie contents or VPN login names.

client internal IP address
Term used by the report server in relation to virtual private networks, where external users of the network appear inside the network under different (internal) IP addresses.

custom metric
A user-defined metric that extracts values from HTTP or XML requests (for example, HTML pages or SOAP messages). Each custom metric can be displayed as a sum of values or as their average. The sum metrics can be used to trace users or resources that use the most or least resources (for example, clients

who make the largest money transfers in a bank or purchase large quantities of items from an online bookstore). The average metrics can help in observing trends. For information on defining custom metrics, refer to the RUM Console Online Help.

custom tier
A tier that can be modified by a user. See also tier.

decode
A synonym for analyzer.

Default Data Center site
The classification for any server that has not been assigned to an explicit site.

downstream
In the context of the report server, the direction of traffic to a given region, area, site, or host.

front-end tier
In a user-defined configuration, the system architecture layer that is closest to the end user. See also tier.

host
A system component that participates in data exchange. A host can be either a server or a client machine, depending on the context and the direction of the monitored traffic.

local
The specified site for which the report server is displaying data. Local and remote are defined in the context of a particular site, area, or region. When displaying data about a specified site, area, or region, the report server refers to that site as local and to other sites as remote. If a report contains sections that focus on data from different sites, each site in turn is designated as local.

monitoring interval
In the context of Global Configuration of the report server, the length of the shortest individual traffic-monitoring period. This period is usually a short interval of a few minutes. The latest values in a report are from the last closed monitoring interval, that is, from the last traffic-monitoring period. The monitoring interval is not the total time interval covered by the report.

monitored session
The session identified by application, server IP address, client IP address, and operation.

normal value
A baseline value collected based on the last several days (usually nine) and aggregated to calculate a typical value of a measure. For more information, see baseline data [p. 398].

network ID
The unique identifier assigned to a user for logging in to the network. Depending on the report server configuration, the network ID may be an IP address, HTTP authorization ID, HTTP cookie-based ID, VPN ID, or static user name mapping.

network performance
The percentage of total traffic that did not experience network-related problems (traffic in which the values of loss rate and RTT did not exceed configured thresholds). For more information, see Network Performance Calculations [p. 389].

not monitored TCP
TCP traffic that is not associated with a monitored application. This term is related to smart application monitoring. If smart application monitoring is enabled, application session information captured and reported by the AMD is not stored immediately in the report server database; it has to meet smart application monitoring thresholds before it is stored.

not monitored UDP
UDP traffic that is not associated with a monitored application. This term is related to smart application monitoring. If smart application monitoring is enabled, application session information captured and reported by the AMD is not stored immediately in the report server database; it has to meet smart application monitoring thresholds before it is stored.

Privacy Enhanced Mail (PEM)
A Base64-encoded DER certificate, enclosed between -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----.

protocol
In the context of the report server, layer 4 protocols according to the OSI model. The report server recognizes UDP-based and TCP-based protocols.

realized bandwidth
The actual transfer rate of application data when the transfer attempt occurred. This measurement takes into account factors such as loss rate and retransmission. The realized bandwidth is calculated as the size of the actual transfer divided by the transfer time. This metric reflects transient conditions on the network during the times when the transfer occurred. When the metric is averaged over a longer time interval, the average value is calculated only for those time sub-intervals in which actual data transfer attempts took place.

region
In the context of the report server, a collection of areas. A region has the same properties as an area, but refers to a larger entity. Regions cannot overlap. Any given area can belong to one and only one region. See also area and site.

remote
A site other than the specified site for which the report server is displaying data.

Local and remote are defined in the context of a particular site, area, or region. When displaying data about a specified site, area, or region, the report server refers to that site as local and to other sites as remote. If a report contains sections that focus on data from different sites, each site in turn is designated as local.

report server
A common name for the Central Analysis Server (CAS) or Advanced Diagnostics Server (ADS). The report server is the part of Data Center Real User Monitoring responsible for measurement data processing, storage, and report generation. It connects to one or more AMDs and processes the measurement data into a relational database of measurements. The database is then used to serve interactive reports to the Data Center Real User Monitoring system user.

reporting group
A universal container that can accommodate software services, servers, operations, or any combination of these. Reporting groups can contain software services of every type, but they were designed especially for HTTP-based services.

Riverbed Steelhead
A third-party appliance based on technology that optimizes the performance of TCP applications operating in a WAN environment. Steelhead combines data streamlining, transport streamlining, and application streamlining to improve WAN traffic performance. The software that runs a Steelhead appliance is called the Riverbed Optimization System (RiOS). Steelhead is generally deployed as a physical or virtual appliance; mobile and software versions are also available.

server
In the context of the report server, the recipient of a TCP session or request (SYN packet), over TCP or UDP. Servers listen on specified TCP/UDP ports, accept incoming requests, and reply to them. Usually, but not always, a server is a computer running software that offers a service or a number of services on one or more of the computer's ports. Servers are said to host software services. A server is identified by a unique IP address. This IP address appears on reports unless the server's name can be resolved by means of a Domain Name Server (DNS), in which case the server's name is used instead.

server from site
The category assigned to application session information that does not meet smart application monitoring thresholds. If smart application monitoring is enabled, application session information captured and reported by the AMD is not stored immediately in the report server database; it has to meet smart application monitoring thresholds. Sessions that meet the thresholds are stored under their server IP addresses, while those that do not are stored as server from site. An example of such traffic is network scanning by a workstation infected by a virus: such a workstation scans a large number of IP addresses, which are not reported individually but on a per-site basis.

site
An IP network from which users log in to a monitored network.

A site can be a range of IP addresses set manually, referred to as a class C IP network; an automatically set class B network; a range of addresses defined by a customized network mask; or a set of IP networks based on BGP routing table analysis. Sites can be grouped together into areas, which in turn can be grouped together into regions. See also area, region, and All other.

site realized bandwidth
A weighted average of the software service realized bandwidth values for all services accessed from a particular site, weighted by the number of operations.

software service
A service, implemented by a specific piece of software, offered on a TCP or UDP port of one or more servers and identified by a particular TCP port number. Software services are identified on reports by either port numbers or assigned names. It is possible to configure the report server to define software services as services on particular ports of particular servers. In this case, a software service is identified by a combination of port number and server IP address.

synthetic agent
A simulator of user traffic to a given web site. Synthetic agents are designed to measure web site availability and performance. They are usually distributed over a number of different geographical locations. The report server is able to distinguish synthetic traffic from real user traffic.

TCP availability
The percentage of successfully completed connection attempts from the region, area, or site. By default, the measurement algorithm for this metric is based only on traffic that is generated by recognized applications or scanning attempts, which means that not monitored or unknown traffic is not taken into account.

TCP session
A collection of TCP packets exchanged between a given pair of client and server addresses, using a specific server port and client ports.

tier
A specific point where DC RUM collects performance data. For more information, see Multi-Tier Reporting [p. 98].

time
The report server uses a granular concept of time, where events are recorded as having occurred at the beginning of their monitoring intervals: that is, all events that occurred during a monitoring interval are time-stamped with the time corresponding to the beginning of that monitoring interval. If you need to specify a time in a report server input field, do so according to the format defined in the operating system settings on the report server computer.

transaction
Any of the following:

- A single operation, such as a web page load.
- A sequence of operations. DC RUM monitors sequences of web page loads and sequences of XML calls, and it reports both on these sequences as transactions and on the individual operations within the transactions.
- A defined collection of non-sequenced operations.

A transaction defines a logical business goal, such as registration in an online store. One or more transactions together constitute an application. A transaction can have only one parent application. Data for a transaction can come from an Enterprise Synthetic agent or an AMD. The same transaction can contain data from different data sources at the same time; however, metrics for each of the data sources are aggregated separately. For more information, see Managing Business Units in the Data Center Real User Monitoring Administration Guide.

Type of Service (ToS)
A traffic identifier contained in an 8-bit field in the IP packet header (comprising a 6-bit Differentiated Services Code Point (DSCP) field and a 2-bit Explicit Congestion Notification (ECN) field). The contents of this field can be detected by the report server and displayed in reports. The use of this field is application-specific: it is used by applications to denote special types of traffic. See also Class of Service.

unknown TCP proto
TCP traffic that has not been recognized as belonging to a particular application. This situation can occur if the traffic is not defined in the Monitoring Configuration as belonging to a particular application and has not been classified automatically by the autodiscovery mechanism.

unknown UDP proto
UDP traffic that has not been recognized as belonging to a particular application. This situation can occur if the traffic is not defined in the Monitoring Configuration as belonging to a particular application and has not been classified automatically by the autodiscovery mechanism.

upstream
In the context of the report server, the direction of traffic from a given region, area, site, or host.

URI
Uniform Resource Identifier. A URI provides a way to identify abstract or physical resources on the World Wide Web. It is a syntax for encoding the names and addresses of objects: the general form of such an address. A URL (Uniform Resource Locator) is a specific address, used with a protocol such as HTTP or FTP, that follows the general URI format. See also URL.

URL
Uniform Resource Locator.

The URL provides a standard way of specifying the location of a resource on the Internet: it is an Internet address. Resources are often web pages (HTML documents), but they can also be text or PDF documents, images, downloadable files, services, electronic mailboxes, or many other objects. URLs make resources available under a variety of naming schemes and access methods (such as HTTP, FTP, and email), addressable by one simple, uniform method.

user
Users can be identified by their IP addresses or in a number of other ways, such as by HTTP cookie contents or VPN login names. The term client, in the context of the report server, refers to the IP address of a user. See also client.

user session
A collection of transactions identified by a specific cookie value. A new cookie value sent by the client starts a new user session; a new cookie value issued by the server does not signify the start of a new session. The report server distinguishes between different user sessions by analyzing HTTP cookie information, that is, the contents of a particular named cookie or, depending on the report server configuration, the contents of all the cookies in HTTP transactions. For example, a user sends requests with cookie ABCD=1234. In one of the responses, the server changes the value to ABCD=5678. The report server recognizes subsequent requests with cookie value ABCD=5678 as a continuation of the session: no session count is increased.

virtual IP address (VIP)
A network interface that enables users to use IP addresses not directly related to the actual physical hardware. In systems that do not use virtual IP addresses, if an interface fails, any connections to that interface are lost. With virtual IP addressing on the system and routing protocols within the network providing automatic rerouting, recovery from failures occurs without disruption to the existing user connections that use the virtual interface, as long as packets can arrive through another physical interface.
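The cookie rule in the user session entry above lends itself to a short sketch. This is an illustrative reconstruction of the stated rule, not the report server's actual code; the function name and event format are assumptions.

```python
def count_sessions(events):
    """events: ordered list of (origin, cookie_value) pairs, where
    origin is "client" or "server". A brand-new cookie value sent by
    the client starts a new session; a new value issued by the server
    merely continues the current session."""
    sessions = 0
    known = set()  # cookie values belonging to the current session
    for origin, value in events:
        if origin == "client" and value not in known:
            sessions += 1   # client presented an unseen cookie value
            known = {value}
        else:
            known.add(value)  # server-issued value joins the session
    return sessions

# The example from the glossary: the client sends ABCD=1234, the
# server rewrites it to ABCD=5678, and the client continues with it.
events = [("client", "1234"), ("server", "5678"), ("client", "5678")]
# count_sessions(events) -> 1 (no new session is counted)
```

By contrast, two different values both originated by the client would count as two sessions.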
Virtual Private Network (VPN)
The provision of private voice and data networking from the public switched network through advanced public switches. The network connection appears to the user as an end-to-end circuit, without actually involving a permanent physical connection, as in the case of a leased line. VPNs retain the advantages of private networks but add benefits such as capacity on demand. The report server can monitor multiple VPNs. There is no fixed limit to the number of monitored VPNs and remote users; however, the capacity of the monitoring software depends on the overall system performance and on the VPN traffic.

WAN Optimization
A Wide Area Network deployment in which software and network services are optimized through two or more WAN Optimization Controllers (WOCs). The goal of WAN optimization is to improve application response time and reduce the required bandwidth over a WAN connection by using a WAN controller on each end of the WAN link. A WAN Optimizer is deployed on either end of a WAN connection to optimize the traffic sent over the WAN. The WAN Optimizer classifies, prioritizes, and compresses network data, caches

network traffic, and streamlines protocols to maximize the performance of a service delivered over a distributed network. The most common optimization techniques involve:

Transport (TCP) optimization. TCP flow-control round trips are reduced by:
- Fast error recovery
- Mitigated slow start
- Window scaling
- Pre-established TCP connection pools between the WAN-optimizing appliances

Payload Optimization. The TCP payload is indexed and stored on disk on each side of the WAN:
- Data segments (blocks) are replaced with references to this data
- Byte-level indexing is independent of the application or file

Application Acceleration. Application-specific acceleration is used to reduce application traffic. In the Common Internet File System (CIFS), SMB emulation is used:
- By spoofing the CIFS protocol
- By reading ahead and writing behind
Specific modules can be made available from individual vendors for a specific application.

Using a combination of these techniques and setting up the acceleration appliances to act as proxy servers can accelerate the end-user experience significantly.

WAN Optimization Controller (WOC)
WAN optimization controllers (WOCs) are physical devices that transparently intercept local network traffic, optimize it, and send the optimized traffic over the WAN link to the receiving controller. On the other side of the WAN, the receiving WOC transparently converts the optimized traffic from the WAN link back into normal network traffic. The typical WAN optimization scenario involves at least two WOCs located between the data center (or a server) and a branch office (or a client).

Wide Area Application Engine (WAE)
A Cisco platform that consists of a portfolio of network appliances that host Cisco WAN optimization and application acceleration solutions, enabling branch-office server consolidation and performance improvements for centralized applications and content across the WAN.
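The payload optimization technique described above (replacing repeated data blocks with references) can be illustrated with a toy block-level deduplication sketch. This is a simplification for illustration only; real WOCs use byte-level indexing and persistent on-disk stores, and the names here are invented.

```python
import hashlib

def dedup_encode(payload, block_size=4):
    """Split the payload into fixed-size blocks; send each unique
    block once and replace repeats with a short reference to its
    digest, as a WAN optimizer's payload store does in principle."""
    store, stream = {}, []
    for i in range(0, len(payload), block_size):
        block = payload[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()[:8]
        if digest in store:
            stream.append(("ref", digest))       # repeated block: reference only
        else:
            store[digest] = block
            stream.append(("data", digest, block))
    return stream

def dedup_decode(stream):
    """Rebuild the original payload on the receiving side."""
    store, out = {}, bytearray()
    for item in stream:
        if item[0] == "data":
            store[item[1]] = item[2]
            out += item[2]
        else:
            out += store[item[1]]
    return bytes(out)

# A repeated 4-byte block is sent once and then referenced.
payload = b"ABCDABCDWXYZ"
# dedup_decode(dedup_encode(payload)) == payload
```

The second occurrence of "ABCD" travels as a reference, which is how repeated traffic consumes little WAN bandwidth.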
Wide Area Application Services (WAAS)
A Cisco technology that optimizes the performance of TCP-based applications operating in a WAN environment. WAAS combines WAN optimization, optimization of the Transmission Control Protocol (TCP), Data Redundancy Elimination (DRE, also known as de-duplication), and application protocol acceleration in a single appliance or blade. It runs on Wide Area Application Engine (WAE) hardware platforms, including stand-alone appliances and network modules (NME) for the Cisco Integrated Services Routers (ISRs).

407 Index Index A alerts 94, 191, canceling 191 delivery 191, logs 195 mechanism 191 notification 194 raising 191 raising and canceling conditions 191 repeating 191, 194 report 94 states 194 traps 195 type 193 anomalies 193 Citrix performance 193 diagnostics 193 generic 193 link performance 193 new objects 193 performance 193 point-to-point data 193 aliasing 78 dimension names 78 metric names 78 All Users report 161 analyzer 17, 26 groups 26 supported by CAS 17 application 97 configuration overview 97 Application Health Status report 31, 86, 89 Applications Detail Perspective 89 Applications for Site report 125 Applications report 99, 107 front-end tiers for application 107 Areas report 128 autodiscovery of software services 49, 53, 55, 148 configuring on CAS 53 drilldowns from reports 148 multithreaded decodes and 55 performance 55 B baseline data 62 browsers 14, 23 configuring 14 localization 23 versions supported 14 business perspective 96, 100 Business Service Management 25 integration 25 C Capture Packets dialog box 41 capturing network traffic 33 CAS 13, 17, 31, 85 default report 31 reports 85 supported protocols 17 Central Analysis Server, See CAS Cerner 17 Citrix Landing Page report 115 Citrix Servers report 118 Citrix Sites report 118 Citrix/WTS (presentation) tier 114 Client network tier

408 Index component 11 update 11 configuration 14, 106 browser 14 default Tiers report 106 contact information 11 Customer Support 11 customizing 72 columns 72 D data 69 resolution 69 Data Center Analysis report 146 Data Mining Interface, See DMI DB2 (DRDA) 17 CAS 17 decode 17 supported by CAS 17 dimensions 78 aliasing names 78 DMI 83 DNS 17 DRDA (DB2) 17 CAS 17 drilldown 31 reports 31 drilldown links 75 Dynatrace Network Analyzer 387 integration 387 E ing report 65 Enterprise Synthetic 131, , 387 integration 387 Epic 17 Error Details report Errors report 160 ESMTP 17 EUE Overview reports 95, 105 structure 105 Exchange/RPC 17 F front-end tier 99, 107 application 107 G graphical report 159 Metric Charts 159 H health check 77 HTTP 17 I IBM WebSphere MQ 17 ICA (Citrix) 17 ICMP 17 Informix 17 integration 24, 387 with Dynatrace Application Monitoring 24 with Dynatrace Network Analyzer 387 with Enterprise Synthetic 387 international features support 23, character encoding 23 East Asian languages 63 localized browser 23 localized reports localized server 23, user settings IP 17 J Java applets 16 support in IE 9 16 Jolt 17 K Kerberos 17 L languages 63 East Asian 63 localization 23 browser 23 character encoding 23 server

Location Health report 171
logging in to report server 29

M
mail server 65
    configuring 65
metric 78
    aliasing names 78
    column grouping 78
Metric Charts report 159
Module Breakdown by Server report 158
monitoring 96, 100
    higher level 96, 100
multithreaded decodes and autodiscovery of software services 55

N
n-tier architecture 105, 119
    See also Tiers report
name setting 78
    aliasing 78
NetFlow 17
network 393
    performance metrics 393
Network Analysis 176
    usage examples 176
Network Analysis reports 175
network performance 389
Network tier 123

O
online support site 11
Operation Breakdown by Server report 158
Operations report 153
Oracle 17

P
packet capture 33
Packet Data Mining Tasks dialog box 45
performance 55
    autodiscovery of software services 55
Port Finder 49
Product acronyms 10
protocol 17, 26
    analyzer 17
    analyzer groups 26
    supported by CAS 17

R
Regions report 129
repeating alert notification 191
repeating alert notifications 194
report 31, 64-66, 69, 72-73, 80, 83, 85-86, 89, 94, 98, 107, 118, 123, 146, 148, 168
    accessing 31
    Alerts 94
    All Users 161
    Application Health Status 86, 89
    Applications Detail Perspective 89
    Application Performance Affected Users 162
    Applications 107
    Applications for Site 125
    Areas 128
    Availability Affected Users 163
    Citrix Landing Page 115
    Citrix Servers 118
    Citrix Sites 118
    customizing columns 72
    Data Center Analysis 146
    data mining 83
    Error Details
    Errors 160
    exporting 66
    filtering and searching 73
    Hit Details 171
    language settings 64
    Location Health 171
    menu 85
    Module Breakdown by Server 158
    Modules 156
    Network Performance Affected Users 162
    Operation Breakdown by Server 158
    Operations 153
    Regions 129
    saving 66
    scheduled e-mailing 65
    Sequence Transactions Log 113
    Sequence Transactions Log - Details 114
    Servers 151
    Service Breakdown by Server 158
    Services 157
    Sites 123
    Slow Operation Cause Breakdown 166
    Slow Operation Load Sequence 170
    Slow Operation Loads 168
    Software Services 148
    Software Services Overview 150
    sorting 73
    tabular 69
    Task Breakdown by Server 158
    Tasks 154
    Tiers 98, 111
    time range 69
    time resolution

report (continued)
    tooltips 80
    Top N View 172
    Transaction Load Sequence 114
    Transactions for Application 110
    Transactions for Site 126
    User Activity 164
    User Activity - User Diagnostics 165
    User Activity - Sequence Transactions 164
    User Health 171
    Voice and Video - Availability Charts 188
    Voice and Video - Internetwork Availability Charts 188
    Voice and Video - Internetwork Network Charts 188
    Voice and Video - Internetwork Usage Charts 188
    Voice and Video - Network Charts 188
    Voice and Video - Usage Charts 188
    Voice and Video Status - Activity 182, 185
    Voice and Video Status - Areas 183
    Voice and Video Status - Conversations 181
    Voice and Video Status - Regions 183
    Voice and Video Status - Signaling - Areas 187
    Voice and Video Status - Signaling - Conversations 186
    Voice and Video Status - Signaling - Regions 187
    Voice and Video Status - Signaling - Sites 187
    Voice and Video Status - Sites 183
    Voice and Video Status - Software Services (Codecs) 180, 184
    VoIP Activity - Interval 179
    VoIP Activity - Site 179
    VoIP Overview 178
report server 14
    supported browsers 14
report server cluster 32
reporting 100, 154
    hierarchy 100
    N-level hierarchy 154
resolution, See data resolution
RUM Analysis reports 146
RUM Browser

S
SAP GUI 17
SAP RFC 17
Sequence Transactions Log - Details report 114
Sequence Transactions Log report 113
Service Breakdown by Server report 158
Sites report 123
Slow Operation Cause Breakdown 166
Smart Packet Capture 33, 35-36, 41
    configuring a traffic capture 36
    feature summary 35
    listing traffic captures 44
    overview 33
SMB 17
SMTP 17
SOAP 17
software services 49, 53, 86, 148, 150
    autodiscovery 49, 53
    on Application Health Status report 86
    overview report 150
    report 148
    user-defined 53
SSL 17
status icons 76
Synthetic Backbone
    nodes 146
    overview 142
    pages 145
    tests 143
Synthetic Backbone Tests 144
    tests with errors 144
system requirement 14
    supported browsers 14

T
table 72
    column 72
        filtering 72
        selecting 72
    customizing columns 72
tabular report 69, 75, 77
    common features 69
    drilldown links 75
    health check 77
Task Breakdown by Server report 158
TCP 17
TDS 17
tenants
    assigning on CAS 61
    defining rule on AMD 60
    managing on AMD 59
    overview 59
tier 97
    configuration overview 97
Tiers report 98-99, 106, 111, 123
    Citrix/WTS (presentation) tier 118
    Client network tier 123
    default configuration 106
    front-end tier 99
    Network tier 123
    structure 99
    WAN Optimization 119
time 69
    range 69
tooltips

Top N View report 172
    tabular 172
trace capture 33
transaction 97
    configuration overview 97
Transactions for Application report 110
Transactions for Site report 126
trend and state indicators 77
    health check 77

U
UDP 17
update 11
    component 11
    product 11
User Activity 164
    Sequence Transactions report 164
User Activity - Details report 165
User Activity - User Diagnostics report 165
User activity on demand 166
User Activity report
    link to ADS 164
User Activity - User Diagnostics 165
User Health report 171

V
voice and video 178
    status reports 178
VoIP 17
VoIP Activity - Interval report 179
VoIP Activity - Site report 179
VoIP Overview report 178

W
WAN optimization 59-60
    comparison 122
    GRE/WCCP links 121
    MPLS sites 119
    software services 120
    software services performance 123
    traffic classification
    traffic type
    VLAN

X
XML


More information

Gigabyte Content Management System Console User s Guide. Version: 0.1

Gigabyte Content Management System Console User s Guide. Version: 0.1 Gigabyte Content Management System Console User s Guide Version: 0.1 Table of Contents Using Your Gigabyte Content Management System Console... 2 Gigabyte Content Management System Key Features and Functions...

More information

Inter-Tel Web Conferencing and Remote Support

Inter-Tel Web Conferencing and Remote Support MITEL Inter-Tel Web Conferencing and Remote Support User Guide Notice This user guide is released by Inter-Tel (Delaware), Incorporated as a guide for end users. It provides information necessary to use

More information