
Tivoli

IBM Tivoli Web Response Monitor and IBM Tivoli Web Segment Analyzer

Version 2.0.0

Notes for Fixpack 1.2.0-TIV-W3_Analyzer-IF0003



Contents

Chapter 1. Introduction
Chapter 2. Limitations and New Prerequisites for the Products
    Disclaimer for pseudo-synchronous and asynchronous application transaction segment monitoring in WSA
    Setting the KFC_SRI_PIPENAME parameter for WRM using iPlanet on Solaris
    Setting the isolation mode for the IIS 6.0 filter for WRM
    Potential AIX system crash and reboot on AIX 5.3
Chapter 3. Product Enhancements
    Stability
        Resolution of known abends, hangs, and loops
        Resolution of storage leaks
        Uninterrupted operation under out-of-storage conditions
    Network environment support
    Small feature enhancements
    CPU usage management
        New CPU usage management parameters
    Storage utilization improvements
    Throughput efficiency
    New WRM Collector environment variable: SM3_LOG_SAMPLE_PERCENTAGE
    New WSA Engine features and enhancements
        Shared filter enhancement
        Filter data retention algorithm
        New default value for TRANSACTION_AGE_INTERVAL_SECONDS parameter
        New parameter: HTTP_TRANSACTIONS
        New parameter: EXCLUDE_RESPONSE_TIME_RANGE
Chapter 4. Best Practices in High Load Environments
    Increasing process parallelism
    Optimizing storage utilization
        Analyzer processing in degraded mode
        Parameters affecting Analyzer storage usage
    Minimizing output delay
        Parameters affecting Analyzer output
    Improving I/O efficiency
        Parameters affecting KFC1 API I/O
    CPU usage management
        Parameters affecting CPU usage management
    Best Practices in a High Load Environment for WRM
    Best Practices in a High Load Environment for WSA


Chapter 1. Introduction

The following are notes regarding fixpack 1.2.0-TIV-W3_Analyzer-IF0003 for the Analyzer component of the IBM Tivoli Web Response Monitor (WRM) 2.0.0 and IBM Tivoli Web Segment Analyzer (WSA) 2.0.0 products. This document provides details about the following:
• Limitations and new prerequisites for the products that you need to be aware of. See Chapter 2, Limitations and New Prerequisites for the Products.
• Enhancements to the Analyzer component that the fixpack provides. See Chapter 3, Product Enhancements.
• Best practices in a high load environment. See Chapter 4, Best Practices in High Load Environments.


Chapter 2. Limitations and New Prerequisites for the Products

The following are limitations of the WRM and WSA products that you should be aware of:

Disclaimer for pseudo-synchronous and asynchronous application transaction segment monitoring in WSA

WSA is designed as a distributed application transaction path segment bottleneck analysis tool for request-and-response transaction models. It operates in a synchronous TCP transaction environment that exhibits a cause-and-effect transaction-to-subtransaction relationship, for example, between the request to and the response from a Web server, application server, and database. Transaction application segments that exhibit pseudo-synchronous or fully asynchronous behavior, or models where the request and return paths are not the same, are currently not supported, although WSA will accurately report on the subset of bottlenecks found in the synchronous transaction segments of such an environment. Examples of non-synchronous environments include, but are not limited to, applications using WebSphere MQ and applications that send on one TCP session and receive their response on another TCP session.

Setting the KFC_SRI_PIPENAME parameter for WRM using iPlanet on Solaris

Because iPlanet on Solaris can run multiple instances of an NSAPI filter, such as the WRM HTTPS iPlanet filter, you must set the environment variable KFC_SRI_PIPENAME=DEFAULT both in the iPlanet start script and in the Analyzer's kfcmenv environment variable file. An example of this setting appears at the end of this chapter.

Setting the isolation mode for the IIS 6.0 filter for WRM

IIS 6.0 has two modes: worker process isolation mode and IIS 5.0 isolation mode. The default is worker process isolation mode. However, some application characteristics conflict with worker process isolation mode. In particular, the IIS filter for WRM has a dependency on inetinfo.exe and must run in the inetinfo.exe process. To use the IIS filter to monitor HTTPS transactions in IIS 6.0, the IIS 5.0 isolation mode must be used instead of the default worker process isolation mode.

Potential AIX system crash and reboot on AIX 5.3

If you see an AIX system crash and reboot while running the Analyzer on AIX 5.3, install APAR interim fix IY86400 on top of AIX 5300-05. The fix and installation instructions can be found at ftp://ftp.software.ibm.com/aix/efixes/iy86400/
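As an example of the KFC_SRI_PIPENAME setting described above, the same value is set in two places (the export syntax assumes a Bourne-style iPlanet start script; adjust for your shell):

In the iPlanet start script:
KFC_SRI_PIPENAME=DEFAULT
export KFC_SRI_PIPENAME

In the Analyzer's kfcmenv environment variable file:
KFC_SRI_PIPENAME=DEFAULT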


Chapter 3. Product Enhancements

The fixpack provides enhancements to the Analyzer component of the WRM and WSA products in the following general areas:
• Stability. See Stability.
• Support for current network environments. See Network environment support.
• Small features. See Small feature enhancements.
• CPU usage management. See CPU usage management.
• Storage utilization improvements. See Storage utilization improvements.
• Throughput efficiency. See Throughput efficiency.
• A new environment variable for the WRM Collector. See New WRM Collector environment variable: SM3_LOG_SAMPLE_PERCENTAGE.
• New WSA Engine features and enhancements. See New WSA Engine features and enhancements.

Stability

Resolution of known abends, hangs, and loops

The fixpack resolves all known program abends, hangs, and loops caused by the following:
• Overlaid storage
• Timing and lock serialization
• Collector/Engine disconnection and reconnection
• Addition or deletion of monitoring filters
• Program shutdown processing
• Capture interface restart
• Unmonitored UNIX signals
• Unsuccessful time synchronization
• Parsing of data contents
• Truncated capture packets
• 64-bit integer alignment issues
• Long URL sizes
• HTTP transaction forming and object merging

Resolution of storage leaks

With the fixpack, Rational Purify runs repetitively with clean reports.

Uninterrupted operation under out-of-storage conditions

High network volumes in your environment can require large amounts of virtual storage. The fixpack allows uninterrupted operation under out-of-storage conditions. This capability involves the following:
• Continued productive response time monitoring even after all available allowed address space storage has been exhausted
• Improved error checking and handling throughout program logic
• Faster buffer reuse
• Enforcement of internal queue limit thresholds
• Automatically lowered frame capture percentages to reduce workload
• Temporarily inhibited monitoring of new TCP connections
• Resumption of normal operations when storage constraints are relieved

Network environment support

The fixpack provides enhanced support for your network environment in the following areas:
• TCP sessions leveraging Type of Service (TOS)
• Virtual IP and virtual LAN configurations
• Network frame formats other than IEEE 802 or Ethernet (RFC 894)
• Very large network interface configurations
• Several hundred real or virtual interface definitions

Small feature enhancements

The following are small feature enhancements introduced with the fixpack; an illustrative configuration sketch follows this list:
• An improved KFC1 client and Analyzer time synchronization procedure to reduce WSA Engine startup delays
• Customizable KFC1 callback response time table sizes to accommodate very high transaction arrival volumes, with table reuse if storage is exhausted (KFC_API_CALLBACK_TABLE_LIMIT)
• A selectable KFC1 network interface for connecting to the Analyzer (KFC_ENV_API_SPECIFY_NIC)
• 64-bit integer support on all platforms
• An option to select either the first request packet or the last request packet as the beginning of the application response time calculation, affecting the WSA correlation window (KFC_TCP_APPL_BEGIN_LAST_PACKET)
• XML page output support (previously, XML pages were reported only as objects)
• Customizable HTTP and TCP embedded object output control by parameter (KFC_HTTP_REPORT_OBJECTS and KFC_TCP_REPORT_HTTP_OBJECTS)
• WebSEAL HTTPS filter support
• Various program changes that resolve WSA missing transaction segment problems
• Analyzer API server component binding to well-known port 12121. Previously, quick Analyzer recycling usually failed because the well-known port could not be immediately reused per the TCP implementation. Adding periodic startup retry logic to the Analyzer helps to resolve this problem and improves product usability.
• Leveraging of the ITM KBB trace set feature, which enables a customizable total number of files, file size, and file reuse. This allows capturing of hard-to-reproduce and long-running problems with predetermined disk space requirements.
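As an illustration, several of these options are controlled from the Analyzer's kfcmenv environment variable file. The values below are hypothetical and show only the format; see Chapter 4 for the full parameter descriptions. For example:

KFC_TCP_APPL_BEGIN_LAST_PACKET=Yes
KFC_HTTP_REPORT_OBJECTS=Yes
KFC_TCP_REPORT_HTTP_OBJECTS=No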

CPU usage management

The fixpack provides the following improvements to CPU management:
• Allows you to specify a maximum Analyzer CPU utilization limit
• The Analyzer monitors its own CPU usage and adjusts its internal processing controls by applying heuristic rules and algorithms, maintaining CPU consumption within 85% to 100% of the specified limit.
• The Analyzer produces the best possible monitoring transaction throughput given resource availability and CPU constraints.
• The default Analyzer behavior is no limit.

New CPU usage management parameters

The following are the new CPU usage management parameters:
• KFC_CPU_ENFORCE_MAX_LIMIT
• KFC_CPU_TARGET_THRESHOLD
• KFC_CPU_MANAGE_PERIOD
• KFC_CPU_ACTION_INTERVAL

See the descriptions of these parameters at Parameters affecting CPU usage management.

Storage utilization improvements

The fixpack provides the following improvements to storage utilization:
• Readjustment of header, data, session, and work buffer sizes to fit multiple buffers into a 4K page
• Initial allocation of multiple pages of buffers per pool
• Allocation of frequently recycled header and data buffers by page instead of by individual buffer
• Because buffers are used and returned in random order, the Analyzer never releases buffers from the initial allocation sets while performing buffer contraction, since they are in contiguous page storage.
• Under high network loads, tens of thousands of TCP connections come and go per minute. Changing the background task session cleanup frequency resolves large session pool allocation problems.
• Network activity always occurs in bursts, even under very high network loads. Frequent buffer pool contraction as a method of reducing overall storage usage actually causes high storage fragmentation. The Analyzer therefore reduces the buffer pool contraction frequency and contraction percentage, which avoids thrashing and helps to preserve steady-state buffer pool allocation sets.
• Most systems handle similar network loads every day. The Analyzer continuously maintains calculated steady-state buffer pool allocation characteristics and saves them across product restarts. This enables the Analyzer, based on actual runtime history, to initially allocate contiguous buffers in storage that closely match steady-state needs, thereby requiring little buffer pool expansion or contraction and avoiding storage fragmentation.

• On UNIX systems, captured frames are stored in capture buffers allocated per the network MTU size. However, the majority of packets are smaller than the maximum negotiated size. Allocating capture buffers based on actual frame size helps to reduce Analyzer storage requirements.

Throughput efficiency

The fixpack provides the following improvements in throughput efficiency:
• Implementation of dynamic and customizable packet capture on UNIX systems
• Increased network protocol processing parallelism through configurable protocol task thread pools
• Reduced response time data filter processing delays and KFC1 API transmission data formatting delays through a configurable output task thread pool
• Improved KFC1 communication I/O efficiency

New WRM Collector environment variable: SM3_LOG_SAMPLE_PERCENTAGE

This new environment variable, specified in the kflmenv file, allows the specified percentage of transactions to be logged and inserted into the WRM database. It can be used to control the volume of information recorded. The value is an integer percentage; the default is 100. For example:

SM3_LOG_SAMPLE_PERCENTAGE = 75

New WSA Engine features and enhancements

The following are new WSA Engine features and enhancements provided by the fixpack:

Shared filter enhancement

This performance enhancement reduces system resource utilization for both the Analyzer Agents and the WSA Engine and significantly decreases network activity between all Analyzer Agents and the WSA Engine. This is accomplished by sharing application segment data that is common to multiple WSA Engine transaction definitions. At WSA Engine startup, these common application segments are determined, and a single transaction filter is then defined for each common application segment to the appropriate Analyzer Agent. Each time the transaction filter data is sent to the WSA Engine, it is shared by all interested transaction definitions.

Filter data retention algorithm

This performance enhancement reduces the number of filter data answer areas that must be retained during Engine transaction profiling activity. This lowers storage utilization and decreases internal processing queue lengths.

New default value for TRANSACTION_AGE_INTERVAL_SECONDS parameter

Due to improved Analyzer Agent transaction filter data delivery, this existing global WSA Engine configuration file parameter should be set to 15. This decreases the number of filter data answer areas that are retained during Engine transaction profiling activity. The default value is now 15 seconds. For example:

TRANSACTION_AGE_INTERVAL_SECONDS = 15

New parameter: HTTP_TRANSACTIONS

This new WSA Engine configuration file parameter allows URLs for HTTP transactions to not be recorded, if so desired. It is specified at the individual transaction definition level. Indicate either Yes or No; the default is Yes, which records the URL for HTTP transactions. For example:

HTTP_TRANSACTIONS = No

New parameter: EXCLUDE_RESPONSE_TIME_RANGE

This new Engine configuration file parameter enables unwanted transactions, such as a long-running maintenance heartbeat, to be excluded from response time averages so that the averages are not skewed. It is specified at the individual TIER or OUTBOUND level within a transaction definition. There is no default for this parameter. A comma-separated minimum and maximum value in milliseconds must be specified. The following example ignores any transaction from 59 to 61 seconds (59000 to 61000 milliseconds), inclusive:

EXCLUDE_RESPONSE_TIME_RANGE = 59000, 61000


Chapter 4. Best Practices in High Load Environments

The Analyzer component captures and processes network data for monitoring application transactions. Depending upon your environment and your applications' characteristics, the Analyzer might process millions of network packets per minute, and under high network and transaction workload conditions it requires considerable amounts of system storage and CPU cycles. The work demand is beyond the Analyzer's control, and frequently it is also beyond your control or prediction. Furthermore, network activity always occurs in bursts. The Analyzer must be able to handle sudden bursts of transaction activity that are far above average steady-state expected volumes and throughputs.

The Analyzer accomplishes high load demand through a balanced approach of the following five strategies:
• Increasing process parallelism. See Increasing process parallelism.
• Optimizing storage utilization. See Optimizing storage utilization.
• Minimizing output delay. See Minimizing output delay.
• Improving I/O efficiency. See Improving I/O efficiency.
• CPU usage management. See CPU usage management.

In addition, this chapter describes best practices for configuring the WRM and WSA products in high load environments, which are made possible by the fixpack. See Best Practices in a High Load Environment for WRM and Best Practices in a High Load Environment for WSA.

Increasing process parallelism

Network communication takes place between two endpoint parties. The Analyzer captures network transmission data and groups it by logical communication connections, or sessions, per the communicating parties' IP addresses and port numbers. A session key is easily generated using addresses and ports. It enables the Analyzer to queue any captured network packet quickly to its session anchor through a hashing lookup algorithm. There are thousands of active sessions at any one time, and the Analyzer must process all of their network and application protocols in order to calculate application transaction response times. The Analyzer sets up a protocol task thread pool so that it can process sessions concurrently. Sessions are assigned to protocol tasks of the thread pool based on a least-busy formula. This is important because sessions correspond to application transactions, and they vary in duration and complexity. By using the least-busy formula instead of a simple round-robin approach, the Analyzer achieves balanced workloads among all protocol tasks in the thread pool, thereby avoiding abnormalities in session processing delays.

The following Analyzer parameter controls the protocol task thread pool:

KFC_MAX_PROTOCOL_POOL_SIZE: The total number of protocol tasks in the thread pool. The default is 20. You may need to increase the pool size per workload. Check the Analyzer active and maximum session buffer used operation metrics, and keep the average number of sessions per task below 1000 and the total pool size less than 100. For very high volumes, consult IBM Software Support.
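For example, to enlarge the protocol task thread pool, set the parameter in the Analyzer's kfcmenv environment variable file (the value 40 is illustrative; size the pool per the guidance above):

KFC_MAX_PROTOCOL_POOL_SIZE=40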

Optimizing storage utilization

Storage is critical to the Analyzer's operation and performance. Network volume is beyond the Analyzer's control, yet the Analyzer must capture and process packets regardless of circumstance. Every job in a system is given a set of virtual space for its program and data storage, which usually corresponds to the operating system's architectural addressability limits. In AIX, however, a process's data segment size is controlled by the LDR_CNTRL environment variable, and the default is only 256 MB. For high volume environments, higher process storage limits must therefore be set for the Analyzer to operate satisfactorily. This parameter must be set as an environment variable for the running process and cannot be preset in system configuration files. The table below describes the variable's settings and the corresponding process data segment limits. Because this is simply a virtual storage size imposed on the Analyzer, the maximum setting is recommended; an example follows the table.

Table 1. LDR_CNTRL Settings

LDR_CNTRL Setting              Additional Data Segments    Process Memory Limit
Unset (default)                0                           256 MB
LDR_CNTRL=MAXDATA=0x10000000   1                           512 MB
LDR_CNTRL=MAXDATA=0x20000000   2                           768 MB
LDR_CNTRL=MAXDATA=0x30000000   3                           1 GB
LDR_CNTRL=MAXDATA=0x40000000   4                           1.25 GB
LDR_CNTRL=MAXDATA=0x50000000   5                           1.5 GB
LDR_CNTRL=MAXDATA=0x60000000   6                           1.75 GB
LDR_CNTRL=MAXDATA=0x70000000   7                           2.0 GB
LDR_CNTRL=MAXDATA=0x80000000   8                           2.25 GB
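For example, to give the Analyzer the recommended maximum data segment size on AIX, set the variable in the environment of the Analyzer process before starting it (the start command below is a placeholder for your installation's actual Analyzer start script):

LDR_CNTRL=MAXDATA=0x80000000
export LDR_CNTRL
<Analyzer start script>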

Analyzer processing in degraded mode

Regardless of design precautions and care in implementation, the Analyzer can still run out of storage because of network volumes and limits on machine capabilities. In such a situation, the Analyzer continues processing in degraded mode through the following methods:
• Inhibiting new session monitoring while continuing to process existing sessions
• Reducing network packet capture rates
• Reducing transaction data content copy sizes
• Speeding up HTTP object merge processes and increasing object queue sizes
• Dropping response time data if an I/O buffer cannot be obtained

The Analyzer resumes normal operation as soon as storage constraints are relieved.

Parameters affecting Analyzer storage usage

The following parameters affect Analyzer storage usage; an example follows the table:

Table 2. Parameters Affecting Analyzer Storage Usage

KFC_BUFFER_POOL_STAT
    Specifies whether to periodically output buffer pool utilization statistics. The default is Yes.

KFC_BUFFER_STAT_INTERVAL
    The buffer pool utilization statistics output interval. The default is 300 seconds.

KFC_MAX_APPLICATION_DATA_SIZE
    CAUTION: Do not modify this parameter unless instructed to do so by IBM Software Support. The transaction reply content size copied for application protocol interpretation. The default size is 32768 bytes.

KFC_BUFFER_POOL_CONTRACTION_PERCENT
    CAUTION: Do not modify this parameter unless instructed to do so by IBM Software Support. The target buffer pool contraction percentage of the free buffers. For example, 80 means releasing 20% of unused buffers. The default is 90.
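For example, to sample buffer pool utilization statistics more frequently while diagnosing storage pressure (the 60-second interval is hypothetical; the settings use the Yes/No and seconds formats shown in Table 2):

KFC_BUFFER_POOL_STAT=Yes
KFC_BUFFER_STAT_INTERVAL=60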

Minimizing output delay

The WRM Collector and WSA Engine are Analyzer applications. They define monitoring criteria to the Analyzer through the KFC1 API, and the Analyzer creates a filter corresponding to each monitoring request. An application can define as many filters as needed, and different application filters may be identical or similar. The Analyzer processes captured data per the network and application protocols and creates transaction response time data records. Every transaction response time data record must be examined by every filter for applicability. Once a record passes a filter, its response time data attributes must be formatted into a transmission buffer to be delivered to the owning application. Running response time data records through all filter criteria and formatting all transaction attributes and timestamps into transmission buffers is evidently a time-consuming procedure.

WRM monitors individual Web page response times. From the monitoring perspective, it does not matter whether the Web page response time data is delivered to the WRM Collector a second or a minute after the Analyzer creates the record, as long as it is delivered and the response time is calculated correctly. However, WSA imposes additional time constraints on the Analyzer, because WSA correlates transaction response time data from several Analyzers, and all data must be available within a short time frame in order to be valid correlation candidates. Consequently, the Analyzer must deliver response time data results to the WSA Engine in near-real time.

Parameters affecting Analyzer output

The following parameters affect Analyzer output:

Table 3. Parameters Affecting Analyzer Output

KFC_MAX_OUTPUT_TASK_POOL_SIZE
    The output worker thread pool size. The default is 10. Check the Analyzer operation metrics Max RTDB Output Queue Delay and Max RTDB Output Transaction Delay first to see if the delay is greater than the WSA time frames. In general, no changes are necessary for arrival rates less than 40,000 per minute.

KFC_TCP_REPORT_HTTP_OBJECTS
    Specifies whether to output or discard HTTP embedded object response time data for a TCP filter type (WSA). The default is No.

KFC_HTTP_REPORT_OBJECTS
    Specifies whether to output or discard HTTP embedded object response time data for an HTTP filter type (WRM). The default is Yes.

KFC_HTTP_OUTPUT_UNMERGED_OBJECTS
    Specifies whether to output or discard unmerged HTTP page embedded objects. The default is No.

Improving I/O efficiency

Network I/O frequently contributes to program throughput bottlenecks. Under high load conditions, the report rate from the Analyzer to the WRM Collector and WSA Engine applications exceeds tens of thousands of transaction response time data records per minute. Any slight network issue might cause an immediate output queue buildup that leads to significant increases in storage requirements. Furthermore, unless input network volume decreases, the output throughput might not recover, resulting in serious performance problems.

Parameters affecting KFC1 API I/O

The following parameters affect KFC1 API I/O; an example follows the table:

Table 4. Parameters Affecting KFC1 API I/O

KFC_ENV_API_SPECIFY_NIC
    Specifies an alternate network interface for connecting the KFC1 API client to the Analyzer. If not specified, the first NIC is used. Specify the NIC using its assigned IP address in dotted decimal format.

KFC_TIME_SYNC_REQUIRED
    Specifies whether to enable or disable the timestamp clock synchronization procedure between the KFC1 API client and the Analyzer. WSA requests timestamp synchronization. The default is No.

KFC_TIMESYNC_THRESHOLD_PERCENT
    Sets the timestamp synchronization delta accuracy. The default is 25%. Higher accuracy requires more iterations and thus longer KFC1 client application startup times.

KFC_REPORT_TRANS_ARRIVAL_RATE
    Specifies whether to enable or disable outputting transaction response time data arrival counts per minute. The default is No.

KFC_API_MEDIASERVER_LISTEN_PORT
    Specifies the Analyzer application server listening port. The default is 12121.

KFC_API_CALLBACK_TABLE_LIMIT
    CAUTION: Do not modify this parameter unless instructed to do so by IBM Software Support. Specifies the maximum KFC1 API callback result table size. The default is 12,000. The result table is reused when this size is exceeded in order to support a high response time data arrival rate that could otherwise not be handled by the client application.
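For example, a KFC1 client application such as the WSA Engine might select a specific network interface and enable arrival rate reporting in its environment (the IP address is hypothetical; the values follow the formats described in Table 4):

KFC_ENV_API_SPECIFY_NIC=192.0.2.10
KFC_TIME_SYNC_REQUIRED=Yes
KFC_REPORT_TRANS_ARRIVAL_RATE=Yes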

CPU usage management

The Analyzer may require substantial CPU and storage resources in order to process high network transaction volumes. When the Analyzer requires more system resources due to workload, your applications, such as Web servers and databases, will also require more system resources, because they incur the same high volume stress. Consequently, the Analyzer could impact business processing for all programs competing for the same limited resources.

The Analyzer implements an internal CPU management facility. This capability allows you to specify the maximum CPU limit that the Analyzer is allowed to use for response time monitoring, regardless of workload. The Analyzer continuously monitors its CPU usage. The CPU usage is derived from the cumulative Analyzer CPU time, both kernel and user time, from the beginning of a management period to the end of that management period. The total CPU time is then normalized against the total processor time online in order to calculate the CPU usage percentage for the management period. When CPU usage remains above or below the target for consecutive management periods, the Analyzer adjusts its internal work management and dispatching heuristic rules, thus maintaining its CPU consumption below the allowed limit. Nevertheless, the Analyzer still attempts to process and deliver the best possible response time monitoring throughput within such resource constraints. For example, with the Analyzer's CPU limit set at 45% on a particular machine, the Analyzer successfully tracked its CPU usage in a target zone of 85% to 100% of 45% (38.25% to 45%) for a given workload volume of 50,000+ transactions per minute.

Parameters affecting CPU usage management

The following parameters affect CPU usage management; a configuration sketch follows the table:

Table 5. Parameters Affecting CPU Usage Management

KFC_CPU_ENFORCE_MAX_LIMIT
    Specifies whether to turn the CPU management feature on or off. The default is No (turned off), so that the Analyzer uses as much CPU as needed per workload.

KFC_CPU_TARGET_THRESHOLD
    Sets the maximum allowed Analyzer CPU usage limit as a percentage. The minimum is 10% and the maximum is 100%. The default is 40%.

KFC_CPU_MANAGE_PERIOD
    The CPU time calculation period. The default is 60 seconds. Settings of less than 30 seconds are ignored.

KFC_CPU_ACTION_INTERVAL
    The number of consecutive CPU usage management periods, either higher or lower than the target threshold, needed for the Analyzer to initiate adjustment actions. The default is 2.

KFC_CPU_STAT
    Specifies whether to periodically output CPU usage data. The default is No. This parameter is effective regardless of the KFC_CPU_ENFORCE_MAX_LIMIT setting.
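For example, to cap the Analyzer at the 45% limit used in the example above, the following could be set in the Analyzer's kfcmenv environment variable file (illustrative values; the value formats follow Table 5):

KFC_CPU_ENFORCE_MAX_LIMIT=Yes
KFC_CPU_TARGET_THRESHOLD=45
KFC_CPU_MANAGE_PERIOD=60
KFC_CPU_ACTION_INTERVAL=2
KFC_CPU_STAT=Yes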

Best Practices in a High Load Environment for WRM

In a high load or high transaction arrival rate environment for WRM, note the following considerations and suggestions:
• Consider reducing the number of recorded transactions by lowering the transaction sampling percentage. For example:

  SM3_LOG_SAMPLE_PERCENTAGE = 75

• Turn off HTTP object logging, and reduce the number of daily log files maintained to a minimum. This reduces WRM Collector CPU, file I/O, and disk space utilization, as well as Analyzer Agent CPU utilization. This is accomplished by modifying the following parameters in the WRM Collector kflmenv file. For example:

  SM3_LOG_KEEPDAYS = 2
  SM3_LOG_HTTPOBJ = N

  And the following parameter in the Analyzer Agent kfcmenv file. For example:

  KFC_HTTP_REPORT_OBJECTS=N

  Note: When HTTP object logging is turned off, supporting objects for a main page are not reported through the WRM Presentation Manager.

Best Practices in a High Load Environment for WSA

In a high load or high transaction arrival rate environment for WSA, note the following considerations and suggestions:
• Turn off detail and transaction logging, use summarization and alert logging, and reduce the number of daily log files maintained to a minimum. This significantly reduces CPU, file I/O, and disk space utilization. This is accomplished with the following parameters in the WSA Engine kfbseg.cfg configuration file. For example:

  DELETE_DAYS = 2
  LOG_DETAIL_RECORDS = No
  LOG_PATH_ALERT_RECORDS = Yes
  LOG_TRANSACTION_RECORDS = No
  LOG_SUMMARIZATION_RECORDS = Yes

• Ensure that alert thresholds are set accurately. This can be accomplished by running Learn Mode during a peak workload time so that average response times for the heaviest system loads can be observed.
• Lower TRANSACTION_HISTORY_CT to 50. This parameter specifies the number of client transactions, including their related subtransaction chains, to retain for viewing during a summarization interval. Lowering this value reduces storage usage.
• For transaction definitions that monitor HTTP activity, always use the Include/Exclude URL feature to limit the URLs that are recorded by the WSA Engine. This also has the effect of filtering out URLs that are not significant for monitoring. If URL recording is not desired, it can be turned off at the transaction definition level by specifying HTTP_TRANSACTIONS = No.

Note: Recording for HTTP objects is turned off by default.


Printed in USA