Whitepaper: ADF Performance Monitor. Measuring, analyzing, tuning, and checking the performance of Oracle ADF applications




AMIS, Edisonbaan 15, Postbus 24, 3430 AA Nieuwegein, T +31 (0)30 601 60 00, E info@amis.nl, I amis.nl
VAT number NL8117.70.400.B69, Chamber of Commerce (KvK) number 30114159, registered office in Enschede

Whitepaper: Measuring, analyzing, tuning, and checking the performance of Oracle ADF applications
Author: Frank Houweling, Java and Oracle ADF specialist

Executive Overview

The AMIS ADF Performance Monitor is specifically designed for measuring, analyzing, tuning, and checking the performance of Oracle ADF applications. It helps development teams, QA teams, and administrators detect, analyze, and resolve common and less common issues in the response times and resource usage of ADF applications. This document describes the architecture and implementation of the ADF Performance Monitor. For more information, please consult AMIS. Note: Appendix A: Quotes of ADF experts and managers contains quotes and experiences with the ADF Performance Monitor.

Introduction: Oracle ADF applications and performance

Good performance is key to the success of a web application, and Oracle ADF applications are no exception to this rule. Identifying and diagnosing bottlenecks in Oracle ADF applications can be time-intensive, costly, and quite a challenge. ADF is a powerful, advanced, and highly configurable framework that is performant and scalable if the ADF developer chooses the right combination of parameter settings. However, web applications in general and ADF applications in particular have many pitfalls that can be circumvented by choosing the correct performance configuration parameter settings. Unfortunately, in most cases the default values are not the most optimal ones. Frequently, even experienced ADF developers cannot pinpoint why an ADF application is slow. In such cases, information about what is happening behind the scenes would be very useful to get a better understanding of the ADF application. In JDeveloper, Oracle provides some support for performance analysis through the ODL Analyzer: diagnostic log files can be produced in the Oracle Diagnostic Logging (ODL) format. This tool can be useful in the development stage, but it cannot be used in test and production environments because of the amount of logging generated and the significant performance overhead.
Java APM tools like AppDynamics and New Relic are generally not aware of the intricacies of the ADF framework. They focus on generic, low-level JVM metrics or on high-level business transactions (like 'add to cart' or 'pay bill'), but miss the relevant details of the ADF framework's behavior. They can only distinguish business transactions by looking at the URL and its parameters. In ADF11g and ADF12c applications, however, the URL does not change during a session because AJAX is used to partially refresh pages, so these APM tools cannot distinguish business transactions. These generic APM tools also do not measure important key actions inside the ADF framework.

The ADF Performance Monitor detects and identifies the causes of performance problems in production, test, and development environments. The tool consists of a library that is attached to the ADF application and a separate dashboard reporting application. The library collects runtime performance metrics. During the development phase, issues are reported in JDeveloper's console logging. In a test or production environment, issues are reported in real time in the dashboard application, where they can be further analyzed (Figure 1).

Figure 1: Performance metrics are reported in JDeveloper during development (1). In a test or production environment, issues are reported in a separate dashboard application (2).

With the ADF Performance Monitor, development, QA, and operations teams can:
- Detect bottlenecks, inefficiencies, and typical bad practices during the whole lifecycle of an ADF application
- Gain visibility and insight into what is happening inside the ADF application and the ADF framework (what operations are executed, when, and how often); with this insight, better application-architecture design decisions can be made
- Use the monitor as an analysis tool for tuning (it gives insight into the effect of configuration parameter settings), especially during load tests
- Make the ADF application more scalable (optimal utilization of infrastructure, hardware, and licenses)
- Get reports of error messages, their type, severity, and their stack traces (both technical and functional)

Use of the ADF Performance Monitor in JDeveloper

The metrics of the ADF Performance Monitor can be used during development. This enables developers to diagnose and solve performance problems at an early stage and to build an efficient, responsive ADF application before it goes into production. In JDeveloper, ADF developers can switch the metrics logging on and off with Oracle's standard ADFLogger. For every HTTP request, a call stack is printed. A call stack provides visibility into which ADF methods caused other methods to be executed, organized by the sequence of their execution. A complete breakdown of the ADF request processing, including all ADF method executions along with elapsed times, is printed, organized by lifecycle phase. Inefficiencies can be identified from this report, like a long-running ViewObject query of 31858 milliseconds (Figure 2):

Figure 2: ADF metrics are printed in the JDeveloper WebLogic Server log (can be switched on and off). Shown is an example of a long-running ViewObject query of 31858 milliseconds (usage name HRService.EmployeesView1).
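Toggling an ADFLogger-based logger is typically done in the ODL configuration file (logging.xml) of the WebLogic server, or via the Enterprise Manager. The fragment below is a sketch of this mechanism; the logger name is purely hypothetical, not the monitor's actual logger:

```xml
<!-- logging.xml fragment (ODL configuration); the logger name is hypothetical -->
<logger name="com.example.adfmonitor.metrics" level="FINE" useParentHandlers="false">
  <handler name="console-handler"/>
</logger>
```

Setting the level to FINE would switch the metrics output on; raising it to WARNING or higher would switch it off again without a redeployment.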

Dashboard: 24/7 overview

The dashboard (Figure 3) gives a 24/7 overview of the performance of the application in a selected time range. It opens with a 24-hour overview for the current date. It shows real-time critical information about the ADF application's performance (are response times within or outside SLA boundaries? What is the error rate? What is the health of the JVM? Is immediate action required?). In the toolbar, a time range can be chosen and reports of the worst performing ADF BC, BindingContainer, and webservice executions can be opened. The dashboard is separated into four regions:
1. Summary of HTTP response times (top left)
2. Details of HTTP response times (top right)
3. Errors (bottom left)
4. JVM performance metrics (bottom right)

Figure 3: Performance overview in a selected time range (month, week, day, hour, 5 minutes)

Common functionality

Users can select a time range (Figure 4) from a 5-minute level up to a month. Users can drill down and roll up to any time range (5 minutes, hour, day, week, month), and navigate to the next or previous time range.

Figure 4: Drill down, roll up, or navigate to the next or previous time range. For example, from day to hour and from an hour to a 5-minute time range, or to the next or previous day.

Summary of HTTP response times

In the top left section of the dashboard, a summary of HTTP request response times is shown for the selected time range. It shows three categories (Figure 5):
- Sunny (normal, well within the SLA range)
- Rainy (slow, somewhat outside the specified SLA response time)
- Thunder (very slow, well outside the SLA boundaries, considered seriously problematic)

Figure 5: Summary of HTTP response times. In this case, requests with a response time of more than 5 seconds are considered very slow (configurable).

Depending on the performance targets or SLA, QA and operations teams can configure how the monitor should interpret HTTP request response times. For example, response times over 5 seconds can be considered very slow for one application, while other applications already regard 2 seconds as very slow. Users can click on the number of HTTP requests to drill down to request snapshots for further analysis.

Details of HTTP response times

In the top right section, the HTTP response times can be analyzed over the selected time range (Figure 6). This graph makes visible when the load is high (and how high), how the response times are distributed over the categories (very slow, slow, and normal), and when there are performance problems (more red and yellow colored parts of the bars).

Figure 6: HTTP response times for each hour during a day. Visible is that from 15:00 to 16:00 the response times have increased significantly (more red and yellow colored parts in the 15:00 bar). This should be a trigger to drill down to this hour for further analysis in order to find the root cause.

Errors

On the bottom left, the number of errors over the selected time range (Figure 7) is shown. Insight into the number, type, and severity of errors that happen in a test or production environment is crucial to resolve them and to make a stable ADF application that is less error-prone. Application errors (and their stack traces) are often hard to retrieve or take a lot of time to find. Project teams commonly depend on the reports of end users and testers, who typically do not report all errors or provide insufficient information about them. Operational teams do not always have the time to monitor for errors in the WebLogic console or the Enterprise Manager, or to wade through large log files to manually piece together what happened. To address this, the ADF Performance Monitor collects the errors. Development, QA, and operational teams can drill down to the error messages, their type, severity, and their stack traces to quickly troubleshoot errors and make the application more stable. The paragraph Error Stacktraces describes drilling down to stacktraces.

Figure 7: Number of errors. In this case, 132 errors have been collected during the selected time range. Developers can drill down to these errors, their type, severity, and messages. The stacktraces and call stacks can also be viewed to analyze the root cause of the errors.

JVM performance

An important aspect of performance management is a healthy JVM. The ADF Performance Monitor shows real-time and historic heap usage and garbage collection times (Figure 8). If garbage collections run longer than a configurable threshold (for example 20 seconds), a warning sign is shown. This is often an indication of a problem, such as a freeze of all current requests because the JVM cannot clear enough memory.

Figure 8: Example of very long running garbage collections (red line). The heap space (blue) over time evolves more into a horizontal line than the saw-tooth shaped line that characterizes a healthy JVM. In this case an out-of-memory error occurred and the server needed to be restarted. This should be a trigger to investigate whether the JVM heap space is set too low or the ADF application overconsumes memory. The request call stacks should be analyzed in order to find the root cause of the memory overconsumption. This typically happens when the ADF application fetches too many (database) rows with too many attributes.

Like other web applications, ADF applications can potentially use a lot of memory. Many times, the root cause of high memory usage is that application data retrieved from the database into memory is not properly limited; too many rows with too many attributes are fetched and held in memory. To make matters worse, these rows and their attributes are frequently retained in the session for an unnecessarily long period of time. The solution to this problem is to reduce the size of sessions by decreasing the amount of data loaded and held in the session. The ADF Performance Monitor detects how many database rows are fetched by ADF ViewObjects. Loading too many database rows can lead to memory overconsumption. When a ViewObject loads more than 1000 rows (configurable) into memory, the monitor shows a warning sign in the call stacks. It suggests solutions like using ViewObject range paging or setting an appropriate (maximum) fetchsize on the ViewObject.
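The fetch limits mentioned above correspond to the standard tuning settings of an ADF BC ViewObject, stored in its XML definition. The fragment below is a sketch only: the ViewObject name is illustrative and the values would need to be validated per use case:

```xml
<!-- ViewObject XML fragment (illustrative): fetch rows in batches of 25 and
     never load more than 500 rows into memory -->
<ViewObject
  Name="EmployeesView"
  FetchMode="FETCH_AS_NEEDED"
  FetchSize="25"
  MaxFetchSize="500"/>
```

A cap like MaxFetchSize bounds the worst-case memory cost of a query, while a modest FetchSize keeps each database round trip small.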

HTTP request details and ADF call stack

The monitor retrieves all HTTP requests down to the smallest details for further analysis:
- HTTP request process times
- Time spent in the database (SQL and PL/SQL execution) and in invoked webservices
- User IDs
- Buttons/links clicked on
- Timestamps of requests
- Application server URIs
- Session details, with a link to zoom in on all HTTP requests of the session

A call stack (Figure 9) gives visibility into which ADF method caused other methods to execute, organized by the sequence of their execution. This is the same call stack that is printed in JDeveloper during development. A complete breakdown of the HTTP request is shown by actions in the ADF framework (Fusion lifecycle phases, model (BindingContainer) and ADF BC executions, start and end of taskflows, etc.), with elapsed times and a view of what happened when. Appendix B: What is measured has more details on which actions in the ADF framework are collected (and shown in the call stacks). The parts of the ADF request that consume a lot of time are highlighted and indicated with an alert signal.

Figure 9: A call stack. In this case, the bottleneck is a slow ViewObject query (HRService.EmployeesView1) of 25611 milliseconds. When an execution is clicked, a popup shows detail information like the ViewObject name, current taskflow, bind variable values, number of rows fetched from the database, ADF object names, etc.

Warnings and suggested solutions

During the whole application lifecycle, QA teams can make sure problems are found and dealt with. They can routinely check that the application meets the desired quality. In the call stacks, clickable warning signs (Figure 10) are shown for slow executions and detected inefficiencies. These give quick help with suggested solution(s) and a link to more detailed help.

Figure 10: A call stack reporting that thousands of rows are being fetched from the database into memory. Clicking the warning image shows a quick-help popup with a suggested solution. More detailed help is also available.

Warnings and suggested solutions are shown, for example, in the following cases:
- Slow executions of:
  - ViewObject queries
  - EntityObject DML operations
  - ApplicationModule activations, passivations, and transactions
  - PL/SQL calls
  - BindingContainer operations and iterator executions
  - Java methods
  - Webservice calls
- Slow passivation and activation of transient ViewObject attributes
- Fetching a very high number (configurable) of rows from the database into Java memory
- Multiple redundant ViewObject query executions during the same HTTP request
- Inefficient ViewObject fetchsize
- Inefficient BindingContainer iterator rangesize

Worst performing executions in the ADF BC and model layer

In various overviews, the worst performing parts of the ADF BC and model layer are indicated. They are accessible by clicking the links in the toolbar (Figure 11). In these overviews, bottlenecks can be found, and the development team can take action to improve the application. There are overviews for the worst performing executions of:
- ADF Business Components
- (PageDefinition) BindingContainer executions
- Webservices (calls to JAX-WS webservices)
- Java methods instrumented by the ADF developer

Figure 11: Links to various overviews (ADF BC database executions, BindingContainer executions, etc.) and the number of very slow and slow executions in each category.

Worst ADF BC executions

Figure 12 is an example of an ADF BC overview. It shows very slow, slow, and normal ADF BC executions:
- ApplicationModule pooling passivations/activations
- ApplicationModule transactions
- ViewObject queries
- EntityObject DML operations
- PL/SQL procedure/function calls executed from ApplicationModules

Figure 12: Overview of ADF Business Components by average execution time. First a marker (red, orange, green) is shown (configurable), then the type, the name and method (ViewObject usage name and method, ApplicationModule usage name and method, etc.), the average execution time, the number of occurrences, and the total time. The occurrences and their call stacks can be further analyzed in a popup window.

ApplicationModule pooling analysis

It is important to choose the right combination of ApplicationModule pooling parameter settings to make the application scalable. Not setting these parameters correctly (and leaving them at the default values) can be very inefficient and may cause many unneeded passivations and activations. In the ADF BC overview (filtered on ApplicationModule pooling), the ApplicationModule pooling performance can be analyzed. This overview gives insight into the effect of these parameter settings (how often passivations and activations happen, and how long their average and total execution times are). QA teams can research and experiment with these parameter settings on test environments during load tests and evaluate the performance results in the monitor. They can determine the most optimal ApplicationModule pooling parameter settings for their situation. The monitor also detects for which ViewObjects data is being activated/passivated. The root cause of long-running activations/passivations is often unnecessarily passivated calculated and transient attributes in combination with an unnecessarily high number of fetched database rows. See Figure 13 for an example. After detection, this inefficiency can be addressed.

Figure 13: Long-running ApplicationModule activation. The root cause is often unnecessarily passivated calculated and transient attributes in combination with an unnecessarily high number of fetched database rows.
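The pooling parameters referred to above are the standard ADF BC pooling properties, configured per ApplicationModule configuration in bc4j.xcfg (or as system properties). The values below are illustrative starting points only; the point of the monitor is precisely to validate such values under load:

```xml
<!-- bc4j.xcfg fragment (illustrative values, to be validated in load tests) -->
<AM-Pooling jbo.ampool.initpoolsize="10"
            jbo.ampool.minavailablesize="10"
            jbo.ampool.maxavailablesize="120"
            jbo.recyclethreshold="60"
            jbo.ampool.timetolive="3600000"/>
```

For example, if maxavailablesize is lower than the number of concurrent users under load, ApplicationModules are recycled between requests and the monitor will show the resulting passivation/activation overhead.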

Worst BindingContainer executions

Figure 14 is an example of a (PageDefinition) BindingContainer overview. It shows very slow, slow, and normal BindingContainer executions:
- Operations (all standard ADF operations, and client methods on ApplicationModules and ViewObjects)
- Iterators from the executable section

Figure 14: Overview of BindingContainer executions by average execution time. The type and name of the BindingContainer execution are shown (operation name, iterator execution method and name).

Total execution time of worst performing executions

The performance impact of frequently invoked executions can be much higher than that of executions that occur only a few times but are very slow on average. For example, a 'slow' ViewObject query with an average time of 1500 milliseconds that is executed 10,000 times a day (over four hours of accumulated time) has more impact than a 'very slow' ViewObject query with an average time of 15 seconds that is executed only twice a day (30 seconds in total). There are additional overviews sorted by total execution time.

Ignore well-known long running executions

Not all executions can be as fast as we would like them to be. For example, some SQL queries simply need time to execute because they perform a lot of processing. The ADF Performance Monitor has a feature to filter these identified executions from the overviews. A threshold can be set to ignore or show an execution.

Error Stacktraces

The ADF Performance Monitor logs all errors with all relevant detail information. This information helps to discover, analyze, reproduce, and resolve errors that happen frequently in test and production environments. Detailed information like the exception message, timestamp, user ID, button/link clicked on, application server URI, and session is important to get insight into the errors and to resolve them (Figure 15). Additionally, the ADF developer can view the error stacktrace and call stack, allowing the developer to quickly troubleshoot these errors.

Figure 15: Overview of errors. At the top, a pie graph shows the types of the top 4 errors to give insight into how often they occur. In the table, the exception message, timestamp, user ID, button/link clicked on, and application server URI are visible. Additionally, the developer can view the stacktrace (and call stack) of the error.

Product architecture

How it works

The product consists of an ADF library and a dashboard reporting application. The ADF library (attached to your application) collects runtime behavior and performance metrics. The metrics are saved in a separate database schema. The dashboard reporting application retrieves the metrics from the same database (Figure 16).

Figure 16: Architectural overview of the ADF Performance Monitor

1. Metrics collection. Raw metrics data is collected from every HTTP request by the attached ADF library. When the HTTP request is finished, the metrics data is buffered.
2. Metrics logging. A background job saves the metrics into an Oracle database. This is a separate database, different from your application's database, and preferably on a different machine so as not to impact the machine resources of the application database.
3. Dashboard reporting application. The dashboard reporting application is a web-based ADF application that can be accessed from any supported browser. It is deployed on a separate WebLogic managed server and communicates with the metrics database.

Monitored events

The monitor collects performance information from specific ADF objects. Appendix B: What is measured has more details on what is monitored.

Overhead

Load tests show that the overhead is less than 4 percent. This overhead is caused by the very detailed collection of performance metrics. The metrics collection can be dynamically turned on and off (at runtime) at all times. When the monitor is turned off, there is no performance overhead because the metrics classes are not active.

Clear error messages

The ADF Performance Monitor logs clear error messages in the Enterprise Manager in case resources needed for the monitor are not available (for example, if the metrics database is offline) or in case an error occurs. This will not cause any application error in the ADF application.
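The collect-buffer-flush pipeline described in steps 1 and 2 can be sketched in plain Java. This is an illustrative sketch only, not the product's actual implementation; the class and method names are invented, and a real implementation would batch-insert into the metrics database instead of an in-memory list:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: per-request metrics are buffered in memory and flushed
// to storage by a background job, so the request thread never waits on the
// metrics database.
public class MetricsBuffer {
    private final BlockingQueue<String> buffer = new LinkedBlockingQueue<>();
    private final List<String> store = new ArrayList<>(); // stand-in for the metrics DB

    // Called at the end of each HTTP request: cheap and non-blocking.
    public void record(String metric) {
        buffer.offer(metric);
    }

    // Background job: drains the buffer and persists the metrics in one batch.
    public int flush() {
        List<String> batch = new ArrayList<>();
        buffer.drainTo(batch);
        store.addAll(batch); // a real implementation would batch-insert into the DB
        return batch.size();
    }

    public int stored() {
        return store.size();
    }

    public static void main(String[] args) {
        MetricsBuffer mb = new MetricsBuffer();
        mb.record("/app/page1 412ms");
        mb.record("/app/page2 98ms");
        System.out.println("flushed=" + mb.flush()); // prints flushed=2
        System.out.println("stored=" + mb.stored()); // prints stored=2
    }
}
```

Because the request thread only enqueues, the cost added to each HTTP request stays minimal, and a slow or unavailable metrics database delays only the background flush.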

Configuration

Prerequisites

The ADF Performance Monitor can be implemented on all ADF11g and ADF12c applications deployed on a WebLogic server. Other application servers like Glassfish are not (yet) supported. Any Oracle11g (XE) database can be used for the metrics database. It needs to be an Oracle database because of the complex PL/SQL units the monitor uses; the Oracle XE database is a free edition. If ADF BC is used, the use of ADF BC framework extension classes is required.

Configuration

An ADF library needs to be added to your ADF application, and a very small amount of code needs to be added to instrument the application. The metrics classes make use of extension points in the ADF framework in order to measure the time every action/step in an HTTP request takes. On a test or production environment, a JDBC datasource must be configured on the managed server that connects to the metrics database. The dashboard application EAR file must be deployed on a separate WebLogic server instance so as not to impact WebLogic resources; this EAR can also be run locally in JDeveloper. For the dashboard application, other application servers like Glassfish are not (yet) supported because it uses ADF security.

Custom instrumentation

It is possible to instrument (calls to) custom Java methods and third-party libraries. Custom methods are visible in the call stacks and in the overview of worst performing Java method executions.

Turn on/off at all times

The ADF Performance Monitor makes use of the standard Oracle ADFLogger and can be turned on and off at runtime, at all times, in the Enterprise Manager.

Installation guide

All configuration steps are well documented in the installation guide. The implementation takes less than one day, usually no more than a few hours. A user manual is available from within the dashboard application.

Training

AMIS can provide training on the ADF Performance Monitor, ADF performance tuning, and ADF best practices (on location or online via webcast).
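Instrumenting a custom Java method generally amounts to timing the call and reporting the elapsed time under a given name. The sketch below shows only this general pattern; it is not the monitor's actual instrumentation API, and all names in it are invented:

```java
import java.util.function.Supplier;

// Generic sketch of method-level timing instrumentation (not the monitor's API).
public class Timed {
    // Runs the given work, measures its elapsed time, and reports it.
    public static <T> T call(String name, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // A real monitor would record this metric; here we just print it.
            System.out.println(name + " took " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) {
        int result = call("expensiveCalculation", () -> 6 * 7);
        System.out.println("result=" + result); // prints result=42
    }
}
```

Wrapping a call this way leaves the business logic untouched, which is why instrumented methods can show up in the call stacks alongside the framework's own measurements.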

Conclusion

The ADF Performance Monitor is a tool for measuring, analyzing, tuning, and checking the performance of Oracle ADF applications. Throughout the whole application lifecycle, the development, QA, and operations teams get insight into what is happening inside their ADF application. With this insight, ADF developers can diagnose and solve performance problems at an early stage, make better (architecture) decisions, and build more responsive and scalable ADF applications. With the warnings and suggested solutions, they can circumvent common and uncommon performance problems, use best practices, and deliver higher quality. This will lead to an application that is more consistent and easier to maintain. They can reduce the utilization of infrastructure, hardware, and licenses, and end users will be happier. Because of the low performance overhead, it is safe to use the ADF Performance Monitor in production environments. The monitor has proven to be very useful in many ADF projects; see Appendix A: Quotes of ADF experts and managers.

Resources

More information is available on www.amis.nl/adfperformancemonitor, including several demo videos.

Appendix A: Quotes of ADF experts and managers

Henri Peters - Project Manager, Spir-it (Dutch Ministry of Justice)
5000+ end users, 15+ developers, 4 WLS nodes in production
System: the primary information system for the Dutch judiciary is being replaced with a new system based on Oracle technology. Very large ADF application; 1000+ ViewObjects, 15+ ApplicationModules.

"The ADF Performance Monitor is interesting for us during application development. But for us it is even more interesting during the implementation/production phase. We use it as a real-time dashboard that shows how much load the application puts on our system. We can even drill down to the level of an individual user. Sometimes we go so far that we proactively approach end users: before they complain, we have already contacted them. For Spir-it it is of great value that we can continuously identify the weakest link, even if there is no actual performance problem (yet)."

The AMIS ADF Performance Monitor does not only affect systems. Henri Peters: "Users are right to be concerned about the introduction of new information systems. When we showed the real-time performance dashboard of the monitor, they understood that we cannot always avoid performance problems, but are able to resolve problems quickly when they arise. This has resulted in a lot of support from customers that had to implement the new ADF application."

Steven Janssens - Software Development Manager, AXI (Belgium)
1000+ end users, 3 developers, SAAS model
System: an advanced document management system extendable with customer-specific functionality. Originally built in ADF10g, redesigned and rebuilt in ADF11g.

"As an ADF developer with more than 5 years of experience, and more than 5 years of experience with ADF Business Components, there are still times, quite frequently, when you cannot pinpoint why an application is not working properly or is slow. At such times you would like to know precisely what is happening behind the scenes in order to get insight into the ADF internals. The ADF Performance Monitor is the instrument of choice.
Now I use the ADF Performance Monitor daily as an additional quality control during development. Also, when we are performing load tests, we use the monitor to identify any performance problems."

Christiaan Braad - Software Developer, Impulse Info Systems
5000+ end-users, 15+ developers, SAAS model
System: healthcare information system for therapists. Very large ADF application; 1000+ ViewObjects, 15+ ApplicationModules.

"As a software developer, the ADF Performance Monitor helps me discover performance bottlenecks at an early stage. During the development phase I get an immediate analysis in JDeveloper's console log. With this information I can quickly resolve performance issues. Additionally, the monitor helps us with the analysis of our load and stress tests. With the monitor we can drill down to the parts of the ADF application that cause performance problems. This saves us time analyzing performance problems."

Matthieu de Graaf - Architect, Pagoni
100+ end-users, 6 developers, SAAS model, 1 WLS node at customer location in production
System: custom application for the planning of real-estate development for large governments in The Netherlands. Originally built in ADF10g, redesigned and rebuilt in ADF11g. Large ADF application; 500+ ViewObjects, 25+ ApplicationModules.

"While developing our application, we used the ADF Performance Monitor from the start. In the ADF framework, it is not always clear exactly what actions are performed during a request. The ADF Performance Monitor gives us excellent insight. Already during the development of our application it shows us performance bottlenecks that we can resolve immediately. Additionally, we use the monitor during the test phase. On a regular basis, we watch the ADF dashboards that highlight performance issues which we had not discovered during the development phase. The monitor also logs all errors that occur during the test phase, even those that are not reported by our testers. The ADF Performance Monitor is a very valuable tool for creating a high-quality, well-performing ADF application."
Paul Swiggers - Principal Consultant, AMIS
5000+ end-users, 15 developers, 4 WLS nodes in production
System: the primary information system for the Dutch judiciary, being replaced with a new system based on Oracle technology.

"For four months now, we have been using the ADF Performance Monitor within a large-scale project. The monitor is useful during the whole lifecycle, from the development phase to the production/operation phase. During the development phase, the monitor gives direct insight into potential performance risks in JDeveloper's console. The number of performance issues has decreased immediately since we started using the monitor. During the operational phase, we use the monitor in real time for monitoring and supporting production issues. Each request can be historically retrieved and analyzed, down to the smallest detail. The dashboard gives a good real-time overview of the performance of the application. The overviews of the worst-performing parts are useful; the development team can take immediate

action to improve the application performance continuously. The ADF Performance Monitor is a valuable addition to our project."

Software Developer, Tax Office, City of Amsterdam
50+ end-users, 3 developers, 1 WLS node in production
System: administrative application supporting tax inspectors.

"We had urgent performance problems: a few weeks before our application went into production, the response time of a few important screens was more than 10 seconds, while it should have been less than 3 seconds. We couldn't pinpoint the exact location that caused the performance problems. With the ADF Performance Monitor we were able to detect the bottlenecks, and we got insight into our application. With this information we tuned inefficient database queries and changed the structure of the ADF application. We solved our performance problems and ended up with response times of less than 2 seconds."

Appendix B: What is measured

Disclaimer

The information in this whitepaper is intended for information purposes only and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality of this product remain at the sole discretion of AMIS.