White Paper 6 Steps to Enhance Performance of Critical Systems




Despite heavy investment by enterprise IT departments in dynamic testing tools to verify and validate application performance and scalability before releasing business applications into production, performance issues and response time latency continue to negatively impact the business. By supplementing dynamic performance testing with automated structural quality analysis, development teams can detect, diagnose, and analyze performance and scalability issues more effectively. This white paper presents a six-step Performance Modeling Process that uses automated structural quality analysis to identify these potential performance issues earlier in the development lifecycle.

Contents
I. Introduction
II. Approach: Structural Quality Analysis of Source Code to Tackle Performance Issues
III. Case Studies: Solving Performance Issues with Structural Quality Analysis
IV. The Requirements of a Solid Structural Quality Analysis Platform
V. Conclusion

I. Introduction

Despite heavy investment by enterprise IT departments in dynamic testing tools to verify and validate application performance and scalability before releasing business applications into production, performance issues and response time latency continue to negatively impact the business. Application Development and Maintenance (ADM) teams often spot performance issues in mission-critical applications during the dynamic or live testing phase, when an application is almost complete and theoretically ready for production. By the time they discover these performance issues, it is too late to make the design or architectural changes needed to address them without business disruption or costly additional development cycles, resulting in significant delays and/or business losses.

System-level structural quality analysis provides the ability to detect, diagnose, and analyze performance and scalability issues. While performance checks are still widely seen as the domain of dynamic testing ("It does not make sense to solve performance problems by diving into the code"), automated solutions that detect performance and scalability issues through structural quality analysis of the source code are emerging. By supplementing dynamic performance testing with automated structural quality analysis, development teams get early and important information that might be missed with a purely dynamic approach, such as inefficient loops or SQL queries. The combined approach results in better detection of latent performance issues within application software.
This white paper presents a six-step Performance Modeling Process that uses automated structural quality analysis to identify these potential performance issues earlier in the development lifecycle. The paper also presents case studies that illustrate the proposed modeling process at work.

II. Approach: Structural Quality Analysis of Source Code to Tackle Performance Issues

Once upon a time, a seasoned software professional building advanced military systems used to tell young developers this rather provocative saying: "You should not optimize, but rather pessimize." The developers would laugh at him before finally understanding his advice. He meant: do not try to write a sophisticated and difficult algorithm or query; rather, create a simple and functional one first, then optimize those that really need to perform at very high speed.

His provocative advice should not lead developers to write quick-and-dirty routines every time, only to optimize as an afterthought. Rather, it should inspire them to think about performance from the outset. Just like security, application performance should be taken seriously from the beginning of the development lifecycle. To achieve a performance perspective from the start of development, we propose a six-step Performance Modeling Process that is in use today by many professional developers and advanced ADM teams.

Performance Modeling Process
1. Identify high-level use cases: Focus on areas of high value such as key functionalities and components (the performance sweet spot).
2. Run dynamic tests on transactions: Use different sets/ranges/sizes of input data, and capture results/values. Some of these tests should be intended to fail, in order to expose performance issues.
3. Identify transactions with poor performance: Include use cases where a performance hit or degradation is most pronounced.
4. Analyze the application source code: Use structural quality analysis on the poorly performing transactions identified in step 3. (This step has traditionally been performed manually with a high rate of mistakes; however, there are now tools that automate it, so error rates are virtually non-existent. We will discuss this more later.) Experience shows that most defects are due to violations of coding standards, standard requirements, or best practices, which dynamic testing alone cannot identify.
5. Identify violations of best practices: Determine violations of performance best practices and performance coding standards (e.g., memory leaks, resource leaks, poorly written SQL queries). Check compliance with set baseline requirements or industry standard requirements (e.g., performance standard requirements).
6. Fix the violations and re-test/repeat: Run dynamic tests on the modified transactions again, and re-examine the updated source code.

When a structural quality analysis platform is combined with dynamic testing tools in an ongoing process during development, ADM teams can identify and eliminate performance issues before they reach production, with a high level of confidence and efficiency. Following are two real-world examples where application development teams used this combined approach to fix or prevent performance issues before they occurred in production.
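At their core, steps 3 through 5 amount to joining dynamic measurements with static findings and fixing the overlap first. The sketch below illustrates that idea only; the transaction names, timings, violation labels, and the 2-second threshold are invented for illustration and do not come from the paper:

```python
# Illustrative sketch of steps 3-5 of the Performance Modeling Process.
# All data below is hypothetical example input, not tool output.
SLOW_MS = 2000  # assumed latency threshold marking a transaction as "poor"

dynamic_results = {            # step 2: measured response times in ms
    "searchFlights": 4800,
    "bookHotel": 350,
    "rentCar": 5200,
}
static_violations = {          # steps 4-5: structural violations found
    "searchFlights": ["unchecked FOR ALL ENTRIES", "SELECT inside loop"],
    "rentCar": ["missing index on join column"],
    "bookHotel": [],
}

def prioritize(dynamic, static, threshold=SLOW_MS):
    """Keep transactions that are both slow (dynamic evidence) and carry
    structural violations (static evidence), worst latency first."""
    slow = {t: ms for t, ms in dynamic.items() if ms >= threshold}
    return sorted(
        ((t, ms, static.get(t, [])) for t, ms in slow.items() if static.get(t)),
        key=lambda item: item[1],
        reverse=True,
    )

for name, ms, violations in prioritize(dynamic_results, static_violations):
    print(f"{name}: {ms} ms -> fix: {', '.join(violations)}")
```

The join is the point: a violation in a fast transaction can wait, and a slow transaction with no structural finding needs a different diagnostic, so only the intersection goes to step 6.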

III. Case Studies: Solving Performance Issues with Structural Quality Analysis

Case 1 - UPDATE Trigger Caused Major Troubles at a Global Travel Company

At a global travel company, different travel providers and travel agencies make reservations using a legacy system. Because the cost of using mainframe applications to manage the entire transaction rises with its duration, the company decided to rebuild all the reservation selection routines for flights, hotels, and cars in Java EE. When the customer was ready to buy, the Java EE application would call the mainframe to finalize the transaction. Development and testing went well, but after the system went into production, the application had significant latency issues that resulted in lost revenue, since many customers abandoned their transactions during processing. The ADM team was forced to revert to the legacy application and investigate the new Java EE application. The architects had designed the Java EE application using Hibernate, Spring, and Spring MVC, deployed on a Java EE 5 application server. The team used the database as-is because it belonged to the legacy mainframe system and could not be changed. The team chose this architecture because it used well-known frameworks with large communities and permitted the use of POJOs (Plain Old Java Objects) to develop the application. In addition, Hibernate had features to adapt to a specific legacy database, which would facilitate future enhancements of the application. Furthermore, with this new Java EE application, the company estimated a 30% reduction in the operational cost of the mainframe system.
After testing the Java EE application and releasing it into production, the team noticed a performance issue when a certain volume of transactions occurred at one time (around 26 transactions per second). Several days were spent setting up an environment similar to production to simulate the transaction activity. The team determined that it needed a structural quality analysis solution capable of analyzing different technologies such as Java, XML, and SQL, and of understanding how each technology is integrated through a framework such as Hibernate, to help focus the investigation. To reproduce the issue, the team simulated the number of transactions that were causing the performance problems to see what was happening in the application and on the database. They saw abnormal activity on the database due to an ON UPDATE trigger that fired too frequently, which the architects had kept in the database for use by other legacy applications. By turning on the Hibernate show SQL property to see what was happening, the team observed that the trigger was firing even if the data had not changed. The error was due to a specific Hibernate parameter: select-before-update, which was set to false on the entity. When set to false, Hibernate issues the UPDATE systematically. See Figure 1.

Figure 1 - UPDATE Trigger Firing

To fix the issue, the team simply needed to set select-before-update to true, so that Hibernate selects the data from the table, compares it, and performs an update only if the data are different. The cost of this issue was estimated at about $400,000, which included the sum of the transactions the company lost during the time period plus the number of man-days spent investigating and fixing the issue. This does not include damage to the company's reputation or other soft costs. Using an automated structural quality analysis solution was instrumental in solving the problem in this complex environment. In this example, using a powerful Java EE framework like Hibernate to manage the complexity of database transactions can be a great enabler; however, it requires keen architectural skills to understand the ramifications on the back end. Similarly, it can be difficult to test all the scenarios that might happen in reality. Structural quality analysis filled in these critical knowledge gaps and provided speedy resolution of the issue.
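For reference, the fix described above corresponds to a single attribute in the Hibernate mapping. In XML mapping form it looks roughly like the sketch below; the entity and column names are hypothetical placeholders, not details from the case study:

```xml
<!-- Hibernate hbm.xml mapping (illustrative sketch).
     select-before-update defaults to false; setting it to true makes
     Hibernate SELECT the row and compare it first, issuing an UPDATE
     (and thus firing the ON UPDATE trigger) only when the data
     actually changed. -->
<hibernate-mapping>
  <class name="com.example.Reservation" table="RESERVATION"
         select-before-update="true">
    <id name="id" column="RES_ID"/>
    <property name="status" column="STATUS"/>
  </class>
</hibernate-mapping>
```

With annotation-based mappings, Hibernate exposes the same behavior through its @SelectBeforeUpdate entity annotation. Note the trade-off: the extra SELECT per update is usually far cheaper than an unnecessary trigger firing, but it is not free.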

Case 2 - Critical SAP Transaction Suffers Huge Performance Hit

At a global chemical company, an enterprise-level implementation of SAP ERP software manages the company's core business processes. The ADM team manages the system in a centralized, company-managed technical center, while end users access the SAP applications around the world. To better meet the requirements of specific departments, multiple ADM teams drive custom development to adapt standard applications, as well as create new ones that extend functionality. Some of the applications, not all of which are defined as mission-critical, have several thousand users and handle huge volumes of data daily. In some cases the amount of information is consistently large; in others the volume of information grows quickly before being processed, and is then removed or archived. As is commonly known, SAP is built on an RDBMS and makes many calls to and from the database. In this SAP implementation, a custom-designed and custom-developed application enabled the recording of technical data and metadata, the calculation of new values based on previous information, and the production of reports with statistics for technical managers. The goal of this new application was to minimize the time spent recording information by allowing a large number of employees to access the system for management statistics, while limiting the volume of database calls. Unfortunately, the development team for this application did not fully evaluate the quantity of information managed, and it did not realize that the volume of data can grow very quickly in certain circumstances. After some weeks in production, end users began to complain about abnormal response times for specific transactions in the new custom application.
The ADM team analyzed the production log files and found abnormally long execution times, up to 10 hours, for some transactions connected to the application. Using an automated structural quality analyzer, the team identified the cause of the trouble. The performance defects were the consequence of misusing Open SQL statements relative to the volume of data being processed. The team found that in some Open SQL queries, the addition FOR ALL ENTRIES IN was used without any check on the content of the internal table. As a result, the queries would end up performing a full table scan, causing severe latency issues, especially on very large database tables. In a few other cases, a SELECT ... ENDSELECT statement had been used instead of a SELECT ... INTO TABLE used in conjunction with a LOOP AT statement. The SELECT ... ENDSELECT works as a loop that fetches a single record at a time, which caused a problem when the statement selected from large tables. The team's investigation of the database revealed that very large tables, with more than 1 million rows, were common in the application's calls. Figure 2 illustrates some of the performance risk-laden transactions.

Figure 2 - SELECT Statement Errors

After the development team fixed these issues and retested, the situation in production returned to normal, and response times decreased to less than 3 hours for the transactions with the largest volumes of data. Even with the best test environment, unit tests and integration tests are often not sufficient to prevent performance issues, since load test cases devised to simulate the production environment cannot address every possible scenario. Unfortunately, creating test cases with transaction and data volumes similar to production is expensive and often difficult to do in the short window between integration testing and production. One might therefore conclude that the solution is simply to perform structural quality analysis to detect potential issues. However, this technique is much more efficient when enriched with runtime information, such as execution times and the number of rows in database tables. Connecting the two allows ADM teams to focus on the most critical findings that need to be fixed immediately.

IV. The Requirements of a Solid Structural Quality Analysis Platform

To tackle performance issues directly from source code before they happen in production, it is necessary to analyze the application as a whole, across all of its layers, especially when they are written in different languages. System-level analysis is a requirement in most IT domains, as application layers are written in different programming languages. For example, a Java EE code-level analyzer will only analyze the Java code and is unable to analyze the

SQL code that includes the dynamic SQL, the SQL stored procedures in the database, and the table schema. Effective analysis of the application should also take into account the framework information stored in XML files. An effective structural quality analysis solution must also detect violations of performance best practices and performance vulnerabilities, so that the team is aware of the appropriate practices during development. Our experience has shown that to successfully implement the Performance Modeling Process described earlier, it is important to use a structural quality analytic approach that provides an end-to-end view of the application: one that includes a system-level view of the application's transactions across all the technology layers. Application owners, project managers, and ADM managers will gain valuable insight from the information generated by structural quality analysis, enabling them to address issues like the following:

- Manage and improve the structural quality of applications with an objective, data-driven approach
- Understand the risk impact of violations on specific modules and systems
- Prioritize the violations to remediate
- Perform root cause analysis of production outages
- Quantify the technical debt being accumulated in applications

These types of issues can only be addressed with a structural quality analysis platform that can relate performance vulnerabilities to known transactions and to the results of dynamic testing.
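The Open SQL pitfall from Case 2, an unchecked FOR ALL ENTRIES driving table, can be demonstrated outside of SAP as well. The toy script below uses Python's built-in sqlite3 module as a stand-in database (the table, columns, and row counts are invented for illustration) to show how an empty driving list silently degenerates into an unrestricted read of the whole table, and what the guard looks like:

```python
import sqlite3

def select_for_all_entries(conn, keys):
    """Mimics ABAP's FOR ALL ENTRIES semantics: when the driving table
    (keys) is empty, the restriction is dropped entirely and EVERY row
    is returned -- the full-table-scan defect described in Case 2."""
    if not keys:
        # Empty driving table: no WHERE clause at all.
        return conn.execute("SELECT id, amount FROM bookings").fetchall()
    placeholders = ",".join("?" * len(keys))
    return conn.execute(
        f"SELECT id, amount FROM bookings WHERE id IN ({placeholders})",
        keys,
    ).fetchall()

def select_guarded(conn, keys):
    """The fix: check the driving table before issuing the query."""
    if not keys:
        return []  # nothing requested, so nothing selected
    return select_for_all_entries(conn, keys)

# Build a small stand-in table with 1000 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO bookings VALUES (?, ?)",
                 [(i, 10.0 * i) for i in range(1, 1001)])

print(len(select_for_all_entries(conn, [])))  # 1000: the whole table
print(len(select_guarded(conn, [])))          # 0: pitfall avoided
```

On a 1,000-row toy table the difference is invisible; on the million-row tables from the case study, the unguarded form is exactly the kind of latent defect that dynamic tests with small fixtures never trigger but structural analysis of the source can flag.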

V. Conclusion

Enriching dynamic testing with structural quality analysis gives ADM teams insight into the performance behavior of applications by highlighting critical performance issues, especially when combined with runtime information. By adding structural quality analysis, ADM teams learn about violations of architectural and programming best practices earlier in the development lifecycle than with a purely dynamic testing approach. Structural quality analysis as part of the Performance Modeling Process provides fact-based insight into application complexity (e.g., multiple layers, the dynamics of their interactions, the complexity of SQL) and allows ADM managers to anticipate the evolution of the runtime context (e.g., growing volumes of data, higher numbers of transactions). The combined approach results in better detection of latent performance issues within application software. Resolving these issues early in the development cycle not only saves money but also helps prevent complete business disruptions.

About the Authors

Jerome Chiampi, Product Manager, CAST - Manages mainframe and SAP application intelligence products at CAST; researches software quality and best practices in legacy environments.

Frederic Kihm, Product Manager, CAST - Manages the Java EE software quality and application intelligence products at CAST; author of an innovative software risk ranking methodology and tool.

Laurent Windels, Product Manager, CAST - Manages the implementation and deployment of the CAST Application Intelligence Platform in the development cycle.

About CAST

Questions? Email us at contact@castsoftware.com

CAST is a pioneer and world leader in Software Analysis and Measurement, with unique technology resulting from more than $100 million in R&D investment. CAST introduces fact-based transparency into application development and sourcing to transform it into a management discipline.

Europe: 3 rue Marcel Allégot, 92190 Meudon, France. Phone: +33 1 46 90 21 00
North America: 373 Park Avenue South, New York, NY 10016. Phone: +1 212-871-8330