Performance Testing




Definition: Performance Testing

Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.

Performance testing can verify that a system meets the specifications claimed by its manufacturer or vendor. The process can compare two or more devices or programs in terms of parameters such as speed, data transfer rate, bandwidth, throughput, efficiency or reliability. Performance testing can also be used as a diagnostic aid in locating communications bottlenecks. Often a system will work much better if a problem is resolved at a single point or in a single component. For example, even the fastest computer will function poorly on today's Web if the connection runs at only 40 to 50 Kbps (kilobits per second).

A slow data transfer rate may be inherent in the hardware, but it can also result from software-related problems, such as:
- Too many applications running at the same time
- A corrupted file in a Web browser
- A security exploit
- Heavy-handed antivirus software
- Active malware on the hard disk

Effective performance testing can quickly identify the nature or location of a software-related performance problem.

Pre-requisites for Performance Testing

A performance test is not valid until the data in the system under test is realistic and the software and configuration are production-like. The following table lists the pre-requisites for valid performance testing, along with the tests that can still be conducted before each pre-requisite is satisfied:

Performance Test Pre-Requisites

Production Like Environment
Comment: Performance tests need to be executed on the same specification equipment as production if the results are to have integrity.
Caveat where not satisfied: Lightweight transactions that do not require significant processing can be tested, but only substantial deviations from expected transaction response times should be reported. Low-bandwidth performance testing of high-bandwidth transactions, where communications processing contributes most of the response time, can also be conducted.

Production Like Configuration
Comment: The configuration of each component needs to be production-like, for example the database configuration and the operating system configuration.
Caveat where not satisfied: While system configuration will have less impact on performance testing than on load testing, only substantial deviations from expected transaction response times should be reported.

Production Like Version
Comment: The version of software to be tested should closely resemble the version to be used in production.
Caveat where not satisfied: With a version substantially different from the proposed production version, only major performance problems, such as missing indexes and excessive communications, should be reported.

Production Like Access
Comment: If clients will access the system over a WAN, dial-up modems, DSL, ISDN, etc., then testing should be conducted using each communication access method. See Network Sensitivity Tests for more information on testing WAN access.
Caveat where not satisfied: Only tests using production-like access are valid.

Production Like Data
Comment: All relevant tables in the database need to be populated with a production-like quantity and a realistic mix of data. For example, having one million customers, 999,997 of which are named "John Smith", would produce very unrealistic responses to customer search transactions.
Caveat where not satisfied: Low-bandwidth performance testing of high-bandwidth transactions, where communications processing contributes most of the response time, can still be conducted.
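To illustrate the production-like data pre-requisite, here is a minimal sketch (an illustrative example, not part of the original guide) that populates a customer table with a realistic mix of names rather than a single repeated value, so that customer-search transactions behave more as they would in production. The table name, column names, the name pools and the use of SQLite are all assumptions made for the sake of a runnable example.

```python
import random
import sqlite3

# Hypothetical pools of names; in practice the mix would be drawn from
# production-like distributions (e.g. anonymised production data).
FIRST_NAMES = ["John", "Mary", "Wei", "Priya", "Carlos", "Fatima", "Olga", "Kenji"]
LAST_NAMES = ["Smith", "Garcia", "Chen", "Patel", "Ivanova", "Okafor", "Mueller", "Sato"]

def populate_customers(db_path: str, count: int) -> None:
    """Fill a customer table with a varied mix of names for performance testing."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
    rows = (
        (i, f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}")
        for i in range(count)
    )
    conn.executemany("INSERT INTO customers (id, name) VALUES (?, ?)", rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    # One million rows with a realistic name mix, instead of 999,997 "John Smith"s.
    populate_customers("perf_test.db", 1_000_000)
```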

Types of Performance Testing:
1. Benchmark Test
2. Load Test
3. Stress Test
4. Volume Test
5. Endurance Test
6. Spike Test
7. Scalability Test
8. Failover Test

Benchmark Test: Tests an application's performance against the performance targets at the early stages of the performance life cycle.
Load Test: Tests an application's performance with incremental changes to the workload.
Stress Test: Tests an application's performance by overloading the server (though not beyond its limit).
Volume Test: Tests the software with heavy volumes of data; it is done to find memory leaks and buffer overflows.
Endurance Test: Tests an application's behaviour over a prolonged duration; this type of testing is especially useful for finding memory leaks in an application.
Spike Test: Tests an application's performance under "dynamic" workload changes. In LoadRunner, use the "Run/Stop Vuser" option.
Scalability Test: Tests an application's performance by gradually increasing the user load on the server until the system breaks.
Failover Test: If a server fails during execution and can no longer handle user requests, the failover test checks how those requests are reallocated to the next available server. This is applicable only where load balancing is in place.
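To make the load and scalability test ideas above concrete, the following sketch (an illustrative example, not from the original text) ramps up the number of concurrent virtual users in steps and records the average response time at each step. The target URL, user counts, request counts and the use of the third-party requests package are placeholder assumptions.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client, assumed installed

TARGET_URL = "http://localhost:8080/"  # placeholder system under test

def one_request() -> float:
    """Issue a single GET and return its response time in seconds."""
    start = time.perf_counter()
    requests.get(TARGET_URL, timeout=30)
    return time.perf_counter() - start

def run_step(users: int, requests_per_user: int = 10) -> float:
    """Simulate `users` concurrent virtual users and return the mean response time."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(lambda _: one_request(), range(users * requests_per_user)))
    return statistics.mean(timings)

if __name__ == "__main__":
    # Incremental workload: 10, 20, 40, 80 concurrent users (a load/scalability ramp).
    for users in (10, 20, 40, 80):
        print(f"{users:>3} users -> mean response time {run_step(users):.3f} s")
```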

Terminologies

Performance Testing Terminology: The following definitions are used throughout this guide. Every effort has been made to ensure that these terms and definitions are consistent with formal use and industry standards; however, some of these terms are known to have certain valid alternate definitions and implications in specific industries and organizations. Keep in mind that these definitions are intended to aid communication and are not an attempt to create a universal standard.

Capacity: The capacity of a system is the total workload it can handle without violating predetermined key performance acceptance criteria.

Capacity test: A capacity test complements load testing by determining your server's ultimate failure point, whereas load testing monitors results at various levels of load and traffic patterns. You perform capacity testing in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads, you need to know how many additional resources (such as processor capacity, memory usage, disk capacity, or network bandwidth) are necessary to support future usage levels. Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out.

Component test: A component test is any performance test that targets an architectural component of the application. Commonly tested components include servers, databases, networks, firewalls, and storage devices.

Endurance test: An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time. Endurance testing is a subset of load testing.

Investigation: Investigation is an activity based on collecting information related to the speed, scalability, and/or stability characteristics of the product under test that may have value in determining or improving product quality. Investigation is frequently employed to prove or disprove hypotheses regarding the root cause of one or more observed performance issues.

Latency: Latency is a measure of responsiveness that represents the time it takes to complete the execution of a request. Latency may also represent the sum of several latencies or subtasks.

Metrics: Metrics are measurements obtained by running performance tests, expressed on a commonly understood scale. Some metrics commonly obtained through performance tests include processor utilization over time and memory usage by load.

Performance: Performance refers to information regarding your application's response times, throughput, and resource utilization levels.

Performance test: A performance test is a technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance testing is the superset containing all other subcategories of performance testing described in this chapter.

Performance budgets or allocations: Performance budgets (or allocations) are constraints placed on developers regarding allowable resource consumption for their component.

Performance goals: Performance goals are the criteria that your team wants to meet before product release, although these criteria may be negotiable under certain circumstances. For example, if a response time goal of three seconds is set for a particular transaction but the actual response time is 3.3 seconds, it is likely that the stakeholders will choose to release the application and defer performance tuning of that transaction to a future release.

Performance objectives: Performance objectives are usually specified in terms of response times, throughput (transactions per second), and resource-utilization levels, and typically focus on metrics that can be directly related to user satisfaction.

Performance requirements: Performance requirements are those criteria that are absolutely non-negotiable due to contractual obligations, service level agreements (SLAs), or fixed business needs. Any performance criterion that would not unquestionably lead to a decision to delay a release until it passes is not absolutely required, and therefore not a requirement.

Performance targets: Performance targets are the desired values for the metrics identified for your project under a particular set of conditions, usually specified in terms of response time, throughput, and resource-utilization levels. Resource-utilization levels include the amount of processor capacity, memory, disk I/O, and network I/O that your application consumes. Performance targets typically equate to project goals.

Performance testing objectives: Performance testing objectives refer to data collected through the performance-testing process that is anticipated to have value in determining or improving product quality. However, these objectives are not necessarily quantitative or directly related to a performance requirement, goal, or stated quality of service (QoS) specification.

Performance thresholds: Performance thresholds are the maximum acceptable values for the metrics identified for your project, usually specified in terms of response time, throughput (transactions per second), and resource-utilization levels. Resource-utilization levels include the amount of processor capacity, memory, disk I/O, and network I/O that your application consumes. Performance thresholds typically equate to requirements.

Concurrency / Throughput: If an application identifies end-users by some form of login procedure, then a concurrency goal is highly desirable. By definition this is the largest number of concurrent application users that the application is expected to support at any given moment. The workflow of your scripted transaction may affect true application concurrency, especially if the iterative part contains the login and logout activity. If your application has no concept of end-users, then your performance goal is likely to be based on a maximum throughput or transaction rate. A common example would be casual browsing of a web site such as Wikipedia.

Server response time: This refers to the time taken for one application node to respond to the request of another. A simple example would be an HTTP GET request from a browser client to a web server. In terms of response time, this is what all load testing tools actually measure. It may be relevant to set server response time goals between all nodes of the application landscape.

Render response time: Render response time is difficult for load testing tools to deal with, as they generally have no concept of what happens within a node apart from recognising a period of time where there is no activity 'on the wire'. To measure render response time, it is generally necessary to include functional test scripts as part of the performance test scenario, which is a feature not offered by many load testing tools.
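The paragraph above notes that the server response time of something like an HTTP GET request is what load testing tools actually measure. The sketch below (an assumed, simplified illustration using only Python's standard library) times a single GET, separating the time to first byte from the total download time; the host and path are placeholders.

```python
import time
from http.client import HTTPConnection

def measure_get(host: str, path: str = "/") -> tuple[float, float]:
    """Return (time_to_first_byte, total_time) in seconds for one HTTP GET."""
    conn = HTTPConnection(host, timeout=30)
    start = time.perf_counter()
    conn.request("GET", path)
    response = conn.getresponse()      # headers received: the first byte has arrived
    ttfb = time.perf_counter() - start
    response.read()                    # drain the body to measure the total time
    total = time.perf_counter() - start
    conn.close()
    return ttfb, total

if __name__ == "__main__":
    ttfb, total = measure_get("example.com")
    print(f"time to first byte: {ttfb * 1000:.1f} ms, total: {total * 1000:.1f} ms")
```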

Resource utilization: Resource utilization is the cost of the project in terms of system resources. The primary resources are processor, memory, disk I/O, and network I/O.

Response time: Response time is a measure of how responsive an application or subsystem is to a client request.

Saturation: Saturation refers to the point at which a resource has reached full utilization.

Scalability: Scalability refers to an application's ability to handle additional workload, without adversely affecting performance, by adding resources such as processor, memory, and storage capacity.

Scenarios: In the context of performance testing, a scenario is a sequence of steps in your application. A scenario can represent a use case or a business function such as searching a product catalog, adding an item to a shopping cart, or placing an order.

Smoke test: A smoke test is the initial run of a performance test to see whether your application can perform its operations under a normal load.

Spike test: A spike test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time. Spike testing is a subset of stress testing.

Stability: In the context of performance testing, stability refers to the overall reliability, robustness, functional and data integrity, availability, and/or consistency of responsiveness for your system under a variety of conditions.

Stress test: A stress test is a type of performance test designed to evaluate an application's behavior when it is pushed beyond normal or peak load conditions. The goal of stress testing is to reveal application bugs that surface only under high load conditions. These bugs can include such things as synchronization issues, race conditions, and memory leaks. Stress testing enables you to identify your application's weak points, and shows how the application behaves under extreme load conditions.
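As a companion to the resource utilization and saturation definitions above, the following sketch (an assumed example using the third-party psutil package, which would need to be installed) samples processor and memory utilization at a fixed interval while a test is running; this is the kind of resource monitoring the methodology later calls for when instrumenting the test environment. The output file name and sampling parameters are placeholders.

```python
import csv
import time

import psutil  # third-party package, assumed installed (pip install psutil)

def sample_utilization(duration_s: int, interval_s: float, out_path: str) -> None:
    """Record CPU and memory utilization every `interval_s` seconds for `duration_s` seconds."""
    end = time.time() + duration_s
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
        while time.time() < end:
            # cpu_percent blocks for `interval_s`, giving utilization over that window
            cpu = psutil.cpu_percent(interval=interval_s)
            mem = psutil.virtual_memory().percent
            writer.writerow([time.time(), cpu, mem])

if __name__ == "__main__":
    # Monitor this host for five minutes at one-second resolution.
    sample_utilization(duration_s=300, interval_s=1.0, out_path="utilization.csv")
```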

Throughput: Throughput is the number of units of work that can be handled per unit of time; for instance, requests per second, calls per day, hits per second, or reports per year.

Unit test: In the context of performance testing, a unit test is any test that targets a module of code, where that module is any logical subset of the entire existing code base of the application, with a focus on performance characteristics. Commonly tested modules include functions, procedures, routines, objects, methods, and classes. Performance unit tests are frequently created and conducted by the developer who wrote the module of code being tested.

Utilization: In the context of performance testing, utilization is the percentage of time that a resource is busy servicing user requests. The remaining percentage of time is considered idle time.

Validation test: A validation test compares the speed, scalability, and/or stability characteristics of the product under test against the expectations that have been set or presumed for that product.

Workload: Workload is the stimulus applied to a system, application, or component to simulate a usage pattern, with regard to concurrency and/or data inputs. The workload includes the total number of users, concurrent active users, data volumes, and transaction volumes, along with the transaction mix. For performance modeling, you associate a workload with an individual scenario.
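To show how the throughput and utilization definitions translate into arithmetic on measured data, here is a small sketch (an assumed example, not from the guide) that derives requests-per-second throughput and busy-time utilization from figures collected over a measurement window; the numbers in the example are hypothetical.

```python
def throughput_rps(request_start_times: list[float], window_s: float) -> float:
    """Throughput = units of work handled per unit of time (here: requests per second)."""
    return len(request_start_times) / window_s

def utilization_percent(busy_intervals_s: list[float], window_s: float) -> float:
    """Utilization = percentage of the window during which the resource was busy."""
    return 100.0 * min(sum(busy_intervals_s), window_s) / window_s

if __name__ == "__main__":
    # Example: 1,200 requests observed over a 60-second window,
    # and a worker that was busy for 45 of those 60 seconds.
    starts = [i * 0.05 for i in range(1200)]
    print(f"throughput: {throughput_rps(starts, 60.0):.1f} requests/s")   # 20.0 requests/s
    print(f"utilization: {utilization_percent([45.0], 60.0):.1f} %")      # 75.0 %
```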

Performance Testing Web Applications Methodology

According to the Microsoft Developer Network, the performance testing methodology consists of the following activities:

Activity 1 - Identify the Test Environment: Identify the physical test environment and the production environment, as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning, and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project's life cycle.

Activity 2 - Identify Performance Acceptance Criteria: Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which combination of configuration settings will result in the most desirable performance characteristics.

Activity 3 - Plan and Design Tests: Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish the metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.

Activity 4 - Configure the Test Environment: Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.

Activity 5 - Implement the Test Design: Develop the performance tests in accordance with the test design.

Activity 6 - Execute the Test: Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.

Activity 7 - Analyze Results, Tune, and Retest: Analyze, consolidate, and share the results data. Make a tuning change and retest to see whether it produces an improvement or a degradation. Each improvement made will typically return a smaller gain than the previous one. When do you stop? When you reach a CPU bottleneck, the choices are then either to improve the code or to add more CPU.
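As a concrete illustration of Activity 7, the sketch below (an assumed example, not part of the methodology text) consolidates response-time samples from a test run into the average, 90th and 95th percentiles and the maximum, which can then be compared against the acceptance criteria from Activity 2 before and after each tuning change. The sample values are hypothetical.

```python
import statistics

def summarize_response_times(samples_s: list[float]) -> dict[str, float]:
    """Consolidate raw response-time samples (seconds) into common summary metrics."""
    ordered = sorted(samples_s)
    cuts = statistics.quantiles(ordered, n=100)  # 99 cut points: index 89 -> 90th percentile
    return {
        "count": float(len(ordered)),
        "mean_s": statistics.mean(ordered),
        "p90_s": cuts[89],
        "p95_s": cuts[94],
        "max_s": ordered[-1],
    }

if __name__ == "__main__":
    # Compare a baseline run with a run after a tuning change (hypothetical numbers).
    baseline = [0.8, 1.1, 0.9, 2.5, 1.0, 3.3, 1.2, 0.7, 1.4, 2.9]
    tuned = [0.6, 0.9, 0.8, 1.8, 0.9, 2.4, 1.0, 0.6, 1.1, 2.1]
    print("baseline:", summarize_response_times(baseline))
    print("tuned:   ", summarize_response_times(tuned))
```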

Myths of Performance Testing

Some of the most common myths are given below.

1. Performance testing is done to break the system.
Stress testing is done to understand the break point of the system. Normal load testing, by contrast, is generally done to understand the behavior of the application under the expected user load. Depending on other requirements, such as the expectation of a spike load or a continued load over an extended period of time, spike, endurance (soak) or stress testing would be demanded.

2. Performance testing should only be done after system integration testing.
Although this is mostly the norm in the industry, performance testing can also be done while the initial development of the application is taking place. This approach is known as early performance testing. It ensures a holistic development of the application with the performance parameters in mind, so the risk of finding a performance bug just before the release of the application, and the cost involved in rectifying that bug, are reduced to a great extent.

3. Performance testing only involves the creation of scripts, and any application change causes only simple refactoring of the scripts.
Performance testing is itself an evolving discipline in the software industry. Scripting, although important, is only one component of performance testing. The major challenge for any performance tester is to determine the types of tests that need to be executed and to analyze the various performance counters to determine the performance bottleneck. The other part of the myth, that a change in the application results only in a little refactoring of the scripts, is also untrue: any change to the UI, especially in the Web protocol, can entail complete re-development of the scripts from scratch. This problem becomes bigger if the protocols involved include Web Services, Siebel, Citrix, or SAP.