TRANSFORMING AUTOMATED SOFTWARE FUNCTIONAL TESTS FOR PERFORMANCE AND LOAD TESTING
Peter Sabev 1, Katalina Grigorova 2
1 PhD Student, Angel Kanchev University of Ruse, Ruse, Bulgaria
2 Associate Professor, Angel Kanchev University of Ruse, Ruse, Bulgaria

ABSTRACT

Functional testing is a quality assurance process that can be automated. Functions are tested by feeding them input and examining the output, expecting concrete results. However, performance measurements and execution times are rarely considered when executing functional tests. Adding simple timestamps to every functional test execution, when such a test is started or stopped, could bring valuable benchmarking data for analysis, save a significant amount of time otherwise spent executing performance tests separately from functional tests, and increase defect removal efficiency.

Keywords: Software Engineering, Software Testing, Testing Model, Automated Testing, Functional Testing, Performance Testing, Regression Testing, Benchmarking

INTRODUCTION

In a typical programming project, approximately 50 percent of the elapsed time and more than 50 percent of the total cost are spent testing the developed program or system [3]. Software testing has been the main form of defect removal since software began more than 60 years ago. There are at least 20 different forms of testing, and typically between 3 and 12 forms of testing will be used on almost every software application [1]. Functional (or black-box) testing has become one of the most popular testing methods because it can be easily designed, executed and implemented for automatic regression tests. This method verifies correct handling of the functions provided or supported by the software, or whether the observed behavior conforms to user expectations or product specifications, by only provisioning simple input data and comparing the returned data (i.e. the actual results) to the expected results.
However, using only functional testing seems to be only about 35 percent efficient, i.e., it finds only about one bug out of three [1]. That is why alternatives that combine higher efficiency levels with lower costs are worth considering. Benchmarking the software using performance testing and analysis is one such important activity. It provides key metrics such as response times or throughput rates under certain workload and configuration conditions.

CONTEMPORARY STATE OF THE PROBLEM

Since performance analysis is not always part of software engineering or computer science education, many software engineers are not qualified to deal with optimizing performance [1]. Although these measurements are important, they are rarely performed during testing in many companies and projects. Even the most mature test processes divide functional and performance testing into
separate activities and separate subprojects done by completely different specialists. One of the reasons behind the strong separation of performance and functional test results is that the widely adopted IEEE Standard for Software and System Test Documentation sets item pass/fail criteria [4], which are often mistaken by the majority of IT specialists for functional pass/fail criteria only. Although passing a test means that the software has met both its functional and non-functional requirements (such as expected response or processing time), software testers rarely pay attention to performance measurements during test execution; passing or failing used to be enough over the years, and additional performance tests were done if needed. All widely known modern test result formats provide ways to store test execution time (xUnit has <testcase time="x">, TestNG has <testmethod started-at="yyyy-MM-ddTHH:mm:ssZ" finished-at="yyyy-MM-ddTHH:mm:ssZ">, the JSON Test Results format has a float variable called time, any log file has timestamps, and any custom XML/HTML file can have relevant time-related tags). However, the majority of test execution management systems and bug tracking systems consider test output as a pass/fail flag only, so even if recorded, functional test time measurements usually remain neglected. Nowadays, even if testers want to log performance measurements, they will eventually need significant modification of the default software configuration.
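The execution times stored in these formats are easy to recover even without tool support; as a minimal sketch, assuming a JUnit/xUnit-style XML report, the per-test times can be extracted with a few lines of standard-library code:

```python
import xml.etree.ElementTree as ET

def collect_test_times(report_xml):
    """Return {test name: execution time in seconds} from a JUnit/xUnit report."""
    root = ET.fromstring(report_xml)
    times = {}
    # <testcase> elements may sit under <testsuite> or <testsuites>
    for case in root.iter("testcase"):
        name = f'{case.get("classname", "")}.{case.get("name", "")}'
        times[name] = float(case.get("time", "0"))
    return times

# Illustrative report; test names and times are hypothetical
report = """<testsuite tests="2">
  <testcase classname="LoginTests" name="test_login" time="3.742"/>
  <testcase classname="LoginTests" name="test_logout" time="0.518"/>
</testsuite>"""

print(collect_test_times(report))
```

Collected this way, the timestamps already present in most reports become benchmarking data without any change to the tests themselves.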
Research based on [6], [7] and [8] was carried out for 48 of the most popular software test management tools (Aqua, Assembla, Bstriker, Bugzilla, codebeamer, Enterprise Tester, Gemini, HP Quality Center, IBM Rational Quality Manager, informup Test Case Management, JIRA, Klaros-Testmanagement, Mantis, Meliora Testlab, Occygen, Overlook, PractiTest, QAComplete, QABook, qamanager, QMetry, qtest, RADI, Rainforest QA, RTH-Turbo, Silk Central Test Manager, Sitechco, Tarantula, TCW, tematoo, Test Collab, TestComplete, TestCube, Testersuite, Testitool, TestLink, TestLodge, Testmaster, Testopia, TestPad, TestRail, TestTrack, Testuff, TOSCA Testsuite, WebTST, XORICON TestLab, XQual, XStudio and Zephyr). As a result, all of the tools provide test pass/fail reports, but none of them has integrated ready-to-use automated reports for functional tests with build-to-build comparison: this is either not supported at all, or requires additional plugins, customization or a separate setup for a dedicated performance testing tool or feature. However, many functional tests are automated in everyday practice. Automation works by repeatedly executing these tests, which avoids human mistakes during execution and ensures faster retrieval of results. Such automated functional tests could be slightly modified to collect performance data by using several checkpoints and collecting timestamps before and after each test case, or even before each test step execution. According to Table 5-6 (Defect Removal Efficiency by Defect Type) in [1], adding performance measurements to automated functional tests could improve the defect removal efficiency by 5% for requirement-related defects, 10% for design-related defects and 70% for performance-related defects.

SEQUENCE DIAGRAM OF A TYPICAL AUTOMATED FUNCTIONAL TEST

Figure 1. A typical functional test
A typical automated functional test is performed by using a test controller (TC), an external application or module that is able to independently configure, execute and terminate the software under test (SUT), either directly or by using one or more test agents. Figure 1 shows a typical time sequence diagram for a simple automated functional test.

ADDING CHECKPOINTS WITH TIMESTAMPS TO THE FUNCTIONAL TEST

The first step of the suggested approach is to modify the test controller by adding two checkpoints, one at the beginning (immediately before the test execution has started) and one at the end (immediately after the test execution has ended), as shown in Figure 2. Thus, one could easily calculate the execution time as the time difference between the two checkpoints. Note that the modification is done only in the TC; the SUT remains unchanged.

Figure 2. A measurement before and after functional test execution

In this case it is possible for any combination of several different tests, or the same tests, to be run on different systems, subsystems, features, modules, units, versions or environments; it is only required to follow the practice and bracket each test case run with time measurements, as shown in Figure 3.

Figure 3. Executing several functional tests with time measurements

Note that two metrics for total execution time can be obtained, depending on what measures are needed:
1. Total elapsed time during test execution: (B.end - A.start)
2. Total test execution time: (A.end - A.start) + (B.end - B.start) + ...

However, the nature of performance measurements implies a significant limitation: to consider functional test performance results as valid, the functional test should have passed. In a case where performance drops to zero because a high-severity bug is encountered and stops the
test from running properly, test results should be ignored [1].

Figure 4. Final state after modifying tests for performance measurements

FINE TUNING THE PERFORMANCE MEASUREMENTS

Until now, functional tests served as baseline tests for one single transaction in isolation, as a single user. The proposed methodology allows ramping up to a desired target maximum concurrency or throughput for the transaction by executing the same tests several times. This could be done either synchronously or asynchronously, using one or multiple test agents covering one or multiple test scenarios; the possible combinations are limitless and the only requirement is to repeat the tests and their time measurements a certain number of times. The more measurement points and iterations are performed, the more time is needed for execution; however, a better precision of the performance measurements is obtained [5]. If tests fail or any other problems are encountered at this stage, one only needs to run isolation tests to identify and deal with what is wrong. This would be followed by a load test combining all transactions up to target concurrency, and then by further isolation tests if problems are discovered [2]. Although modifying the SUT is not required to apply the benchmarking method proposed here, there are a number of performance tools and measurement devices, such as profilers, that collect data on the fly. It is also possible to embed performance measurement capabilities into software applications themselves, which is called instrumentation. Since instrumentation and other forms of performance analysis may slow down application speed, one must take care to ensure that the data is correct [1]. When executing the modified tests, it is important to remove (or at least limit) any external factors that could affect the performance measurement results.
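The checkpoint bracketing in the test controller and the two total-time metrics described above can be sketched in a few lines; a minimal illustration, in which the test functions are hypothetical stand-ins for real functional tests:

```python
import time

def run_with_checkpoints(test_func, checkpoints):
    """Bracket one functional test with start/end timestamps (the two checkpoints)."""
    start = time.perf_counter()
    passed = test_func()
    end = time.perf_counter()
    checkpoints.append((start, end))
    return passed

def total_elapsed(checkpoints):
    """Metric 1: wall-clock time from the first test's start to the last test's end."""
    return checkpoints[-1][1] - checkpoints[0][0]

def total_execution(checkpoints):
    """Metric 2: sum of individual test durations, excluding gaps between tests."""
    return sum(end - start for start, end in checkpoints)

def test_a():               # hypothetical functional tests
    time.sleep(0.01)
    return True

def test_b():
    time.sleep(0.02)
    return True

checkpoints = []
for test in (test_a, test_b):
    run_with_checkpoints(test, checkpoints)

# Elapsed time includes the gaps between tests, so it is never smaller
print(total_elapsed(checkpoints) >= total_execution(checkpoints))  # True
```

Note that only the controller code changes here; the tests themselves, like the SUT, are untouched.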
Typically, these include inconsistent network bandwidth usage, current user load, noise or signal disturbances, and different database states (all of these are especially valid when testing remote or third-party systems);
resource allocation such as CPU, disk or memory used by other processes or users (all of these are especially valid when using virtual machines); a TC that consumes a lot of time or resources compared to a single test execution (especially valid for simple, fast tests); caching; etc. Normal user flow should also be considered, and user response time must be added to functional tests, so that baseline tests to establish ideal response-time performance can be performed [2]. In common practice it is required to benchmark the pure response time for a given SUT feature, module, unit or process. That is why additional empty tests could be run at the beginning and at the end of the test execution, as well as between every other test measurement. These tests could allow measuring the average time for specific test maintenance operations such as pausing the system, resetting the environment, setting a specific configuration, reverting a snapshot, restoring a database or deleting a cache. Even for the simplest test, it is a good idea to bracket it with an empty test with null data. For example, if a web application is tested with an online JSON query, variations of such empty tests could be the times for: a localhost ping, sending/receiving a request on localhost, sending/receiving an empty request on localhost, sending a request a second time in order to obtain the response via a proxy or server cache, etc. Another important point is to minimise any activity that could impact the performance and that could be avoided at that moment (e.g. writing logs, time calculations or anything else that could be postponed). Thus, it is advisable for all calculations performed in the middle of the test execution (Figure 3) to be postponed until after the benchmarking finishes (as shown in Figure 4). Considering all the tests have passed and the time measurements are correct, the latter could be amended accordingly with the empty test results.
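Amending the measurements with the empty-test results can be as simple as subtracting the average empty-test time from each measured time; a small sketch, in which all the timing values are illustrative:

```python
def amend_with_baseline(measured_times, empty_times):
    """Subtract the average empty-test time to estimate pure processing time."""
    baseline = sum(empty_times) / len(empty_times)
    return [t - baseline for t in measured_times]

empty_times = [0.48, 0.52, 0.50]       # empty JSON queries, seconds
measured_times = [3.7, 3.8, 3.6]       # full JSON queries, seconds
pure = amend_with_baseline(measured_times, empty_times)
print([round(t, 2) for t in pure])     # [3.2, 3.3, 3.1]
```

Averaging several empty runs, rather than using a single one, smooths out the random variation in the maintenance overhead being subtracted.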
In the example above, if the average time for the server response to an empty JSON query is 0.5 seconds and a typical baseline JSON query takes 3.7 seconds, obviously the pure JSON query processing time is 3.7 - 0.5 = 3.2 seconds. The exact calculations and amendment of actual test results are highly dependent on the testing context and should be reviewed individually according to the project and test case specifics. Figure 4 shows the final sequence diagram with all the fine-tuning proposals implemented. Calculating minimum, maximum and average times allows an even deeper look into stability and time execution margins and, finally, early diagnosis of potential software defects. Once the performance test data measurements are collected and analyzed, they can be used as a valuable baseline against future releases, for comparison between two software implementations, etc.

CONCLUSIONS AND FUTURE WORK

This article presents a novel approach to functional test design that enables collecting important performance and benchmarking data by bracketing each functional test execution with timestamps. This work differs from existing approaches in that it allows partial performance and load testing to be done while executing functional tests, with minor additional effort. There are several benefits to this method: it allows early detection of performance defects, and potentially increases defect removal efficiency. The theoretical ideas presented in this article have been successfully applied in the real world in four different software companies: Experian Decision Analytics (in 2010, during the testing of their application fraud prevention software called Hunter), Digital Property Group (in 2012, during the testing of the company's real estate websites), ProSyst Labs (in 2014, during the testing of J2ME hardware devices for a Deutsche Telekom smart home project called QIVICON) and Certivox Ltd. UK (in 2015, during the testing of the M-Pin zero-password authentication software product). Continuous performance
benchmarking data obtained from the functional tests allowed build-to-build comparisons, which assisted the early identification of several defects with critical severity. One of the recent examples of the effectiveness of the proposed approach in practice is illustrated in Figure 5. An automated functional test case consisting of a simple user login and logout is triggered and executed every night at 1:00 h. Test execution usually took between 3.6 and 5.0 seconds. A recent code change on 09-May-2015 introduced a performance issue that caused the total test execution time to rise above 6 seconds, which was confirmed during the executions over the following weekend. On Monday, 11-May-2015, the code change was reverted and further performance investigations were made during the next days. As a result, performance improvements were made and the execution time dropped under 1.7 seconds. However, there are some practical and theoretical issues that need to be addressed regarding this approach. On the practical side, the proposed method for obtaining benchmarking results by modifying functional software tests cannot fully substitute for regular performance, load or stress testing activities. Even in the case when such substitution is theoretically possible, it would be impractical due to the complexity of implementing and maintaining different multi-agent functional test execution scenarios. Another issue is the time correction using the empty tests described above, which needs to be further researched and improved. Much remains to be done in this regard, especially when precise performance data is required and many test iterations are not a practical workaround. However, if the final goal is just obtaining some performance data that could serve as an initial indicator of SUT performance, then this is a relatively easy approach with potential for a very good return on investment.
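A build-to-build comparison like the one in Figure 5 can be automated with a simple threshold check against the historical average; a minimal sketch, in which the baseline window and tolerance factor are illustrative assumptions rather than values from this study:

```python
def detect_regression(history, latest, tolerance=1.25):
    """Flag a run whose time exceeds the historical average by `tolerance` times."""
    baseline = sum(history) / len(history)
    return latest > baseline * tolerance

# Nightly login/logout execution times in seconds (illustrative)
history = [3.6, 4.1, 5.0, 4.4, 3.9]     # average 4.2 s, threshold 5.25 s
print(detect_regression(history, 6.2))  # above threshold: flagged
print(detect_regression(history, 4.8))  # within normal range: not flagged
```

In practice the tolerance would be tuned per test, since tests with naturally noisy execution times need a wider margin to avoid false alarms.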
This work hopes to be a first step toward further development and improvement of the proposed test method.

Figure 5. Execution time measurements for a simple functional test

ACKNOWLEDGEMENTS

This work is supported by the National Scientific Research Fund under contract ДФНИ - И02/13.

REFERENCES
[1] Jones, C., Software Engineering Best Practices, 3rd ed., New York, NY, USA: The McGraw-Hill Companies.
[2] Molyneaux, I., The Art of Application Performance Testing: Help for Programmers and Quality Assurance, Sebastopol, CA, USA: O'Reilly Media, Inc.
[3] Myers, G. J., The Art of Software Testing, 2nd ed., Hoboken, NJ, USA: John Wiley & Sons, Inc.
[4] Software & Systems Engineering Standards Committee, IEEE Standard for Software and System Test Documentation, Fredericksburg, VA, USA: IEEE Computer Society.
[5] Žilinskas, A., D. Kučinskas, Implementation and Testing of an Algorithm for Global Optimizations, CompSysTech '04: Proceedings of the 5th International Conference on Computer Systems and Technologies, New York, NY, USA: ACM Inc.
[6] 15 Best Test Management Tools. Available online.
[7] Ghahrai, A., Best Open Source Test Management Tools. Available online.
[8] Wikipedia, Test Management Tools. Available online.
Performance Testing of Java Enterprise Systems Katerina Antonova, Plamen Koychev Musala Soft Why Performance Testing? Recent studies by leading USA consultancy companies showed that over 80% of large corporations
LotServer Deployment Manual
LotServer Deployment Manual Maximizing Network Performance, Responsiveness and Availability DEPLOYMENT 1. Introduction LotServer is a ZetaTCP powered software product that can be installed on origin web/application
Performance Modeling for Web based J2EE and.net Applications
Performance Modeling for Web based J2EE and.net Applications Shankar Kambhampaty, and Venkata Srinivas Modali Abstract When architecting an application, key nonfunctional requirements such as performance,
Copyright www.agileload.com 1
Copyright www.agileload.com 1 INTRODUCTION Performance testing is a complex activity where dozens of factors contribute to its success and effective usage of all those factors is necessary to get the accurate
Application. Performance Testing
Application Performance Testing www.mohandespishegan.com شرکت مهندش پیشگان آزمون افسار یاش Performance Testing March 2015 1 TOC Software performance engineering Performance testing terminology Performance
ACME Intranet Performance Testing
ACME Intranet Performance Testing Purpose of this document Prior to the official launch of the ACME Intranet, a set of performance tests were carried out to ensure that the platform could cope with estimated
Backup and Recovery. What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases
Backup and Recovery What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases CONTENTS Introduction 3 Terminology and concepts 3 Database files that make up a database 3 Client-side
Go2Group JaM Plugin. Atlassian JIRA add-on for HP Quality Center. Quick Install Guide
Go2Group JaM Plugin Atlassian JIRA add-on for HP Quality Center Quick Install Guide Version 5.5 April 2009 Table of Contents Go2Group JaM Plugin Requirements... 3 What s Needed... 3 Go2Group JaM Plugin
InfiniteGraph: The Distributed Graph Database
A Performance and Distributed Performance Benchmark of InfiniteGraph and a Leading Open Source Graph Database Using Synthetic Data Objectivity, Inc. 640 West California Ave. Suite 240 Sunnyvale, CA 94086
Data Warehouse and Business Intelligence Testing: Challenges, Best Practices & the Solution
Warehouse and Business Intelligence : Challenges, Best Practices & the Solution Prepared by datagaps http://www.datagaps.com http://www.youtube.com/datagaps http://www.twitter.com/datagaps Contact [email protected]
How To Test On The Dsms Application
Performance Test Summary Report Skills Development Management System December 2014 Performance Test report submitted to National Skill Development Corporation Version Date Name Summary of Changes 1.0 22/12/2014
IBM DB2 9.7. Backup and Recovery Hands-On Lab. Information Management Cloud Computing Center of Competence. IBM Canada Lab
IBM DB2 9.7 Backup and Recovery Hands-On Lab I Information Management Cloud Computing Center of Competence IBM Canada Lab 1 Contents CONTENTS...1 1. INTRODUCTION...3 2. BASIC SETUP...3 2.1 Environment
AUTOMATED TESTING and SPI. Brian Lynch
AUTOMATED TESTING and SPI Brian Lynch 1 Introduction The following document explains the purpose and benefits for having an Automation Test Team, the strategy/approach taken by an Automation Test Team
Performance Testing. What is performance testing? Why is performance testing necessary? Performance Testing Methodology EPM Performance Testing
Performance Testing What is performance testing? Why is performance testing necessary? Performance Testing Methodology EPM Performance Testing What is Performance Testing l The primary goal of Performance
Proactive Performance Management for Enterprise Databases
Proactive Performance Management for Enterprise Databases Abstract DBAs today need to do more than react to performance issues; they must be proactive in their database management activities. Proactive
SOA Solutions & Middleware Testing: White Paper
SOA Solutions & Middleware Testing: White Paper Version 1.1 (December 06, 2013) Table of Contents Introduction... 03 Solutions Testing (Beta Testing)... 03 1. Solutions Testing Methods... 03 1.1 End-to-End
Table of Contents. 2015 Cicero, Inc. All rights protected and reserved.
Desktop Analytics Table of Contents Contact Center and Back Office Activity Intelligence... 3 Cicero Discovery Sensors... 3 Business Data Sensor... 5 Business Process Sensor... 5 System Sensor... 6 Session
MONITORING A WEBCENTER CONTENT DEPLOYMENT WITH ENTERPRISE MANAGER
MONITORING A WEBCENTER CONTENT DEPLOYMENT WITH ENTERPRISE MANAGER Andrew Bennett, TEAM Informatics, Inc. Why We Monitor During any software implementation there comes a time where a question is raised
Performance Testing. Slow data transfer rate may be inherent in hardware but can also result from software-related problems, such as:
Performance Testing Definition: Performance Testing Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. This process can involve
XpoLog Center Suite Log Management & Analysis platform
XpoLog Center Suite Log Management & Analysis platform Summary: 1. End to End data management collects and indexes data in any format from any machine / device in the environment. 2. Logs Monitoring -
How to handle Out-of-Memory issue
How to handle Out-of-Memory issue Overview Memory Usage Architecture Memory accumulation 32-bit application memory limitation Common Issues Encountered Too many cameras recording, or bitrate too high Too
Using Redis as a Cache Backend in Magento
Using Redis as a Cache Backend in Magento Written by: Alexey Samorukov Aleksandr Lozhechnik Kirill Morozov Table of Contents PROBLEMS WITH THE TWOLEVELS CACHE BACKEND CONFIRMING THE ISSUE SOLVING THE ISSUE
The Importance of Continuous Integration for Quality Assurance Teams
The Importance of Continuous Integration for Quality Assurance Teams Without proper implementation, a continuous integration system will go from a competitive advantage for a software quality assurance
Maintaining a Microsoft SQL Server 2008 Database
Maintaining a Microsoft SQL Server 2008 Database Course 6231A: Five days; Instructor-Led Introduction Elements of this syllabus are subject to change. This five-day instructor-led course provides students
IMPROVED FAIR SCHEDULING ALGORITHM FOR TASKTRACKER IN HADOOP MAP-REDUCE
IMPROVED FAIR SCHEDULING ALGORITHM FOR TASKTRACKER IN HADOOP MAP-REDUCE Mr. Santhosh S 1, Mr. Hemanth Kumar G 2 1 PG Scholor, 2 Asst. Professor, Dept. Of Computer Science & Engg, NMAMIT, (India) ABSTRACT
Web Application Testing. Web Performance Testing
Web Application Testing Web Performance Testing Objectives of Performance Testing Evaluate runtime compliance to performance requirements Check different properties such as throughput (bits/sec, packets/sec)
