Performance Testing: Step On It
Nadine Pelicaen
International Conference On Software Testing, Analysis & Review
November 19-23, 2001, Stockholm, Sweden
Presentation W21, Wednesday 21 November 2001
Wednesday 21 November 2001, W21
Performance Testing: Step On It
Nadine Pelicaen

Nadine Pelicaen is a Test Consultant in structured software testing at ps_testware. She has a university degree in informatics and more than 12 years of experience as a software developer, QA engineer and manager of a software department. Nadine has experience in test assessments and in organising the acceptance testing of large software applications. She has also worked for ps_testware's QSP service and has done research for the competence center of test methodology.
Performance testing, step on it

Nadine Pelicaen
ps_testware N.V.
Tiensesteenweg 343, B-3010 Leuven, Belgium
Tel: +32 (16) 35.93.80
Fax: +32 (16) 35.93.88
Email: Nadine.Pelicaen@pstestware.com
URL: http://www.pstestware.com

Abstract

The quality attribute performance is often overlooked in the design phase of a system, in which case client and server response times can turn out to be below expectations and all kinds of resource problems pop up. It can be critical to know whether your multi-client system is performing within well-defined standards under varying loads. Moreover, how does your application hold up when running under extreme conditions? This paper describes the different types of performance testing and the role they play in risk management. Certain performance testing techniques allow you to detect the bottlenecks in your system: the areas that will break first when put under stress. Some hints and tips are given on how to incorporate these techniques in the overall test approach. We will also discuss how to choose the right tool for performance test automation, for Web sites as well as for client/server systems with a multi-tier architecture. Once you have a tool, its added value is mainly determined by the choice of a representative test environment, which has to reflect a real-life situation.

Copyright 2001 ps_testware
1 INTRODUCTION

It can be business critical to know whether your multi-client system is performing within well-defined standards under varying loads and in extreme conditions. Performance testing helps you determine whether response times and throughput live up to expectations, and lets you detect bottlenecks in the system so that it can keep pace with the user. This paper provides practical guidelines on how to include performance testing in your overall processes, and discusses some significant issues to help you cope with this challenging job. To clarify all aspects, we use the 7 W's: Why, Who, What, Way, When, With and Where. Each of these questions is answered in the subsequent chapters.

2 WHY? THE ROLE OF PERFORMANCE TESTING IN THE OVERALL BUSINESS PROCESS

2.1 General business benefits

Performance testing has to be made part of the overall business process, because the benefits derived from its results can be significant. The first reason is that the software application needs to live up to the expectations of today's end users, who expect error-free and fast operation. Especially in the e-business world, the organization has to reach the required Quality of Service, and systematic performance testing can provide the metrics for this. The second reason is that it is essential to guarantee that the system still behaves in a stable way when projected business growth scenarios come true. When the traffic grows, the budgets will have to be adjusted based on calculations made during the performance test. This way, performance testing supports the business goals.

2.2 Risk management

Especially when deploying mission-critical software with a significant associated revenue, performance testing is a critical part of risk management. First of all, the costs associated with downtime of the software application can be tremendous. Not only the revenue lost during the downtime has to be taken into account; there is also the potential loss of future business. When a user is unable to get the expected service in an efficient way, he'll take his business elsewhere. A second argument to make performance testing part of risk management applies especially to the deployment of Web sites. If a Web site goes down, it's a public event. So the
importance of systematic performance testing in that field is growing, because the company's image is at stake.

Another aspect of managing the performance risk is that the testing should not be a one-time effort just prior to deployment. In that case major design flaws might be discovered late in the development process, taking development back to square one, with heavy fixing costs and delayed deployment as a consequence. To avoid carrying the risk throughout the project, it has to be reduced as early as possible.

All the above arguments have to be considered when calculating the ROI. Because of the complexity of performance testing and the high demand on resources (time, hardware, tool licenses, bandwidth, ...), the associated investment is high. So you only want to go through with it if the cost of possible failure of your application is higher than the estimated cost of the test. It can be looked upon as an insurance policy against unsuccessful deployment. Also take into account that the risk is higher when working with unproven technology and new skills.

2.3 SLAs

In a world where business based on ASP models is getting more and more popular, many companies are bound by Service Level Agreements (SLAs). By setting SLAs, you guarantee that certain actions can be completed within a specific time frame, so it is crucial for the provider to get an idea of the performance and availability he can manage. The provider has to avoid violating the tolerance levels and triggering customer complaints. Not only does determining the original SLAs require performance management; the agreements also have to be maintained, because e-business applications undergo a lot of revisions that may result in performance-related issues. Because the hosting organization has commitments on the level of scalability, it must use automated performance testing to be alerted to potential problems in that field.

3 WHO? THE PEOPLE INVOLVED

In order to succeed in managing the risk using automated performance testing, you will need an interdisciplinary project group. The people directly involved are, dependent of course on the type of application:

- Project leader: This person is essential to coordinate all the efforts, get all the people and resources together, and keep track of the budget.
- IT: Involvement of IT is essential because you will want to test the complete architecture (hardware, OS, software, middleware, database, network, ...) to detect all possible bottlenecks. Furthermore, you need the system administrator not only for his architectural knowledge but also for backup and restore of databases and all kinds of other files that you will probably mess up when executing the tests. A good setup and maintenance of the test environment is necessary if you want to obtain accurate results.
- DBA: Whenever a database is involved, you will need the DBA to tune the settings on the database side and check the implications for performance.
- Developers: People involved with the development of all tiers are needed (client, business logic server, database server, middleware, ...).
- Tool specialist: You will probably be using an automated test tool, whether commercial or developed in-house. You need someone who knows all the details of this tool, because accurate configuration is essential, and creating good, maintainable scripts isn't easy.
- Test expert: Of course, you need the person who created the test plan and the associated test designs for the performance quality attribute.
- Analysts: The graphs, reports and all kinds of statistical data that you retrieve at the end need to be interpreted to draw lessons from them.
- Configuration manager: Close collaboration in this field is vital, because you want to know that what you test is effectively the version that will go into production. Moreover, you might want to put all your test-related data in a version tracking system, because this comes in very handy when doing benchmarking.

Furthermore, you will need to consult:

- Marketing manager: You need input from the marketing department, because you want to simulate expected usage patterns and you want help deciding on the quantity of virtual users that you are going to use. The marketing department might also give you information concerning predictions of future use, oncoming marketing campaigns, ...
- Real users: If you already have them, they can be of great help when scripting their behavior using automated test tools.

So, a lot of different departments are involved. On the bright side, the joint effort might help bridge the gap between development, QA, IT, marketing and other departments. On the other hand, if things do go wrong, a lot of time might be wasted on fingerpointing. To avoid this, it is important to involve as many people with an open mind as possible.

4 WHAT? THE MEASUREMENTS

In order to arrive at the metrics that we want to obtain through performance measurements, it's important that we first get some terminology straight, because there are different types of performance testing, each with its own specific goal and interests.

4.1 Types of performance testing

Performance testing is a loose term, and the different types are distinguished based on the objective of the test. The most commonly used definitions are represented in Table 1. Unfortunately, these terms are often freely interchanged.

- Load testing: Applying a particular load on the system to observe the behavior and the ability of the software to perform as required. This type of testing is useful to provide insight into how the application behaves under expected peak conditions, and to get an idea of the
average response time your users will get in the real world.
- Stress testing: Running the application under extreme conditions to find the breaking point of the system. In this case, for instance, the think-time delays are removed so that transactions are executed as fast as possible, to reflect worst-case scenarios like a sudden increase in concurrent users or peak bursts of activity. This kind of testing is used to check deterioration effects, like connectivity problems, diminished shared resources, ...
- Scalability testing: Applying an increased load to determine whether the application scales gracefully, and what the maximum traffic is that the system can handle in a reliable manner. This type of test is used to determine the usage level limitations, from the point of view of capacity planning.
- Endurance testing: Long-term continuous load testing. The test scripts used for load testing are run for hours or days, in order to detect defects like memory leaks, queuing problems, ...

Table 1: Types of performance testing

Most of the time, you will be doing combinations of all these types of tests, to get as much knowledge from your efforts as possible.

4.2 Measurements and metrics

An overall structured approach and the use of accurate monitoring tools must ensure the scientific value of the measurements taken during the execution of the tests. Measurements must be reported together with the exact load and stress conditions under which they were taken. The following is a list, by no means exhaustive, of possible measures and metrics:

- Throughput (expressed in requests processed per unit of time)
- Average response time
- Length of a scenario (that you scripted)
- Number of concurrent users when degradation starts (a slowdown is noticed) (figure 1)
- Number of concurrent users when degradation is complete (system fails)
- Errors reported (number + type + rate)
- For Web sites: number of hits, number of failed hits, number of page views, page download speed, total elapsed time for page load
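To illustrate how a few of these metrics can be derived from raw data, here is a minimal sketch in Python. It assumes each request was logged as a (start timestamp, duration) pair in seconds; all names are hypothetical and not tied to any particular tool, which would normally compute these figures for you:

```python
# Minimal sketch: deriving throughput and response-time metrics from a
# request log. Assumes one (start_time, duration) pair per request, in
# seconds; all names here are hypothetical, not taken from any real tool.

def performance_metrics(requests):
    """requests: list of (start_time, duration) tuples, in seconds."""
    durations = sorted(d for _, d in requests)
    start = min(s for s, _ in requests)
    end = max(s + d for s, d in requests)
    elapsed = (end - start) or 1.0          # wall-clock span of the run
    n = len(durations)
    return {
        "throughput_per_sec": n / elapsed,
        "avg_response_time": sum(durations) / n,
        "p95_response_time": durations[int(0.95 * (n - 1))],  # nearest rank
        "max_response_time": durations[-1],
    }

def degraded(current, baseline, tolerance=0.20):
    """Flag response-time metrics that are more than `tolerance` worse
    than the baseline registered in an earlier benchmark run."""
    return [key for key in baseline
            if key.endswith("response_time")
            and current[key] > baseline[key] * (1 + tolerance)]
```

However the numbers are computed, each of them is only meaningful when reported together with the exact load conditions under which it was measured.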
Fig 1: Graph showing response time under load [7]

The following things are harder to express in absolute terms but should also be checked:

- Accessibility of the application (e.g. can you still access the menus?)
- Session independence (users should not compromise each other's work)

You can also check the following resource information, if applicable:

- Memory usage
- Average CPU load
- Number of active threads
- Number of database connections
- Connection time
- Number of users in queues (e.g. waiting for database access)
- Network saturation
- Disk utilization
- Bandwidth information (average Kbytes received and sent per time unit)
- DNS lookup time
- HTTP server connection time

Whether these measurements are meaningful for you depends on your system architecture. You should decide which ones are relevant for your project, and try to get those with sufficient detail. What counts, of course, is the analysis of these numbers and the associated diagnosis. For this, you need to register a baseline, so that you can compare performance results before and after an alteration of the software or a system component.

5 WAY? APPROACH FOR WEB SITES AND CLIENT/SERVER

Most of what is said in this whirlwind tour of performance testing is valid both for Web site testing and for the testing of standard multi-tier applications. Both types of architecture have to be tested for scalability, and the capacity constraints have to be defined. However, there are significant differences, which affect the way you should handle the performance testing.
Web site performance engineering (and therefore performance testing) has to take some extra technical issues into account:

- When a security layer is added to transactions, this has a big impact on response times. You have to select carefully which parts need secure requests, because the overhead of encrypting and decrypting data has to be accounted for.
- To get representative numbers, measurements have to be taken at different times during the day, and from different remote locations.
- The delivery of dynamic pages is an order of magnitude slower than the delivery of static pages.

Some other, non-technical factors also play a role in performance testing a Web site:

- Your audience is unpredictable and will have diverse needs. The demands are also higher, because the user will only wait a limited number of seconds before he gives up and goes elsewhere.
- Because of the high visibility when something goes wrong, the risk is considerably higher.
- In contrast to traditional enterprise systems, the transaction volumes are unpredictable, which results in less control over the application. Besides, the Internet itself is unpredictable.

All of the above play a role when testing mission-critical Web sites. Be very careful when choosing an approach for testing your Web site and acquiring an automated testing tool, because a lot of Web site tests are inaccurate and therefore dangerously misleading.

Enterprise client/server systems, on the other hand, can suffer from performance problems because overhead is created when hopping from one architectural layer to another. Distributed processing is complex and creates overhead because of the delays between processes running on different machines. It can also be more difficult to find a suitable tool, especially if you are using more exotic operating systems or protocols.

6 WHEN? INCORPORATION IN THE OVERALL TEST STRATEGY

An overall structured testing approach, into which the performance validation is smoothly integrated, is advised to identify problems. As we will see in this chapter, performance testing should not be treated as an effort that is addressed just prior to deployment.

6.1 Requirements

As early as the gathering of the user requirements, you should be aware of performance issues, so explicit requirements on this subject have to be stated. Requirements should be related to the measurements that will be taken later on: number of users, throughput, response time, ... For instance: "the system must sustain 500 concurrent users with an average search response time below 3 seconds". Effort spent in the requirements phase to put this on paper will be rewarded later.
6.2 Design

The end result is often an application of poor performance, because the quality attribute performance has been overlooked during the design phase. Yet this is the phase in which you can prevent key bottlenecks from coming to life, and thereby win a lot of time when the application is about to be deployed. Scalability is an issue that has to be addressed here. If you make fundamental design flaws, you carry the risk with you throughout the project. At the end of the life cycle, going back to the drawing board is a very costly operation, and it will certainly have a negative impact on the release schedule.

6.3 Development

In all stages of development, performance engineering should be applied, along with ongoing performance analysis and tuning. Be reminded that performance is not something that can be conveniently added afterwards. Allocate enough time in your project schedule to start performance testing early in the development process, because it is the only way to avoid scalability surprises at the end.

6.4 Test preparation

During test preparation, a comprehensive test strategy is advised, in which performance testing is integrated into a coherent, future-proof test plan. The reason is that if you are going to do it by trial and error, you might as well not do it at all. Squeezing the bottlenecks out of the system can only be done by using a structured methodology, and by starting early enough so that you are not threatened by a looming deadline. The largest part of the scalability testing will be done in the system test phase. Of course, it's better if you can start earlier on a prototype, to determine the viability of the system that will be built and, if possible, to try out some of your automated test scripts. Don't focus your test strategy solely on the application under test, but design the tests in such a way that the hardware components involved are stressed as well. We are talking about testing the machines themselves (processors, memory, disks, ...) and the network (routers, firewalls, switches, ...).

6.5 Test execution

The test execution needs an iterative approach, because the issues found have to be fixed and the tests run again, to check whether the solution works and to go hunting for new bottlenecks. Some system tuning or a small source code optimization can sometimes have a serious impact on the measurements. Several iterations will be necessary to obtain something that corresponds to the scalability requirements. In this phase, you will also establish some benchmarks that can be used in the future, to check that changes in the application or the environment do not result in degradation. To compare data for analysis, you will need to put effort into making your tests repeatable.

6.6 Maintenance

The existing tests should be rerun now and then, even when the application is up and running, especially after architectural modifications have been made or new features have
been added to the software application. For this, it is essential to have the following four things:

- A set of automated regression tests, so that repeatability can be guaranteed.
- The test data that goes with them.
- A test environment, whether it is the same one that was used for the performance testing before deployment, or the production environment itself.
- Benchmarks obtained in past runs, to compare your measurements against.

For Web sites, it's even more important that your testing paradigm includes proactive performance validation. You can even use continuous monitoring to ensure that the performance criteria are met, and to keep track of the user and transaction growth.

7 WITH? THE TOOLS

7.1 The basics

When you want to get a grip on the performance of your application, you will need an automated testing tool. Depending on your needs, you might buy a commercial one or develop one yourself. Studies indicate that, in most cases, the return on investment will be sufficiently high. Most load testing tools share the same paradigm: virtual users are generated that mimic the business processes performed by real users. There is a central point of control (the master controller) and several distributed workstations (agents) that drive a large number of virtual client applications. To represent realistic scenarios, the agents are capable of running a number of scripts (which you have to create) that are parameterized to test with different sets of data.
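To make this paradigm concrete, here is a deliberately minimal, home-grown sketch of the idea in Python: a controlling process spawns virtual users, each replaying a parameterized scenario with realistic think times. The target URL and the test data file are invented for illustration; a real tool adds the hard parts (protocol emulation, distribution over agents, synchronization, and measurement) for you:

```python
# Minimal home-grown sketch of the virtual-user paradigm: one controlling
# process spawns threads that each replay a parameterized scenario with
# think times. Standard library only; URL and CSV file are invented examples.
import csv
import random
import threading
import time
import urllib.request

TARGET = "http://testserver.example.com/search?q={term}"   # hypothetical

def load_test_data(path="testdata.csv"):
    """Read script parameters from an external source (see section 7.3)."""
    with open(path, newline="") as f:
        return [row["term"] for row in csv.DictReader(f)]

def virtual_user(user_id, terms, results):
    """One virtual user: runs the scenario and logs (duration, outcome)."""
    for term in terms:
        start = time.time()
        try:
            with urllib.request.urlopen(TARGET.format(term=term)) as response:
                response.read()
            results.append((user_id, term, time.time() - start, "ok"))
        except OSError as err:
            results.append((user_id, term, time.time() - start, str(err)))
        time.sleep(random.uniform(2, 8))    # think time; drop for stress tests

def run(virtual_users=25):
    terms, results = load_test_data(), []
    threads = [threading.Thread(target=virtual_user, args=(i, terms, results))
               for i in range(virtual_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Real tools distribute the virtual users over several agent machines, which matters because the driving machine itself can become the bottleneck, as noted below.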
7.2 Tool selection

Selecting an appropriate tool is not an easy task. Here is some advice on how to go about selecting the right one:

- The abilities of the tool under investigation must be evaluated against the needs of your software application and the system architecture. Check for (in)compatibilities with the technology and the platforms used, but also for the complexity of installation and use.
- In some cases a set of tools will be needed, especially when working with a complex architecture where all aspects of the network must be stressed and one tool cannot cover all of it. However, at least one of the tools must be able to emulate real transactions and not just bomb the wire with traffic.
- A lot of free tools are on the market, especially for Web testing. The danger lurks in the limited functionality that most of them offer. When you are satisfied for the moment with a free tool, but later want to move up to a more extensive commercial tool, you will need to do the scripting all over again.
- Make sure that the tool does not become the bottleneck! When the total number of virtual users becomes too big, the master controller might have difficulty coping with all the data provided by the agents and might become the bottleneck itself.
- Keep in mind that the cost does not only come from the tool licenses: there are also the training costs, the time spent on writing decent scripts, the test platforms, the debugging time needed to resolve the bottlenecks, not to mention the people who are blocked in their jobs while the performance tests are being run.
- Check whether the virtual users consume the same amount of processor time and memory as the application (e.g. a browser) that is being mimicked.
- Check the intrusiveness of the tool (overhead, the effect of running the tool itself on the reported response times, calibration options, cooperation with other tools, ...).
- Also important are the facilities the tool provides for the analysis phase. Does it provide detailed and reliable results in a straightforward way? Does it produce real-time graphs and reports that let you detect the bottlenecks in an easy way?
- Check whether a separate measurement workstation is needed, running a stand-alone performance analyzer or network analyzer.
- When selecting a tool for Web site performance testing (figure 2), make sure that the tool can emulate the browser features: optional caching, the possibility to select the type of browser, ...

Fig 2: Automated performance test of Web sites [6]

You might also come to the conclusion that no commercial tool can be found for your operating system, your protocols, or whatever else makes your architecture special and outside the market targeted by the tool vendors. In that case, you will have to develop custom code and scripts yourself, if this is technically feasible.

7.3 Scripting

The scripts should represent the real world as closely as possible:

- The fundamentals of the script are recorded by manually carrying out the desired scenario. The recorded script is then modified, by parameterization and by adding e.g. timers, synchronization points, parsing functions, error handling, ...
- Delays between transaction executions should be realistic (except in the case of stress testing, when the delays are removed). These delays represent the think time the user needs for his decision-making process.
- Scripts must include all the possible transactions that users can initiate. For Web site visitors, different patterns can be seen: people that e.g. browse, buy, or register. But for enterprise client/server systems, too, categorization is necessary: people that take care of the input, that retrieve the existing information, that start batch jobs, ... This requires a granular breakdown of the user types.
- Scripts should be repeatable and scalable.
- The test data that is used as input for the scripted scenarios should be a mix of small and large volumes. For maintenance purposes, it is best to import this test data from an external source, like a spreadsheet.
- Results should be as accurate as possible. Upfront, unambiguous pass/fail criteria must be defined.

7.4 Analysis

At the end of the test, measurements are consolidated and prepared for analysis. This includes comparing the measurements to the baseline measurements gathered during the benchmarking phase. Graphs and reports will have to be interpreted, as well as error and log files. The main task, however, is to pinpoint the bottlenecks and possible scalability issues, and to fine-tune your system to cope with them. It is crucial to alter only one thing at a time; otherwise you can't know which alteration impacted the performance.

7.5 Integration

In order not to be disappointed by the results of the measurements taken by your newly acquired tool, you need to have clearly defined goals upfront. The test scripts should represent the implementation of the test strategy defined in your test plan, and reflect your logical test designs. The tool is nothing without decent test procedures to go with it.

8 WHERE? THE TEST ENVIRONMENT

Having the right test environment is a fundamental (but also expensive) asset for the success of your performance testing project. The main criterion is that it should portray the real production environment in all aspects. Unfortunately, this expensive reproduction is not always possible, the consequence being that production performance is only determined once the application runs in a stable way in the real production environment. In that case, we will have to be content with emulating the production environment as closely as possible, while making sure that the features of the entire infrastructure can be exercised the same way as in production. One solution, for instance, is to use a scaled-down version of the production system (e.g. 5 instead of 20 Web servers) and extrapolate the results, as sketched below.
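Treat such extrapolation with care, because scaling is rarely linear: shared resources like the database, the network, and the load balancer usually cap real capacity. A back-of-the-envelope sketch, with all numbers invented for illustration:

```python
# Hypothetical illustration of extrapolating from a scaled-down environment.
# Measured: 5 Web servers sustain 400 requests/second before degradation.
measured_servers = 5
measured_throughput = 400.0          # requests/second at degradation point
production_servers = 20

naive_estimate = measured_throughput * production_servers / measured_servers
print(f"naive linear estimate: {naive_estimate:.0f} requests/second")
# -> 1600 requests/second, but only if scaling were perfectly linear.
# Shared resources usually make real capacity lower, so treat the linear
# figure as an optimistic upper bound, not a prediction.
```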
Using the real production environment itself is not a good solution. First of all, you can't afford the downtime caused by breaking the system. Second, you will want to play around with the environment to see the impact of modifications on the measurements. Third, you don't want to share your environment with others: not only might they influence your results, but most likely they will be blocked by your experiments. For Web site testing, it is possible that the test environments carrying the user load will be geographically dispersed.

9 CONCLUSION

Performance testing appears to be a combination of science and art. This magical combination can be crucial for your business, since poor application performance might result in revenue losses and lost opportunities. The right combination of a tool, a test environment, a good test strategy and the right people is needed to accurately predict the system behavior and performance. Beware not to give in to the chaos, but there is one thing you can be sure of: it will never blow up where you expect it!

10 REFERENCES

[1] Getting Started with Rational Suite PerformanceStudio, Rational manual.
[2] "Ensuring the Performance of Hosted E-business Applications", white paper, Mercury Interactive website.
[3] David Goldberg, "Automated performance testing: the importance of planning", ImagoQA, Professional Tester, March 2001.
[4] Alberto Savoia, "The science and art of web site load testing", presentation, International Conference on Software Testing, Analysis and Review 2000, Orlando.
[5] "Design for scalability", IBM High Volume Web Site team, December 1999.
[6] Bill Jaeger, "Minimize Risk with Proactive Performance Testing", E-business Advisor magazine: http://www.advisor.com/articles.nsf/aidp/jaegb01
[7] "Load Testing to Predict Web Performance", white paper, Mercury Interactive: http://www-heva.mercuryinteractive.com/resources/library/whitepapers/load_testing/
Performance testing: step on it
Eurostar 2001
Nadine Pelicaen
ps_testware
Copyright 2001 ps_testware
Agenda
Introduction
Why? Role in overall business process
Who? The people involved
What? The measurements
Way? Web sites <-> client/server
When? Incorporation in test strategy
With? The tools
Where? The test environment
Conclusions
Introduction
Business critical knowledge
Detecting bottlenecks
Who? Why? What? Way? When? With? Where?
Why? Role in overall business process
Why? Role in business process (1/2)
General business benefits
- Live up to expectations » Quality of Service
- Business growth scenarios » Budget adjustments
Risk management
- Costs of downtime
  » Lost revenue
  » Potential loss of future business
Why? Role in business process (2/2)
Risk management
- Web sites: public event
- Carrying the risk through the project
  » Not a one-time effort
  » Back to the drawing board
- ROI
  » Investment is high
  » Calculate the risk
- Service Level Agreements
  » Determining SLAs
  » Maintenance (scalability)
Who? The people involved
Who? The people involved (1/2)
Interdisciplinary team
- Project leader
- IT
- DBA
- Developers
- Tool specialist
- Test expert
- Analyst
- Configuration manager
Who? The people involved (2/2)
Get help from
- Marketing
- Real users
Different departments involved
- Positive: bridging the gap
- Negative: fingerpointing
What? The measurements
What? Terminology
- Load testing: particular load, observe behaviour, expected peak conditions, expected response time
- Stress testing: extreme conditions, breaking points, deterioration effects
- Scalability testing: increased load, capacity planning
- Endurance testing: long-term load testing, memory leaks
What? Measurements (1/2)
Scientific value
Reporting measurements
Things like:
- Throughput
- Response time
- Length of scenario
- Nr of users at the moment degradation starts
- Errors reported
- Nr of hits, failed hits, page views
- Page download speed
What? Measurements (2/2)
Harder to measure:
- Accessibility
- Session independence
Resource information:
- Memory
- CPU
- Database connections
- Network saturation
- Bandwidth information
- Number of users in queues
Way? Web sites vs. client/server
Way? Web sites vs. client/server
Web sites
- Security layer
- Different times and locations
- Dynamic vs. static pages
- Unpredictable audience
- Higher demands
- Higher risk
- Unpredictable transaction volumes
Client/server
- Overhead because of hopping
- Distributed processing
- Exotic OSes and protocols
When? Incorporation in test strategy
When? Incorporation in test strategy (1/2)
Requirements
- Nr of users, transactions, ...
Design
- Quality attributes
- Scalability
- Risk
Development
- Performance engineering
Test preparation
- Coherent test plan in a structured methodology
- Prototyping
- Design to test the whole architecture
When? Incorporation in test strategy (2/2)
Test execution
- Iterative approach
- Small change, big impact
- Establish benchmarks
Maintenance
Needed:
- Automated regression tests
- Test data
- Test environment
- Benchmarks
With? The tools
With? The tools (1/2)
Paradigm:
- Master controller
- Agents running virtual clients through scripts
Tool selection:
- Abilities vs. needs (check for incompatibilities)
- Make sure the tool does not become a bottleneck
- Costs: licenses, training, scripting, ...
- Intrusiveness
- Facilities for analysis phase
- Separate measurement workstation
- For Web site testing: emulate browser features
With? The tools (2/2)
Scripting:
- Recording + parameterization
- Realistic delays
- All possible transactions
- Repeatable and scalable
- Mix of test data
- Unambiguous pass/fail criteria
Analysis:
- Consolidation of measurements
- Comparing to the benchmarks
- Pinpointing bottlenecks
Integration:
- Test plan + test designs
Where? The test environment
Where? The test environment
Portray production environment
- Expensive
- Emulation + extrapolation
Using the real production environment
- Downtime
- Tweak the environment
- Mutual influence
Conclusions
Science and art
Crucial for business
Right mix of tool, environment, test strategy, people
Don't give in to the chaos
Questions
ps_testware
Tiensesteenweg 343, B-3010 Leuven
Tel.: +32 (16) 35.93.80
Fax: +32 (16) 35.93.88
e-mail: info@pstestware.com
http://www.pstestware.com