Introduction to Automated Testing

What is Software Testing?
- Examination of a software unit, several integrated software units, or an entire software package by running it.
- Execution is based on test cases; the expectation is to reveal faults as failures.
- Failure: incorrect execution of the system, usually the consequence of a fault.
- Fault/defect/bug: the result of a human error.

Objectives of Testing
- To find defects before they cause a production system to fail.
- To bring the tested software, after correction of the identified defects and retesting, to an acceptable level of quality.
- To perform the required tests efficiently and effectively, within budgetary and scheduling limitations.
- To compile a record of software errors for use in error prevention (by corrective and preventive actions).

Software Testing Process
- Test planning: includes completion criteria (coverage goal).
- Test design: approaches for test case selection to achieve the coverage goal.
- Test implementation: for each test case, determine the input/output data, the state before/after, and the test procedure.
- Test execution: run the tests.
- Results verification: pass or fail? Coverage achieved?
- Test library management: maintain relationships, keep track of test assets, etc.

What is a Test Case?
- A test case is a pair <input, expected outcome>.
- For stateless systems (e.g. a compiler), test cases are very simple: the outcome depends solely on the current input.
- For state-oriented systems (e.g. an ATM), test cases are not that simple: a test case may consist of a sequence of <input, expected outcome> pairs, and each outcome depends both on the current state of the system and the current input (sketched below).
- ATM example: <check balance, $>, <withdraw, amount?>, <$200.00, $>, <check balance, $>
- There are various ways the input may be specified.

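To make the pair notation concrete, here is a minimal sketch in Java (JUnit 5) of a state-oriented test sequence; the Atm class is hypothetical and stands in for the real system under test.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical state-oriented system under test.
class Atm {
    private double balance;
    Atm(double openingBalance) { balance = openingBalance; }
    double checkBalance() { return balance; }
    void withdraw(double amount) { balance -= amount; }
}

class AtmSequenceTest {
    @Test
    void withdrawalChangesSubsequentBalanceChecks() {
        Atm atm = new Atm(500.00);                      // initial state

        // A sequence of <input, expected outcome> pairs: the expected
        // outcome of each step depends on the inputs that preceded it.
        assertEquals(500.00, atm.checkBalance(), 0.001);
        atm.withdraw(200.00);
        assertEquals(300.00, atm.checkBalance(), 0.001);
    }
}
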
Expected Outcome
- The outcome of a program execution may include:
  - a value produced by the program,
  - a state change,
  - a sequence of values which must be interpreted together for the outcome to be valid.
- A test oracle is a mechanism that verifies the correctness of program outputs (a sketch follows):
  - it generates expected results for the test inputs,
  - and compares the expected results with the actual results of executing the implementation under test (IUT).

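As a sketch of the oracle idea (the names here are illustrative, not from the slides): a trusted reference implementation generates the expected result, which is then compared with the actual output of the IUT.

import java.util.function.UnaryOperator;

class OracleDemo {
    // Trusted reference implementation acting as the oracle.
    static final UnaryOperator<String> ORACLE = String::toLowerCase;

    // Implementation under test (IUT); a stand-in for illustration.
    static String iutLowercase(String s) {
        return s.toLowerCase();
    }

    public static void main(String[] args) {
        String input = "HeLLo World";
        String expected = ORACLE.apply(input);   // 1. generate expected result
        String actual = iutLowercase(input);     // 2. run the IUT
        // 3. compare expected and actual results
        System.out.println(expected.equals(actual) ? "pass" : "fail");
    }
}
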
Levels of Testing
- Unit testing: individual program units, such as procedures or methods, in isolation.
- Integration testing: modules are assembled to construct larger subsystems and tested.
- System testing: includes a wide spectrum of testing, such as functionality and load.
- Acceptance testing: the customer's expectations of the system.

Levels of Testing: Regression Testing
- New test cases are not designed.
- Tests are selected, prioritized, and executed.
- Goal: to ensure that nothing is broken in the new version of the software.

When to automate testing? (1)
- The benefits of test automation need to be greater than the (expensive!) costs of automation.
- General rule of thumb: automate when it is expected that tests will have to be run many times, e.g. for:
  - regression testing
  - configuration testing
  - conformance testing
  - an agile development process
  - capacity / stress testing
  - performance measurements

When to automate testing? (2)
- Automated testing is especially beneficial if the tests need to be re-executed quickly:
  - frequent recompiles,
  - a large number of tests,
  - use of an agile development process.
- An automated test can be duplicated to create many instances for capacity / stress testing.

Example: the test-first process in XP

When NOT to automate
- Initial functional testing:
  - automated testing is more likely to find bugs introduced by changes to the code or the execution environment than bugs in new functionality,
  - automated test scripts may not be ready for the first software release.
- Situations requiring human judgment to determine whether the system is functioning correctly.

Types of Testing Tools: Test Planning and Management
- Create/maintain test plans; integrate with the project plan.
- Maintain links to requirements/specifications; generate a requirements test matrix.
- Reports and metrics on test case execution.
- Tracking of the history/status of test cases; defect tracking.

Types of Testing Tools: Test Design & Implementation
- Automatic creation of test cases, based on test design approaches:
  - graph based
  - data flow analysis
  - logic based
  - ...
  (very few concrete usable tools exist)
- Random test data generators.
- Stubs/mocks (see the sketch below).

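As a sketch of the stub idea (the PaymentGateway and OrderService names are invented for illustration): the unit under test is wired to a hand-written stand-in instead of the real external dependency.

// Dependency of the unit under test.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

// Unit under test; it only sees the interface, never the real service.
class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean placeOrder(String account, double total) {
        return gateway.charge(account, total);
    }
}

class StubDemo {
    public static void main(String[] args) {
        // Stub: always approves, so the test fully controls the dependency.
        PaymentGateway alwaysApprove = (account, amount) -> true;
        OrderService service = new OrderService(alwaysApprove);
        System.out.println(service.placeOrder("acct-1", 9.99));  // true
    }
}
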
Types of Testing Tools: Test Execution
- Test drivers and execution frameworks: run test scripts and report results, e.g. JUnit (see the example below).
- Runtime test execution assistance: memory leak checkers, comparators.

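A minimal example of a test run by such a driver, here JUnit 5: the framework discovers the annotated method, executes it, and reports the pass/fail verdict.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CalculatorTest {
    @Test
    void additionWorks() {
        // JUnit records this test as failed if the assertion does not hold.
        assertEquals(4, 2 + 2);
    }
}
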
Types of Testing Tools: Test Performance Assessment
- Analysis of the effectiveness of test cases in terms of the extent of the system covered: coverage analyzers report on various levels of coverage.
- Analysis of the effectiveness of test cases for bug detection: mutation testing (illustrated below).

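To illustrate the mutation-testing idea (the example is ours, not from the slides): a tool seeds a small syntactic fault into the code; if the whole test suite still passes against the mutant, the surviving mutant reveals a gap in the tests.

class AgeCheck {
    // Original code.
    static boolean isAdult(int age) { return age >= 18; }

    // Mutant: the relational operator '>=' replaced by '>'.
    // Only a test using the boundary value age == 18 can "kill" this
    // mutant; if no test fails against it, the suite misses that case.
    static boolean isAdultMutant(int age) { return age > 18; }
}
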
Types of Testing Tools: Specialized Testing
- Security testing tools: password crackers, vulnerability scanners, packet crafters, ...
- Performance / load testing tools: performance monitors, load generators, ...

Types of Test Tools: Capture and Replay
- For user interface testing, one approach to automating tests is, once the system is working, to record the input supplied by the user and capture the system's responses.
- When the next version of the software needs to be tested, play back the recorded user input and check whether the same responses are produced as those stored in the capture file (a replay sketch follows).
- Benefits: a relatively simple approach, easy to do.
- Drawbacks: very difficult to maintain; specific to one environment.

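A minimal sketch of the replay half in Java, assuming a hypothetical capture file already loaded into memory as <input, recorded response> pairs; fakeSut stands in for the real system.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

class ReplayDemo {
    public static void main(String[] args) {
        // Pairs captured during the recording session.
        Map<String, String> recording = new LinkedHashMap<>();
        recording.put("login alice", "OK");
        recording.put("balance", "$500.00");

        UnaryOperator<String> sut = ReplayDemo::fakeSut;  // stand-in SUT

        // Replay each recorded input and compare the live response
        // against the captured one.
        recording.forEach((input, expected) -> {
            String actual = sut.apply(input);
            System.out.println(input + " -> "
                + (expected.equals(actual) ? "pass" : "FAIL, got " + actual));
        });
    }

    static String fakeSut(String input) {
        return input.startsWith("login") ? "OK" : "$500.00";
    }
}
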
Tool support at different levels
- Unit testing: tools such as JUnit.
- Integration testing: stubs, mocks.
- System testing: security, performance, and load testers.
- Regression testing: test management tools (e.g. defect tracking, ...).

What do we need to do automated testing?
- A test script (test case specification):
  - actions to send to the system under test (SUT),
  - responses expected from the SUT,
  - how to determine whether a test was successful or not.
- A test execution system:
  - a mechanism to read the test script and connect the test case to the SUT,
  - directed by a test controller (sketched below).

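A minimal sketch of these pieces in Java; the Step record, the SutConnection interface, and the verdict logic are invented for illustration.

import java.util.List;

class TestController {
    // One line of the test script: an action and its expected response.
    record Step(String action, String expectedResponse) {}

    // The controller's connection to the system under test.
    interface SutConnection {
        String send(String action);
    }

    // Read the script, drive the SUT, and decide the verdict.
    static boolean run(List<Step> script, SutConnection sut) {
        for (Step step : script) {
            String actual = sut.send(step.action());
            if (!step.expectedResponse().equals(actual)) {
                System.out.println("FAIL at: " + step.action());
                return false;
            }
        }
        System.out.println("PASS");
        return true;
    }
}
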
Test Architecture (1)
- Includes defining the set of points of control and observation (PCOs).
(diagram: test controller and test script inside the test execution system, connected through a PCO to the SUT)
- A PCO could be:
  - a particular method to call,
  - a device interface,
  - a network port,
  - etc.

Test Architecture (2)
- The test architecture will affect the test script, because it may be significant which PCO is used for an action or response.
(diagram: one test controller reaching the SUT through a single PCO vs. another reaching the SUT through two PCOs, PCO 1 and PCO 2)

Potential PCOs
- Determining the PCOs of an application can be a challenge. Potential PCOs include:
  - direct method calls (e.g. JUnit)
  - user input / output
  - data file input / output
  - network ports / interfaces
  - Windows registry / configuration files
  - log files
  - temporary files or network ports
  - pipes / shared memory

Potential PCOs (2)
- 3rd-party component interfaces.
- Lookup facilities:
  - network: Domain Name Service (DNS), Lightweight Directory Access Protocol (LDAP), etc.
  - local / server: database lookup, Java Naming and Directory Interface (JNDI), etc.
- Calls to remote methods (e.g. RPC) and to the operating system.
- For the purposes of security testing, all of these PCOs could be a point of attack.

Distributed Test Architecture (1)
- May require several local test controllers and a master test controller.
(diagram: a master test controller coordinating local test controllers, each attached through a PCO to a component of the SUT)

Distributed Test Architecture (2)
Issues with distributed testing:
- establishing connections at PCOs
- synchronization
- where are pass/fail decisions made?
- communication among the test controllers

Choosing a test architecture
(diagram: the user drives a browser via mouse clicks / keyboard; the browser talks to the web server via HTTP / HTML; the web server queries the database via SQL)

Choosing a Test Architecture (2)
- Testing from the user's point of view:
  - need a test tool to simulate mouse events or keyboard input,
  - need to be able to recognize correct web pages,
  - small web page changes might require large changes to the test scripts.
- Testing without the browser:
  - the test script sends HTTP commands to the web server and checks the HTTP messages or HTML pages that are returned (see the sketch below),
  - easier to do, but not quite as realistic.

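A sketch of the browser-less approach using the JDK's built-in HttpClient (Java 11+); the URL and the expected HTML fragment are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class HttpLevelTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/login"))  // placeholder
                .build();

        // Send the HTTP command directly, bypassing the browser.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Check the status code and a fragment of the returned HTML.
        boolean pass = response.statusCode() == 200
                && response.body().contains("<title>Login</title>");
        System.out.println(pass ? "pass" : "fail");
    }
}
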
Test Scripts
What should the format of a test script be?
- tool dependent?
- a standard test language?
- a programming language?

Test Script Development
- Creating test scripts follows a parallel development process, including:
  - requirements creation
  - debugging
  - configuration management
  - maintenance
  - documentation
- Result: test scripts are expensive to create and maintain.

Making the automation decision (1)
- Will the user interface of the application be stable or not?
- To what extent are oracles available?
- To what extent are you looking for delayed-fuse bugs (memory leaks, wild pointers, etc.)?
- Does your management expect to recover its investment in automation within a certain period of time? How long is that period, and how easily can you influence these expectations?
- Are you testing your own company's code or the code of a client? Does the client want (and is the client willing to pay for) reusable test cases, or will it be satisfied with bug reports and status reports?
- Do you expect this product to sell through multiple versions?

Making the automation decision (2)
- Do you anticipate that the product will be stable when released, or do you expect to have to test Release N.01, N.02, N.03, and other bug-fix releases on an urgent basis after shipment?
- Do you anticipate that the product will be translated into other languages? Will it be recompiled or re-linked after translation (do you need to do a full test of the program after translation)? How many translations and localizations?
- Does your organization make several products that can be tested in similar ways? Is there an opportunity to amortize the cost of tool development across several projects?

Making the automation decision (3)
- How varied are the configurations (combinations of operating system version, hardware, and drivers) in your market? To what extent do you need to test compatibility with them?
- What level of source control has been applied to the code under test? To what extent can old, defective code accidentally come back into a build?
- How frequently do you receive new builds of the software?
- Are new builds well tested (integration tests) by the developers before they get to the testers?

Making the automation decision (4)
- To what extent have the programming staff used custom controls?
- How likely is it that the next version of your testing tool will have changes in its command syntax and command set?
- What are the logging/reporting capabilities of your tool? Do you have to build these in?

Making the automation decision (5)
- To what extent does the tool make it easy for you to recover from errors (in the product under test), prepare the product for further testing, and re-synchronize the product and the test (get them operating at the same state in the same program)?
- In general, what kind of functionality will you have to add to the tool to make it usable?
- Is the quality of your product driven primarily by regulatory or liability considerations, or by market forces (competition)?
- Is your organization subject to a legal requirement that test cases be demonstrable?

Making the automation decision (6)
- Will you have to be able to trace test cases back to customer requirements and to show that each requirement has associated test cases?
- Is your company subject to audits or inspections by organizations that prefer to see extensive regression testing?
- If you are doing custom programming, is there a contract that specifies the acceptance tests? Can you automate these and use them as regression tests?
- What are the skills of your current staff?

Making the automation decision (7)
- Do you have to make it possible for non-programmers to create automated test cases?
- To what extent are cooperative programmers available within the programming team to provide automation support, such as event logs, more unique or informative error messages, and hooks for making function calls below the UI level?
- What kinds of tests are really hard in your application? How would automation make these tests easier to conduct?

Suggested reading
- Henk Coetzee, "Best Practices in Software Test Automation" (2005). On line at:
- C. Kaner, "Architectures of Test Automation" (2000). On line at:
- C. Kaner, "Improving the maintainability of automated test suites", Software QA, Vol. 4, No. 4 (1997). On line at:
- J. Bach, "Test automation snake oil", Proceedings of the 14th Int'l Conference on Testing Computer Software (revised 1999). On line at: