Topic # 10 Software Testing: Strategies (Ch. 17)

Objectives
1. Software Testing Strategy
2. Unit Testing
3. Integration Testing
4. Validation Testing
5. System Testing
Software Testing Strategy

The goal of software testing is to uncover errors. To achieve this goal, the following strategic approach should be used: a series of four types (levels) of software tests -- 1) unit tests, 2) integration tests, 3) validation tests, and 4) system tests -- should be planned and executed. Software testing accounts for the largest percentage (up to 25-30%) of technical effort in the software engineering process.

Testing -- a systematic, planned activity.
Debugging -- an art (luck, intuition, etc.).

Verification and Validation

Verification -- a process-related term: "Are we building the product right?" (Is the software engineering process correct?)
Validation -- a product-related term: "Are we building the right product?" (Is the software product correct: does it meet all customer requirements?)

Example: a team correctly applied software engineering theory, software design procedures, graphical user interface guidelines, and testing procedures, BUT the final software product is useless for the customer (or unpopular among the vast majority of customers and users). In this case: verification -- YES; validation -- NO.
Components of Software Testing Strategy

1. Unit tests and integration tests -- concentrate on functional verification of a component and incorporation of components into a program structure.
2. Validation tests -- demonstrate traceability to software requirements.
3. System tests -- validate software once it has been incorporated into a larger system (actual environment).

Question: Why do we need so many types of tests?
Answers: Because of the various types of bugs in a software system, and the various consequences of SW bugs.

Bug Categories: 1) variables-related bugs, 2) input/output data-related bugs, 3) coding (syntax)-related bugs, 4) system-related bugs, 5) functionality (functions)-related bugs, 6) design-related bugs, 7) standards-violation-related bugs, 8) documentation-related bugs, etc.

Consequences (Types) of SW Bugs

[Chart: damage (mild, annoying, disturbing, serious, extreme, catastrophic, infectious) plotted against bug type.]

In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that software bugs, or errors, are so prevalent and so harmful that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product.
What Testing Shows (Outcomes)

syntax (coding) errors, logic errors, input data errors, etc.
requirements conformance
performance
quality of final product

1. Unit Testing

Testing begins at the unit (module) level and works outward toward the integration of the entire SW system. Testing is conducted by the developer in the developer's environment (alpha-testing) and by an independent testing group outside the development environment (beta-testing).

Unit testing focuses verification effort on the smallest unit: the software engineer applies test cases to the module to be tested and checks the results. Parallel testing of multiple modules is possible.
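The workflow above (the software engineer feeds test cases into the module under test and checks the results) can be sketched with Python's standard `unittest` framework. The module under test, `add_positive`, is a hypothetical stand-in for any smallest testable unit:

```python
import unittest

# Hypothetical unit under test: the smallest testable piece of the system.
def add_positive(a, b):
    """Return a + b; raise ValueError for negative inputs."""
    if a < 0 or b < 0:
        raise ValueError("inputs must be non-negative")
    return a + b

class AddPositiveTests(unittest.TestCase):
    def test_normal_case(self):
        # An ordinary test case: expected result for typical input.
        self.assertEqual(add_positive(2, 3), 5)

    def test_boundary_zero(self):
        # Boundary condition: the smallest valid input on each side.
        self.assertEqual(add_positive(0, 0), 0)

    def test_error_handling_path(self):
        # Exercise the error-handling path, not just the happy path.
        with self.assertRaises(ValueError):
            add_positive(-1, 2)
```

Run with `python -m unittest <file>`; each method is one test case applied to the unit in isolation.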
Unit Testing (cont.)

For the module to be tested, we should test:
- scope of all declared variables (local, global)
- data types of variables (integer, real, string, char, etc.)
- boundary conditions (what happens to variables when we leave this unit or module)
- all functional tests (to test all required functions inside this unit)
- independent paths inside this unit or module
- all interfaces inside the unit/module
- identify and test all error-handling paths

2. Integration Testing Strategies

The goal of integration testing is to ensure that all interacting software units/modules/subsystems in a system interface correctly with one another to produce the desired results. Furthermore, in trying to attain this goal, integration tests ensure that the introduction of one or more subsystems into the system does not have an adverse effect on existing functionality. An integration test covers the testing of interface points between subsystems. Integration testing is performed once unit testing has been completed for all units contained in the subsystems being tested.

Two main (incremental construction) approaches:
- top-down integration
- bottom-up integration
Top-Down Integration (cont.)

[Diagram: module hierarchy with top module A over subordinate modules B through G.]

The top module is tested with stubs. Stubs are replaced one at a time, "depth first." As new modules are integrated, some subset of tests is re-run.

Bottom-Up Integration (cont.)

[Diagram: the same module hierarchy, built from the bottom up.]

Drivers are replaced one at a time, "depth first." Working modules are grouped into building blocks (clusters) and integrated.
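The stub mechanism above can be sketched in a few lines. Here a hypothetical top-level module calls a subordinate rate-lookup module; during top-down integration the subordinate is first a stub with a canned answer, then replaced by the real module and the same test is re-run. All names and rates are illustrative:

```python
# Top-down integration sketch: the top module is tested first, with a
# not-yet-integrated subordinate replaced by a stub.

def stub_fetch_rate(currency):
    # Stub: stands in for the real rate-lookup module; returns a
    # canned answer, just enough to exercise the caller's interface.
    return 1.0

def top_module(amount, fetch_rate=stub_fetch_rate):
    # Top-level module under test; calls its subordinate only
    # through the agreed interface (a function taking a currency code).
    return amount * fetch_rate("EUR")

# Interface test through the stubbed subordinate.
assert top_module(100) == 100.0

# As integration proceeds "depth first", the stub is replaced by the
# real module and the same subset of tests is re-run.
def real_fetch_rate(currency):
    return 1.4 if currency == "EUR" else 1.0

assert top_module(100, fetch_rate=real_fetch_rate) == 100 * 1.4
```

Bottom-up integration mirrors this: the subordinate modules are real, and temporary *drivers* play the role of the not-yet-written callers.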
3, 4, and Other High-Order Testing

3. Validation testing: focus is on software requirements.
4. System testing: focus is on system integration.

Other specialized testing:
- Alpha/Beta testing: focus is on customer usage.
- Recovery testing: forces the software to fail in a variety of ways and verifies that recovery is properly performed.
- Security testing: verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration.
- Stress testing: executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
- Performance testing: tests the run-time performance of software within the context of an integrated system.

Software Testing: Tools by IBM Company (examples)
Topic # 10 Software Testing: Strategies (Ch. 17): Additional Information

1. Software Testing Standards and Procedures
http://it.toolbox.com/blogs/enterprise-solutions/sample-software-testing-standards-and-procedures-12772
2. Unit Testing
http://mauriziostorani.wordpress.com/2008/07/09/unit-testing-examples-concepts-and-frameworks/
3. Integration Testing
http://www.exforsys.com/tutorials/testing/integration-testing-whywhathow.html
Topic # 10: Software Testing: SW Testing Techniques (Ch. 18)

Objectives
1. Software Testing Principles
2. Black-Box Testing Technique
3. White-Box Testing Technique
SW Testing Principles

1. All tests should be traceable to customer requirements. *) The most severe problems are those that cause the program to fail to meet customer requirements.
2. Tests should be planned long before the actual testing process begins. *) Think about testing as soon as customer requirements are completed.
3. The Pareto principle applies to SW testing. *) 80% of all errors uncovered during testing will likely be traceable to 20% of all program components; in other words, 20% of the components generate 80% of the errors.
4. Testing should begin "in the small" (modules) and progress toward testing "in the large" (subsystems and systems). *) Think about different testing procedures and content (tests) for each SW level -- system, subsystem, unit, component -- as soon as requirements are completed.
5. Exhaustive testing is NOT possible. *) The number of path permutations even in a small program is exceptionally large if the required reliability is greater than 0.9.
6. To be most effective, testing should be conducted by both a developer (alpha-testing) and an independent tester (beta-testing).

Who Tests the Software?

Alpha-testing: the SW developer, who understands the system, is driven by delivery, and will test "gently."
Beta-testing: an independent tester, who must learn about the system, is driven by quality, and will attempt to test as much as possible -- even break the software system.
Software Testing Techniques

Two main techniques (as distinct from the strategies above): the white-box technique and the black-box technique.

White-Box Testing Technique

The goal of white-box (inside-the-module) testing is to 1) exercise all program logic paths within a module at least once, 2) check all loop execution constraints on both TRUE and FALSE (YES/NO) sides, and 3) check all internal data structure boundaries to ensure their validity. It focuses on the program control structure within a single module... our goal is to ensure that all statements and conditions have been executed at least once... It is a very time- and effort-consuming testing method.
White-Box Testing Mechanics

Input data -> unit/module (code) -> output data, within an environment.

Example: currency (EURO to DOLLAR) converter unit/module with amount-based rates (1-999 Euros -- 1.4; 1000-9999 -- 1.45; >10000 -- 1.45).
Input: amount (in EUROs).
Output: amount in U.S. DOLLARS (with a rate based on the amount of EUROs).
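A white-box test suite for the converter above would pick one test case per logic path, plus boundary values on each side of every branch condition. This sketch follows the rates exactly as printed on the slide (the two upper tiers both use 1.45); treating an amount of exactly 10000 as part of the top tier is an assumption, since the slide leaves that boundary ambiguous:

```python
# Hypothetical implementation of the slide's EURO-to-DOLLAR converter,
# written so each rate tier is a distinct logic path.
def euro_to_dollar(euros):
    if euros < 1:
        raise ValueError("amount must be at least 1 EURO")
    if euros <= 999:
        rate = 1.4       # tier 1: 1-999
    elif euros <= 9999:
        rate = 1.45      # tier 2: 1000-9999
    else:
        rate = 1.45      # tier 3: >= 10000 (same rate, as printed)
    return euros * rate

# White-box test cases: every path exercised at least once,
# with boundary values on both sides of each branch condition.
assert euro_to_dollar(1) == 1 * 1.4          # lower boundary, tier 1
assert euro_to_dollar(999) == 999 * 1.4      # upper boundary, tier 1
assert euro_to_dollar(1000) == 1000 * 1.45   # lower boundary, tier 2
assert euro_to_dollar(9999) == 9999 * 1.45   # upper boundary, tier 2
assert euro_to_dollar(10000) == 10000 * 1.45 # tier 3 path

# Error-handling path must be exercised too.
try:
    euro_to_dollar(0)
    assert False, "expected ValueError"
except ValueError:
    pass
```

Note how even this tiny unit needs seven cases for full branch and boundary coverage, which is why white-box testing is described above as time- and effort-consuming.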
Exhaustive Testing vs. Selective Testing

Selective testing applies the Pareto principle: 80% of all errors uncovered during testing will likely be traceable to 20% of all components; in other words, 20% of the components generate 80% of the errors. How to pick that 20%? Knowledge? Experience! Intuition!

Topic # 10 Software Testing: SW Testing Techniques -- In-Classroom Exercise # 1
Black-Box Testing Technique

Black-box testing focuses on the functional requirements of the software (inputs, outputs, events) without regard to the internal workings of a module or program. It is not an alternative to white-box testing; it is a complementary approach. White-box and black-box techniques uncover different classes (types) of errors.

Black-Box Testing Mechanics

Input data (of specified data types and structures) and events -> unit/module (with code) -> output data (of specified data types and structures), within an environment.

Example: Webster system.
Input: alphanumeric UserName and Password.
Output: a) BUID number (integer data type), b) CORRECT (Yes, Enter) / INCORRECT verdict (Boolean data type).
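A black-box test for a Webster-style login checks only the specified interface: alphanumeric credentials in, an integer BUID and a Boolean verdict out. The `authenticate` function below is a hypothetical stand-in for the real Webster module (its internals, including the credential store, are invented and deliberately irrelevant to the tests):

```python
# Hypothetical stand-in for the Webster login module; in black-box
# testing its internals are treated as opaque.
def authenticate(username, password):
    valid = {("jsmith", "pass123"): 100042}  # illustrative credential store
    buid = valid.get((username, password))
    return (buid, True) if buid is not None else (None, False)

# Black-box test cases are derived from the interface spec alone.
buid, ok = authenticate("jsmith", "pass123")
assert ok is True                  # CORRECT verdict is a Boolean
assert isinstance(buid, int)       # BUID is an integer, per the spec

buid, ok = authenticate("jsmith", "wrong")
assert ok is False and buid is None  # INCORRECT: verdict only, no BUID
```

The same tests would pass unchanged if the module's internal logic were completely rewritten, which is exactly the point of the technique.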
Topic # 10 Software Testing: SW Testing Techniques -- In-Classroom Exercise # 2

Software Testing Techniques: Examples in the Software Development Industry
http://research.microsoft.com/en-us/projects/pex/
http://www.ibm.com/developerworks/rational/library/1147.html
Topic # 10 Software Testing: SW Testing Techniques -- In-Classroom Exercise # 3

Topic # 10 Software Testing: SW Testing Techniques -- Homework Assignment
Topic # 10 Software Testing: SW Testing Techniques -- Additional Information

Characteristics of SW Testability
1. Operability -- the software operates correctly (no bugs).
2. Observability -- the results of each test case are readily observed ("what you see is what you test").
3. Controllability -- the degree to which testing can be automated and optimized.
4. Decomposability -- testing can be targeted (inside independent modules).
5. Simplicity -- reduce complex architecture and logic to simplify tests.
6. Stability -- few changes are requested during testing.
7. Understandability -- of the design (the more information we have, the smarter we will test).

*) There are still Ph.D. dissertations on how to automatically generate tests for SW systems.
Other Black-Box Techniques
1. error-guessing methods
2. decision table techniques
3. cause-effect graphing

Boundary Value Analysis (BVA)

[Diagram: inputs (user queries, mouse picks, prompts, data) map from the input domain to output formats in the output domain.]

A great number of errors tends to occur at the boundaries of the input domain rather than in its center.
Equivalence Partitioning

[Diagram: the same inputs (user queries, mouse picks, prompts, data) and output formats as in the BVA figure.]

This black-box technique divides the input domain of a program into classes of data from which test cases can be derived.

Sample Equivalence Classes

Valid data:
- user-supplied commands
- responses to system prompts
- file names
- computational data: physical parameters, bounding values, initiation values
- output data formatting
- responses to error messages
- graphical data (e.g., mouse picks)

Invalid data:
- data outside the bounds of the program
- physically impossible data
- proper value supplied in the wrong place
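The idea behind these classes is that one representative input per partition stands in for the whole class, so exhaustive testing is unnecessary. A minimal sketch, assuming a hypothetical age-validation unit with one valid class and two invalid classes:

```python
# Hypothetical unit under test: accepts integer ages from 0 to 120.
def valid_age(age):
    return isinstance(age, int) and 0 <= age <= 120

# One representative test case per equivalence class; if the unit
# handles the representative correctly, it is assumed to handle
# every other member of that class the same way.
partitions = {
    "valid class (integers 0..120)":     (35, True),
    "invalid class (out-of-bounds data)": (200, False),
    "invalid class (wrong data type)":    ("abc", False),
}

for name, (representative, want) in partitions.items():
    assert valid_age(representative) is want, name
```

Three cases cover the whole input domain at the class level; equivalence partitioning is usually combined with BVA, which then adds the edge values (here -1, 0, 120, 121) within and around the valid class.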
Software Testing

Testing is the process of exercising code or a program with the specific intent of finding errors prior to delivery to the end user. Testing is the one step in the software process that could be viewed (psychologically, at least) as destructive rather than constructive.