Requirements Based Testing Process Overview
2009 Bender RBT Inc. 17 Cardinale Lane, Queensbury, NY
The requirements-based testing (RBT) process addresses two major issues: first, validating that the requirements are correct, complete, unambiguous, and logically consistent; and second, designing a necessary and sufficient (from a black box perspective) set of test cases from those requirements to ensure that the design and code fully meet those requirements. In designing tests, two issues need to be overcome: reducing the immensely large number of potential tests down to a reasonably sized set, and ensuring that the tests get the right answer for the right reason.

The RBT process does not assume, going in, that we will see good Requirements Specifications. That is very rarely the case. The RBT process will drive out ambiguity and drive down the level of detail. The overall RBT strategy is to integrate testing throughout the development life cycle and focus on the quality of the Requirements Specification. This leads to early defect detection, which has been shown to be much less expensive than finding defects during integration testing or later. The RBT process also focuses on defect prevention, not just defect detection.

To put the RBT process into perspective, testing can be divided into the following eight activities:

1. Define Test Completion Criteria. The test effort has specific, quantitative and qualitative goals. Testing is completed only when the goals have been reached (e.g., testing is complete when all functional variations, fully sensitized for the detection of defects, and 100% of all statements and branch vectors have executed successfully in a single run or set of runs with no code changes in between).

2. Design Test Cases. Logical test cases are defined by five characteristics: the initial state of the system prior to executing the test, the data in the database, the inputs, the expected outputs, and the final system state.

3. Build Test Cases. There are two parts to building test cases from logical test cases: creating the necessary data, and building the components to support testing (e.g., building the navigation to get to the portion of the program being tested).

4. Execute Tests. Execute the test-case steps against the system being tested and document the results.

5. Verify Test Results. Verify that the test results are as expected.

6. Verify Test Coverage. Track the amount of functional coverage and code coverage achieved by the successful execution of the set of tests.

7. Manage and Track Defects. Any defects detected during the testing process are tracked to resolution. Statistics are maintained concerning the overall defect trends and status.

8. Manage the Test Library. The test manager maintains the relationships between the test cases and the programs being tested, keeps track of which tests have or have not been executed, and whether the executed tests passed or failed.
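The five characteristics of a logical test case in Step 2 can be sketched as a simple record. The following is a minimal illustration in Python; the field names and the order-entry example are my own, not taken from the RBT tooling:

```python
from dataclasses import dataclass

@dataclass
class LogicalTestCase:
    """One logical test case, per the five characteristics in Step 2."""
    initial_state: str       # state of the system before the test runs
    database_data: dict      # data that must be present in the database
    inputs: dict             # values entered during the test
    expected_outputs: dict   # pre-calculated results to compare against
    final_state: str         # state of the system after the test completes

# A hypothetical logical test case for an order-entry rule (illustrative only)
tc = LogicalTestCase(
    initial_state="logged in, empty cart",
    database_data={"item_42_stock": 5},
    inputs={"item": 42, "quantity": 2},
    expected_outputs={"order_accepted": True, "remaining_stock": 3},
    final_state="order confirmed",
)
```

Note that the expected outputs are pre-calculated and stored with the test case, which is what makes Step 5 (Verify Test Results) a mechanical comparison rather than a judgment call.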
All of the test tools and test techniques fit into one or more of these categories, which makes this a convenient way of looking at the problem. The RBT process focuses on Steps 1, 2, and the black box portion of 6. Typically, capture/playback tools address Steps 3, 4, and 5. The RBT process complements them very well, as will be discussed later.

The Business Case for RBT

There are numerous economic reasons to improve the quality of your software. These are addressed in some detail in the Bender-Business Case For Software Quality document. For this discussion, there are a few we will focus on: reducing the costs to detect and remediate defects, reducing the time it takes to deliver the software, and improving the probability of successfully installing the right solution.

Relative Cost to Fix an Error

The reason for integrating testing earlier into the life cycle is simple economics. Studies have shown two things: first, that the majority of defects have their root cause in poorly defined requirements (see Figure 1); and second, that the cost of fixing an error is lower the earlier the error is found. The issue is scrap and rework. If a defect was introduced while coding, you just fix the code and re-compile. However, if a defect has its roots in poor requirements and is not discovered until integration testing, then you must re-do the requirements, re-do the design, re-do the code, re-do the tests, re-do the user documentation, and re-do the training materials. It is all this re-do work that sends projects over budget and over schedule. Let's say that a defect introduced during the requirements phase is found during the requirements phase. Define the cost of finding that defect as 1X. If that same defect is not found until integration testing or production, it will cost hundreds or even thousands of times more (see Figure 2).
Figure 1: Distribution of Defects (James Martin)

Distribution of Bugs: Requirements 56%, Design 27%, Code 7%, Other 10%
Distribution of Effort To Fix Bugs: Requirements 82%, Design 13%, Code 1%, Other 4%
Figure 2: Relative Cost to Fix an Error (IBM, GTE, et al.)

Phase in Which Found         Cost Ratio
Requirements                 1
Design                       3-6
Coding                       10
Unit/Integration Testing
System/Acceptance Testing
Production

Why Good Requirements Are Critical

A study by the Standish Group showed that American companies spend about $300 billion per year on software development and maintenance. Of that, $84 billion was spent on cancelled software projects which delivered nothing. Another $192 billion was spent on software projects that so exceeded their time and budget estimates as to eliminate any positive return on investment. The Standish Group and other studies show there are three top reasons why software projects fail:
- Requirements and specifications are incomplete.
- Requirements and specifications change too often.
- There is a lack of user input (to requirements).

The RBT process addresses each of these issues:
- It begins at the first phase of software development, where the correction of errors is the least costly.
- It begins at the requirements phase, where the largest portion of defects have their root cause.
- It addresses improving the quality of requirements: inadequate requirements are often the reason for failing projects.

A Good Test Process

The characteristics of a good test process are as follows:

Testing must be timely. Any system without a tight feedback loop is a fatally flawed system. Testing is that feedback loop for the software development process. Therefore, it must begin in the initial phase of the project: when objectives are being defined and the scope of the requirements is first drafted. In this way, testing is not perceived as a bottleneck operation. Test early, test often.
Testing must be effective. The approach to test-case design must have rigor to it. Testing should not rely solely on individual skills and experience. Instead, it should be based on a repeatable test process that produces the same test cases for a given situation, regardless of the tester involved. The test-case design approach must provide high functional coverage of the requirements.

Testing must be efficient. Testing activities must be heavily automated to allow them to be executed quickly. The test-case design approach should produce the minimum number of test cases, to reduce the amount of time needed to execute the tests and to manage them.

Testing must be manageable. The test process must provide sufficient metrics to quantitatively identify the status of testing at any time. The results of the test effort must be predictable (i.e., the outcome each time a test is successfully executed must be the same).

Testing and the Software Development Life Cycle

In most software development life cycles, the bulk of testing occurs only when code becomes available (see Figure 3). With the RBT process, testing is integrated throughout the life cycle for all of the reasons stated above (see Figure 4). [Note: In our life cycle we actually use an iterative approach, not a waterfall approach, but it would have made the diagrams needlessly complicated to fully represent this.]
Figure 3 - Standard Development Life Cycle (Requirements, Design, Code, Test, Write User Manual, Write Training Materials, Delivered System)

Figure 4 - Life Cycle With Integrated Testing (testing of the requirements, design, and code proceeds in parallel with writing the user manuals and training materials, leading to the Delivered System)
The RBT Process

The RBT methodology is a 12-step process. [That it is a 12-step program is kind of appropriate for the dysfunctional world of software.] Each of these steps is described below.

1) Validate requirements against objectives
2) Apply scenarios against requirements
3) Perform initial ambiguity review
4) Perform domain expert reviews
5) Create cause-effect graph
6) Logical consistency check by BenderRBT
7) Review of test cases by specification writers
8) Review of test cases by users
9) Review of test cases by developers
10) Walk test cases through design
11) Walk test cases through code
12) Execute test cases against code

1. Validate requirements against objectives. Compare the objectives, which describe why the project is being initiated, to the requirements, which describe what is to be delivered. The objectives define the success criteria for the project. If the "what" does not match the "why", then the objectives cannot be met, and the project will not succeed. If any of the requirements do not achieve the objectives, then they do not belong in the project scope.

2. Apply scenarios against requirements. A scenario is a task-oriented user's view of the system. The individual requirements, taken together, must be capable of satisfying any scenario; otherwise the requirements are incomplete.

3. Perform an initial ambiguity review. An ambiguity review is a technique for identifying and eliminating ambiguous words, phrases, and constructs. It is not a review of the content of the requirements. The ambiguity review produces a higher-quality set of requirements for review by the rest of the project team.

4. Perform domain expert reviews. The domain experts review the requirements for correctness and completeness.

5. Create Cause-Effect Graph.
The requirements are translated into a Cause-Effect Graph, which provides the following benefits:
- It resolves any problems with aliases (i.e., using different terms for the same cause or effect).
- It clarifies the precedence rules among the requirements (i.e., what causes are required to satisfy what effects).
- It clarifies implicit information, making it explicit and understandable to all members of the project team.
- It begins the process of integration testing. The code modules eventually must integrate with each other. If the requirements that describe these modules cannot integrate, then the code modules cannot be expected to integrate. The cause-effect graph shows the integration of the causes and effects.

Cause-Effect Graphs may look intimidating on the surface. They would appear to require someone with a formal logic background or electrical engineering background to create them. However, all you need to know to build them is the definition of AND, OR, and NOT. If you are in the software profession and unsure of these three terms, then it is time for a career change.

6. Logical consistency checks performed and test cases designed by the BenderRBT tool. The tool identifies any logic errors in the cause-effect graph. The output from the tool is a set of test cases that are 100 percent equivalent to the functionality in the requirements.

7. Review of test cases by requirements authors. The designed test cases are reviewed by the requirements authors. If there is a problem with a test case, the requirements associated with the test case can be corrected and the test cases redesigned.

8. Validate test cases with the users/domain experts. If there is a problem with a test case, the requirements associated with it can be corrected and the test case redesigned. The users/domain experts also obtain a better understanding of what the delivered system will be like. From a Capability Maturity Model Integration (CMMI) perspective, you are validating that you are building the right system.

9. Review of test cases by developers. The test cases are also reviewed by the developers. By doing so, the developers understand what they are going to be tested on, and obtain a better understanding of what they are to deliver so they can deliver for success.

10. Use test cases in design review. The test cases restate the requirements as a series of causes and effects.
As a result, the test cases can be used to validate that the design is robust enough to satisfy the requirements. If the design cannot meet the requirements, then either the requirements are infeasible or the design needs rework.

11. Use test cases in code review. Each code module must deliver a portion of the requirements. The test cases can be used to validate that each code module delivers what is expected.

12. Verify code against the test cases derived from requirements. The final step is to build test cases from the logical test cases that have been designed, by adding data and navigation to them, and to execute them against the code to compare the actual behavior to the expected behavior. Once all of the test cases execute successfully against the code, it can be said that 100 percent of the functionality has been verified and the code is ready to be delivered into production. From a CMMI perspective, you have verified that you are building the system right.

Characteristics of Testable Requirements

In order for the requirements to be considered testable, they ideally should have all of the following characteristics:
1. Deterministic: Given an initial system state and a set of inputs, you must be able to predict exactly what the outputs will be.
2. Unambiguous: All project members must get the same meaning from the requirements; otherwise they are ambiguous.
3. Correct: The relationships between causes and effects are described correctly.
4. Complete: All requirements are included. There are no omissions.
5. Non-redundant: Just as the data model provides a non-redundant set of data, the requirements should provide a non-redundant set of functions and events.
6. Lends itself to change control: Requirements, like all other deliverables of a project, should be placed under change control.
7. Traceable: Requirements must be traceable to each other, to the objectives, to the design, to the test cases, and to the code.
8. Readable by all project team members: The project stakeholders, including the users, developers, and testers, must each arrive at the same understanding of the requirements.
9. Written in a consistent style: Requirements should be written in a consistent style to make them easier to understand.
10. Processing rules reflect consistent standards: Processing rules should be written in a consistent manner to make them easier to understand.
11. Explicit: Requirements must never be implied.
12. Logically consistent: There should be no logic errors in the relationships between causes and effects.
13. Lends itself to reusability: Good requirements can be reused on future projects.
14. Terse: Requirements should be written in a brief manner, with as few words as possible.
15. Annotated for criticality: Not all requirements are critical. Each requirement should note the degree of impact a defect in it would have on production. In this way, the priority of each requirement can be determined, and the proper amount of emphasis placed on developing and testing each requirement. We do not need zero defects in most systems. We need good enough testing.
16.
Feasible: If the software design is not capable of delivering the requirements, then the requirements are not feasible.

The key characteristics in the above list are Deterministic and Unambiguous. Testing, by definition, is comparing the expected results to the observed results. This is not unique to software. When you were in school and took a math test, the instructor did not just look to see if your answer was reasonable. They compared your answer to their pre-calculated answer. If your answer was different, it was wrong. Software testing is supposed to have that characteristic. Unfortunately, you almost never find Requirements Specifications written in sufficient detail to be able to determine the expected outcome. In the hundreds of projects that my staff and I have been involved in over the last twenty years, we have seen only two good specifications going into the project. As stated above, the RBT process drives out ambiguity and drives down the level of detail in the specifications.
Ambiguity Review Checklist

The Ambiguity Review Checklist is made up of 15 classes of ambiguities that are commonly found in requirements. It is used to manually review requirements and identify all of the ambiguities so they can be eliminated. The checklist includes the following types of ambiguities:
- The dangling else
- Ambiguity of reference
- Scope of action
- Omissions
- Ambiguous logical operators
- Negation
- Ambiguous statements
- Random organization
- Built-in assumptions
- Ambiguous precedence relationships
- Implicit cases
- Etc.
- I.e. versus e.g.
- Temporal ambiguity
- Boundary ambiguity

Note: For a more complete description of the ambiguity review process and checklist, see the Bender-Ambiguity Review White Paper.

Savings Via Ambiguity Reviews

A study by Bender RBT Inc. showed that, if defects are found in the requirements phase using ambiguity reviews, the cost of finding each of these defects is only about $25 to $40 per severity 1 or severity 2 defect. This was calculated using a burden rate (employee salary, benefits, etc.) of $150K/year. Compare this figure to the results of a study by the Software Engineering Institute, which showed that if defects are found in integration/system test, the cost of finding each of these defects is between $750 and $3,000. Studies by Hewlett-Packard and IBM showed that if defects are found in production, the cost of finding each of these defects is between $10,000 and $140,000. Our experience has shown that if these ambiguities are not dealt with in the Requirements Definition Phase, nearly 100% of them will show up as defects in the code.

The Test Case Design Challenge

One of the major challenges in test case design is this simple fact: any system, except the most trivial ones, has more possible combinations and permutations of the inputs/states than there are molecules in the universe (according to Stephen Hawking in
"A Brief History of Time"). The key challenge, then, is to select an infinitesimally small subset of tests which, if they run correctly, give you a very high degree of assurance that all of the other combinations/permutations will also run correctly. The software industry has done a very poor job of meeting this challenge. The result is that software still averages worse than 5 defects per 1,000 executable lines of code. On the other hand, those producing high-end integrated circuits (e.g., Intel, Motorola) achieve defect rates of less than 1 defect per million gates. Even though high-end chips now exceed 40 million gates, you rarely hear about a logic error in the released products. There is no fundamental difference between testing rules in software and testing rules in hardware: rules are rules are rules. The difference in quality is solely due to the difference in the development and testing approaches. Hardware engineers use disciplined, rigorous approaches, while software is still essentially a "craftsman" endeavor. Software is now far too complex and critical to continue in this manner.

Having created the Cause-Effect Graph, which can be done from any style of specification (e.g., use cases, inheritance diagrams, finite state diagrams, narratives, decision tables), the logic rules are now in a form that looks like a circuit diagram. We can then apply the same type of rules to test design that the hardware industry has used for years. These are the path-sensitizing algorithms for hardware logic testing. We have extended the rules in them to cover software-specific issues. The basic issue is ensuring that you got the right answer for the right reason. If there is any defect in the code, it must show up at an observable point. Given the long path lengths in most code today, this is a non-trivial challenge. Let's look at what these algorithms do for us in designing tests, using two examples.
Figure 5 shows a simple application rule that states that if you have A or B or C, you should produce D. The test variations to test are shown in Figure 6. The dash just means that the variable is false. For example, the first variation is A true, B false, and C false, which should result in D true. Each type of logical operator (simple, and, or, nand, nor, xor, and not) has a well-defined set of variations to be tested. The number is always n+1, where n is the number of inputs to the relation statement. In the case of the or, you take each variable true by itself with all the other inputs false, and then take the all-false case. You do not need the various two-true-at-a-time cases, or three true at a time, etc. These turn out to be mathematically meaningless from a black box perspective. The test variations for each logical operator are then combined with those for other operators into test cases, to test as much function in as small a number of tests as possible.

Let us assume that there are two defects in the code that implements our "A or B or C gives us D" rule. No matter what data you give it, it thinks A is always false and B is always true. There is no Geneva Convention for software that limits us to one defect per function.
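The n+1 rule for the or operator can be sketched directly. The following short Python function is my own illustration of the rule, not the BenderRBT algorithm itself:

```python
def or_test_variations(n):
    """Return the n+1 test variations for an n-input 'or' relation:
    each input true by itself, plus the all-false case."""
    variations = []
    for i in range(n):
        inputs = [j == i for j in range(n)]   # one input true at a time
        variations.append((inputs, True))     # expected output: true
    variations.append(([False] * n, False))   # all inputs false -> false
    return variations

# For the 3-input rule "A or B or C gives D" this yields the 4 variations
# shown in Figure 6:
for inputs, expected in or_test_variations(3):
    print(inputs, "->", expected)
```

Note that the two-true and three-true combinations never appear: as the text states, they add nothing from a black box perspective for an inclusive or.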
Figure 5 - Simple "OR" Function With Two Defects

Figure 6 - Required Test Cases For The "OR" Function

Figure 7 shows the results of running the tests. When we run test variation 1, the software says A is not true, it is false. However, it also says B is not false, it is true. The result is that we get the right answer for the wrong reason. When we run the second test variation, we enter B true, which the software always thinks is the case, and we get the right answer. When we enter the third variation with just C true, the software thinks both B and C are true. Since this is an inclusive or, we still get the right answer. We are now reporting to management that we are three quarters done with our testing and everything is looking great. Only one more test to run and we are ready for production. However, when we enter the fourth test with all inputs false and still get D true, we know we have a problem.

Figure 7 - Variable "B" Stuck True Defect Found By Test Case 4
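The stuck-at scenario of Figures 5 through 7 can be simulated in a few lines. This is my own sketch of the defective rule, not code from the RBT tooling:

```python
def faulty_or(a, b, c):
    """Defective implementation of 'A or B or C gives D':
    it treats A as always false and B as always true."""
    a = False   # defect 1: A stuck false
    b = True    # defect 2: B stuck true
    return a or b or c

# The four required test variations for a 3-input 'or' (Figure 6)
variations = [
    ((True,  False, False), True),   # 1: A alone
    ((False, True,  False), True),   # 2: B alone
    ((False, False, True),  True),   # 3: C alone
    ((False, False, False), False),  # 4: all false
]

for i, (inputs, expected) in enumerate(variations, start=1):
    actual = faulty_or(*inputs)
    verdict = "pass" if actual == expected else "FAIL"
    print(f"variation {i}: expected {expected}, got {actual} -> {verdict}")
# Variations 1-3 pass (right answer, wrong reason); only variation 4 fails,
# because the B-stuck-true defect forces D true even with all inputs false.
```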
There are two key things about this example so far. The first is that software, even when it is riddled with defects, will still produce correct results for many of the tests. The second is that if you do not pre-calculate the answer you are expecting and compare it to the answer you got, you are not really testing. Sadly, the majority of what purports to be testing in our industry does not meet these criteria. People look at the test results and just see if they look reasonable. Part of the problem is that the specifications are not in sufficient detail to meet the most basic definition of testing.

When test variation four failed, it led to identifying the B-stuck-true defect. The code is fixed and test variation four, the only one that failed, is rerun. It now gives the correct results. This meets the common test completion criteria that every test has run correctly at least once and no severe defects are unresolved. The code is shipped into production. However, if you rerun test variation one, it now fails (see Figure 8). The A-stuck-false defect was not caused by fixing the B defect; when the B defect is fixed, you can now see the A defect. When any defect is detected, all of the related tests must be rerun.

Figure 8 - Variable "A" Stuck False Defect Not Found Until Variable "B" Defect Fixed

The above example addresses the issue that two or more defects can sometimes cancel each other out, giving the right answers for the wrong reasons. The problem is worse than that: the issue of observability must be taken into account. When you run a test, how do you know it worked? You look at the outputs. For most systems these are updates to the databases, data on screens, data on reports, and data in communications packets. These are all externally observable. In Figure 9, let us assume that node G is the observable output. C and F are not externally observable. We will indirectly deduce that the A, B, C function worked by looking at G.
We will likewise indirectly deduce that the D, E, F function worked by looking at G. Let us further assume there is a defect at A, where the code always assumes that A is false no matter what the input is. A fairly obvious test case would be to have all of the inputs set to true. This should result in C, F, and G being set to true. When this test is entered, the software says A is not true, it is false. Therefore, C is not set to the expected true value but is set to false. However, when we get to G, it is still true, as we expected, because the D, E, F leg worked. In this case we did not see the defect at C because it was hidden by the F leg working correctly.
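The masking effect described for Figure 9 can be sketched as follows. The specific relations (C = A and B, F = D and E, G = C or F) are my assumption for illustration, since the source does not spell out the gate types:

```python
def evaluate(a, b, d, e, *, a_stuck_false=False):
    """Evaluate the assumed Figure 9 network; only G is observable."""
    if a_stuck_false:
        a = False        # the defect: code treats A as always false
    c = a and b          # intermediate node, not externally observable
    f = d and e          # intermediate node, not externally observable
    g = c or f           # the externally observable output
    return c, f, g

# All-inputs-true test: C should be true, but the defect forces it false.
c, f, g = evaluate(True, True, True, True, a_stuck_false=True)
print(c, f, g)
# C is wrong, yet G is still true because the D-E-F leg works,
# so the defect is invisible at the only observable point.
```

A test set that accounts for observability would also sensitize the C leg alone (e.g., A and B true, D or E false), so that a wrong C is forced to show up at G.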
Figure 9 - Variable "A" Stuck False Defect Not Observable

Therefore, the test case design algorithms must factor in:
- The relations between the variables (e.g., and, or, not)
- The constraints between the data attributes (e.g., it is physically impossible for variables one and two to be true at the same time)
- The functional variations to test (i.e., the primitives to test for each logical relationship)
- Node observability

The design of the set of tests must be such that if one or more defects are present, you are mathematically guaranteed that at least one test case will fail at an observable point. When that defect is fixed, if any additional defects are present, then one or more tests will fail at an observable point.

The algorithms that the BenderRBT tool uses to design the tests are highly optimized. As an example, in the test of a GUI screen, the theoretical number of tests that could have been created was 137,438,953,472. RBT resolved the problem with 22 test cases that provided 100 percent functional coverage.

Consider the following thought experiment. Put 137,438,953,450 red balls in a giant barrel. Add 22 green balls to the barrel and mix well. Turn out the lights. Pull out 22 balls. What is the probability that you have selected all 22 of the green balls? If this does not seem likely to you, try again: return the balls and pull out 1,000 balls. What is the probability that you now have selected all 22 of the green balls? If this still does not seem likely, try again: return the balls and pull out 1,000,000 balls. What is the probability that you now have selected all 22 of the green balls? This is what gut-feel testing really is.
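The odds in the thought experiment can be computed directly. When drawing k balls from N = 137,438,953,472, the probability that all 22 green balls are among them is the product of (k - i)/(N - i) for i = 0..21. A quick sketch:

```python
N = 137_438_953_472   # total balls (the theoretical test-case count)
GREEN = 22            # the necessary-and-sufficient test cases

def p_all_green(k):
    """Probability that a random draw of k balls contains all GREEN greens.
    Equals C(N-GREEN, k-GREEN) / C(N, k), computed as a running product."""
    p = 1.0
    for i in range(GREEN):
        p *= (k - i) / (N - i)
    return p

for k in (22, 1_000, 1_000_000):
    print(f"draw {k:>9,} balls: P = {p_all_green(k):.3e}")
```

Even drawing a million balls leaves the probability on the order of 10^-113, which is the point of the experiment: randomly chosen tests have essentially no chance of hitting the minimal covering set.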
For most complex problems it is impossible to manually derive the right combination of test cases that covers 100 percent of the functionality. The right combination of test cases is made up of individual test cases, each of which covers at least one type of error that none of the other test cases covers. Taken together, the test cases cover 100 percent of the functionality. Any more test cases would be redundant, because any error they would catch is already covered by an existing test case.

Gut-feel testing often focuses only on the normal processing flow, also known as the "go right" path. Gut-feel testing often creates too many (redundant) test cases for the go path. It also often fails to adequately cover the combinations of error conditions and exceptions, i.e., the processing off the go path. As a result, gut-feel testing suffers when it comes to functional coverage. Even reasonably methodical approaches, such as equivalence class testing with boundary analysis or orthogonal pairs testing, only address reducing the total number of possible tests down to a reasonably small number. They do not address the other critical issue: did you observe the defect? That, after all, is why we are testing. We have measured the coverage of existing test libraries, designed via traditional techniques, on numerous projects. They have generally been at only 35% to 40% of the coverage that would be achieved via Cause-Effect Graphing. The number of tests was also usually two to four times the number of graphing-based tests.

Synergy With Other Testing Tools

The RBT process and the BenderRBT tool which supports it enhance your efforts with many of the other tools you may be using. Requirements Management tools (e.g., Borland's CaliberRM, Telelogic's DOORS, IBM/Rational's Requisite Pro) address the logistics of managing the requirements objects. RBT addresses the quality of those objects.
Capture/Playback Tools: The RBT process stabilizes the functional definition and the user interface earlier in the life cycle, allowing timely implementation of stable test scripts. The BenderRBT tool minimizes the number of scripts required by calculating the minimum number of test cases.

Code Coverage Analyzers: The industry average for code coverage at test completion is under 50%. The RBT process results in between 70% and 90% code coverage by providing 100% functional coverage with tests designed before the first line of code was written. You then supplement the RBT tests with additional white-box-oriented tests to get 100% code coverage. Another benefit is that, since the bulk of the tests are black box tests, they are highly portable as the application is ported to new technologies.
Case Study Results

We have been applying the RBT process since the early 1970s. This has included just about every application type and technology, ranging from embedded to super-computer-based applications and everything in between (e.g., financial, military, scientific, manufacturing, operating systems, communications, medical). Over the years we have naturally refined it and improved the automated support. The following are brief discussions of the typical results achieved via this process on recent projects in the last year or so.

Manufacturing Support System

A system to manage design and production issues. The first iteration had a >50% defect rate on initial test passes; the second iteration switched to RBT. Results:
- Over 250 specification defects identified and resolved prior to development
- <10% initial defect rate
- All testing completed in a single test phase
- No functional defects escaped into User Acceptance Test
- All tests automated immediately following the first test phase
- 20% reduction in rework
- Same results in iteration 3, except the initial defect rate was <5%

Hardware/Software Vendor Configurator

This is a system that ensures the compatibility of any hardware and/or software being ordered with the other items being ordered and with what is already installed.
- 60,000 business rules, 25% of which change each month
- New releases multiple times per week; no room for error
- Prior to RBT, numerous severity one and two defects per release
- Business rules modeled in RBT; updates to the Configurator reflected in the models
- RBT generates revised test descriptions; the DTT add-on generates executable scripts
- Defect rate dropped to near zero; no severity ones or twos in the last 3 years
- 12 people maintain 20,000 executable test scripts

Note: DTT (Direct to Test) is an add-on to the BenderRBT Test Case Design Tool developed by one of our partners. It is a framework tool that generates executable test
scripts from the RBT output. It can generate those scripts to run on any of the major playback tools.

Credit Card System

A credit card processing system for a bank, with a major inventory of customizations, little documentation, and limited SME (subject matter expert) availability.
- 60 models and about 900 test cases
- No customization functional defects detected by the user
- Over 3,000 bugs detected in the base technology, i.e., the existing functionality that had to be traversed to get to the new functionality

Bank Credit System

As stated earlier, most projects find that about 56% of defects have their root cause in poor requirements. On this project the integration test effort found that only 2% of the defects could be traced back to requirements errors. This resulted in a huge savings in the time and cost to deliver the system. Once in production, no defects were encountered.
Summary

In summary, the RBT methodology reduces time to deliver by allowing testing to be performed concurrently with the rest of the development activities. It reduces the cost and time to deliver by minimizing scrap and rework. It delivers maximum coverage with the minimum number of test cases. RBT also provides quantitative test progress metrics within the 12 steps of the RBT methodology, ensuring that testing is being adequately performed and is no longer a bottleneck. Once you learn it, you can apply it to any application written in any programming language, running on any machine under any operating system.