Common Testing Problems: Pitfalls to Prevent and Mitigate




Common Testing Problems: Pitfalls to Prevent and Mitigate
AIAA Case Conference, 12 September 2012
Donald Firesmith
Software Engineering Institute (SEI), Carnegie Mellon University, Pittsburgh, PA 15213

Clarification and Caveat
Common: The following testing problems are common in the sense that they occur with such frequency that several are typically observed on most, if not all, programs.
Not Prioritized: Given that all of these problems should be avoided, or their negative consequences mitigated if they do occur, no attempt has been made to prioritize them in terms of either frequency or severity.
Experience Based: The list is not based on any rigorous study of specific programs. Rather, it is a compendium of observations from system/software engineers and testers based on both performing and assessing testing on real programs.

Topics
- Goal and Objectives
- General Testing Problems
- Problems by Source:
  - Test Planning Problems
  - Modern Life Cycle Testing Problems
  - Requirements and Testing Problems
- Problems by Type of Testing:
  - Unit Testing Problems
  - Integration Testing Problems
  - Specialty-Engineering Testing Problems
  - System Testing Problems
  - System of Systems (SoS) Testing Problems
  - Regression Testing Problems
  - Maintenance Testing Problems
- Recommendations

Goals and Objectives
Overall Goal: To provide a high-level summary of commonly occurring testing problems.
Supporting Objectives:
- To list the different types of commonly occurring testing problems in an organized manner
- To finish by making highly general recommendations as to how to avoid or mitigate these problems
- To provide the information needed to generate testing oversight and assessment checklists

Problems by Source: General Testing Problems 1
- Wrong test mindset: Testers try to prove that the system/software works rather than show how it fails. They test only nominal ("sunny day") behavior rather than off-nominal behavior, and test middle-of-the-road values rather than boundary values and corner cases.
- Inadequate testing expertise: Affects both contractors and the Government. There is more to testing than listing the different types of testing.
- Over-reliance on COTS testing tools: The tools provide inadequate support for test case selection criteria, test case completion criteria, and oracle validity.
- Inadequate CM: Requirements, design, implementation, and testing work products are not under configuration management.
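The contrast above between middle-of-the-road values and boundary values can be made concrete. The following sketch is hypothetical (the validator and its 0..130 range are invented for illustration, not taken from the slides); it shows why boundary-value test selection catches the off-by-one defects that a single nominal value misses.

```python
# Hypothetical unit under test: a validator that accepts ages 0..130.
def is_valid_age(age: int) -> bool:
    return 0 <= age <= 130

# Middle-of-the-road testing exercises only a typical value:
nominal_cases = [35]

# Boundary-value testing adds the edges and the just-outside corner
# cases, where off-by-one defects usually hide:
boundary_cases = [-1, 0, 1, 129, 130, 131]

expected = {-1: False, 0: True, 1: True, 35: True,
            129: True, 130: True, 131: False}

for age in nominal_cases + boundary_cases:
    assert is_valid_age(age) == expected[age], f"failed at age {age}"
```

A mindset of "show how it fails" means the interesting rows of `expected` are the `False` ones: the test exists to confirm that invalid input is rejected, not merely that valid input is accepted.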

Problems by Source: General Testing Problems 2
- Political definition of defect: Administration of the defect database (trouble reports, etc.) can be political: what constitutes a bug or a defect worth reporting? Developers and testers are often at odds about this.
- Defects not detected prior to testing: Excessive requirements, architecture, and design defects are not detected and fixed prior to testing.
- Unrealistic expectations and over-reliance on testing: Testing is not the only verification method. Testing only finds a fixed percentage of existing defects, and it is not the most effective way of minimizing defects.
- False sense of security: Positive test results may mean good software or poor tests. A truly successful test run finds a high percentage of the remaining significant defects.

Problems by Source: General Testing Problems 3
- Testing not prioritized: Some testing may need to be sacrificed because the program is behind schedule and over budget. Tests that would find the most important bugs may be deleted.
- Inadequate testing not identified as a risk: Inadequate testing is rarely an official risk in the program risk repository.
- Poor communication: Occurs on large programs with many teams, on geographically distributed programs, and between contractually separated teams (prime vs. subcontractor, systems of systems). It also occurs between testers and other developers (requirements engineers, architects, designers, and implementers), other testers, and customers and subject matter experts (SMEs).

Problems by Source: Test Planning Problems 1
- No test plan: There is no separate Test and Evaluation Master Plan (TEMP), only incomplete high-level overviews in Systems Engineering Master Plans (SEMPs) and Software Development Plans (SDPs).
- What, not how: Test plans do not specify test processes. They only list and briefly describe types of testing, not how to perform that testing.
- Incomplete test plans: Test plans lack clear, specific test objectives and acceptance criteria; testing techniques (testing is ad hoc); test case selection criteria (e.g., a single test case vs. boundary value testing); and test completion criteria covering nominal and off-nominal testing (except possibly at the unit test level).

Problems by Source: Test Planning Problems 2
- Inadequate planned test resources: The test plans provide inadequate test time in the schedule with inadequate schedule reserves; insufficient and inadequately trained and experienced testers and reviewers; inadequate funding; and insufficient test environments (test beds) for integration testing.
- One size fits all: Mission-, safety-, and security-critical software is not tested more completely and rigorously than other software (except possibly during unit testing).

Problems by Source: Modern Life Cycle Problems 1
Modern evolutionary development cycles are iterative (fixing requirements, architecture, design, and implementation defects), incremental (adding capabilities with each new increment), parallel (requirements, architecture, design, implementation, integration, and testing activities overlap), and time-boxed (limited time in which to perform activities, including testing), and systems have long operational lifespans. The resulting problems:
- Regression testing is inadequate (the amount needed is greatly increased).
- Automation of regression testing is inadequate.
- Regression tests are not developed iteratively and incrementally.
- Refactoring may inadvertently obsolete tests.
- Refactoring may delete test hooks.
- Testing assets (test documents, environments, and test cases) are not properly documented (Agile).
- Testing assets are not adequately maintained (fixed and updated).
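Because iterative development multiplies the amount of regression testing needed, automation is the usual mitigation. A minimal sketch, using Python's standard `unittest` module with an invented function under test (the name `discounted_price` and its behavior are assumptions for illustration): the suite is rerun automatically after every change, so each increment is checked against previously passing behavior.

```python
import unittest

# Hypothetical unit under test; in practice this would be imported
# from the production code base.
def discounted_price(price: float, percent: int) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

class RegressionSuite(unittest.TestCase):
    """Rerun on every build (e.g., in CI) so that fixes and new
    increments cannot silently break previously working behavior."""

    def test_nominal(self):
        self.assertEqual(discounted_price(80.0, 25), 60.0)

    def test_boundaries(self):
        self.assertEqual(discounted_price(80.0, 0), 80.0)
        self.assertEqual(discounted_price(80.0, 100), 0.0)

    def test_off_nominal(self):
        with self.assertRaises(ValueError):
            discounted_price(80.0, 101)

# Run the suite programmatically rather than via unittest.main(),
# so it can be embedded in a larger automated pipeline.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite))
```

Developing such suites iteratively and incrementally, alongside each increment's code, addresses the bullet above about regression tests not being developed that way.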

Problems by Source: Requirements and Testing Problems 1
- Ambiguous requirements: Ambiguous requirements cause incorrect test cases as testers misinterpret the requirement's intent.
- Missing requirements: Missing requirements cause incomplete testing (missing test cases): over-emphasis on nominal over off-nominal behavior; normal ("sunny day") as opposed to fault-tolerant and failure ("rainy day") use case paths; and lack of fault and failure detection and reaction requirements.
- Poor quality requirements: Vague, high-level architecturally significant goals rather than verifiable requirements cause missing or inadequate specialty-engineering testing (discussed later).

Source Requirements & Testing s 2 s Poor safety requirements Unstable requirements Inadequate test oracles Inadequate derivation of system/software requirements from combined process/product standards as DO-178C and DO-254 can lead to missing requirements that are then not properly tested. Unstable requirements cause inadequate regression testing. This is worsened by inadequate automation of regression tests. Testing assumes the correctness of test oracles (i.e., requirements, architecture, and design). Building tests based on inadequate oracles provides false data. 12

Problems by Test Type: Unit Testing Problems
- Inadequate low-level requirements: The derived requirement merely restates its parent requirement, or allocated requirements are not derived at the proper level of abstraction.
- Inadequate detailed design: There is insufficient detail to drive testing, and an unstable design leads to obsolete tests.
- Wrong test mindset: If the tester is the developer, testing will often aim to show correctness, not to find defects.
- Lack of pre-test peer review: There is no pre-test peer review of the test input, preconditions (pre-test state), and test oracle (expected test outputs and postconditions).
- Inadequate regression testing before integration: Modified code (bug fixes and changed design) is not retested prior to integration testing.
- Different test environment: The unit is not compiled using the same compiler (version) and toolset as the actual product software.
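The pre-test peer review problem above is easier to fix when each test case's input, precondition, and oracle are recorded explicitly as data rather than buried in test code. A hedged sketch (the `clamp` function and its cases are hypothetical, chosen only to make the table concrete):

```python
# Hypothetical unit under test: clamps a sensor reading to a range.
def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(high, value))

# Each case records its test input, precondition, and oracle (expected
# output) explicitly, so a peer reviewer can check the oracle *before*
# the test runs, instead of ratifying whatever the code happens to do.
TEST_CASES = [
    # (test input (value, low, high),  precondition,      expected output)
    ((5.0, 0.0, 10.0),                 "value in range",  5.0),
    ((-3.0, 0.0, 10.0),                "value below low", 0.0),
    ((42.0, 0.0, 10.0),                "value above high", 10.0),
]

for args, precondition, expected in TEST_CASES:
    actual = clamp(*args)
    assert actual == expected, (
        f"{precondition}: got {actual}, expected {expected}")
```

A table like `TEST_CASES` is also a natural artifact to place under configuration management alongside the code it verifies.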

Problems by Test Type: Integration Testing Problems 1
- Insufficient integration test environments (test beds): Lack of sufficient test beds, due to cost or to hardware being needed elsewhere, causes competition for limited resources among test teams.
- Insufficient schedule: Insufficient time is scheduled for adequate testing (especially regression testing).
- Poor test bed fidelity: The test bed's software, hardware, and system fidelity to the actual system is insufficient (e.g., due to inadequate software simulations/drivers, or a prototype or wrong version).
- Poor test bed quality: Test beds contain too many defects, slowing testing.
- Unavailable components: Not all the components to be integrated are ready at the proper time.
- Poor code quality: Code contains many defects that should have been found during lower-level testing, which unnecessarily slows integration testing.

Problems by Test Type: Integration Testing Problems 2
- Insufficient/inadequate test metrics: Metrics should cover more than just defects found and closed; an estimate of defects remaining is also needed.
- Schedule conflicts: Scheduling of the limited test environments is inadequate.
- Inadequate BIT and PHM: Inadequate Built-In Test (BIT) and Prognostics and Health Management (PHM) make localizing defects more difficult.
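The slides call for an estimate of defects remaining but do not prescribe a method. One well-known approach (my addition, not from the presentation) is the Lincoln-Petersen capture-recapture estimator: if two teams test independently, the overlap between their findings indicates how much of the total defect population has been seen. The defect IDs below are invented for illustration.

```python
def estimate_total_defects(found_a: set, found_b: set) -> float:
    """Lincoln-Petersen capture-recapture estimate: with two independent
    test teams, total ~= (|A| * |B|) / |A intersect B|."""
    overlap = len(found_a & found_b)
    if overlap == 0:
        raise ValueError("no overlap between teams; estimate undefined")
    return len(found_a) * len(found_b) / overlap

# Hypothetical defect IDs found by two independent integration test teams.
team_a = {"D1", "D2", "D3", "D4", "D5", "D6"}
team_b = {"D4", "D5", "D6", "D7"}

total = estimate_total_defects(team_a, team_b)   # 6 * 4 / 3 = 8.0
remaining = total - len(team_a | team_b)         # 8.0 - 7 found = 1.0
```

The estimate is crude (it assumes independent teams and equally detectable defects), but even a rough "defects remaining" number is more informative for exit decisions than counts of defects found and closed alone.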

Problems by Test Type: Specialty Engineering Testing Problems
- Inadequate capacity testing: There is little or no testing to determine performance as capacity limits are approached, reached, and exceeded.
- Inadequate reliability testing: There is little or no long-duration testing under operational profiles.
- Inadequate robustness testing: There is little or no testing of environmental, error, fault, and failure tolerance based on robustness analysis (fault, degraded-mode, and failure use case paths as well as ETA, FTA, FMECA, RBD, etc.).
- Inadequate safety testing: There is little or no testing of safeguards (e.g., interlocks) and fail-safe behavior based on safety analysis (mishap cases).
- Inadequate security testing: There is little or no penetration testing, testing of security features and security controls, or testing of fail-secure behavior based on security analysis (e.g., attack trees, misuse cases).
- Inadequate usability testing: There is little or no explicit usability testing of human interfaces for user friendliness, learnability, etc.
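At the unit level, robustness testing of error and fault tolerance often reduces to deliberately feeding malformed and out-of-range input and checking that the component fails in a controlled way. A minimal sketch under stated assumptions (the `parse_coordinate` function and its hostile inputs are hypothetical):

```python
# Hypothetical parser under test: converts a "lat,lon" string to floats.
def parse_coordinate(text: str) -> tuple:
    lat_s, lon_s = text.split(",")          # raises ValueError if not 2 fields
    lat, lon = float(lat_s), float(lon_s)   # raises ValueError if not numeric
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        raise ValueError("coordinate out of range")
    return lat, lon

# Robustness testing: every hostile input must produce a controlled
# failure (ValueError), never a crash or a silently wrong result.
hostile_inputs = ["", "abc", "91,0", "0,181", "1,2,3", "--"]

for text in hostile_inputs:
    try:
        parse_coordinate(text)
    except ValueError:
        pass  # controlled, expected failure
    else:
        raise AssertionError(f"accepted bad input: {text!r}")
```

The hostile-input list plays the role of the fault and failure use case paths mentioned above: each entry corresponds to one off-nominal path that the robustness analysis says the component must tolerate.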

Problems by Test Type: System Testing Problems
- Ambiguous requirements: Ambiguous requirements are easily misunderstood and difficult or impossible to verify.
- Incomplete requirements: Many requirements are incomplete (e.g., missing preconditions and trigger events), making them difficult to verify.
- Missing requirements: Many requirements are missing (e.g., quality requirements and requirements for off-nominal behavior) and are therefore unverifiable.
- System testing is performed too late: Testing is performed just before delivery, when it is too late to provide adequate time for defect removal and regression testing.
- Poor code quality: Code contains many defects that should have been found during lower-level (unit and integration) testing, which unnecessarily slows system testing.

Problems by Test Type: System of Systems (SoS) Testing Problems 1
- Inadequate SoS planning: Test planning often does not exist at the SoS level, and there are no clear test completion/acceptance criteria.
- Poor or missing SoS requirements: Requirements only exist at the system level, leaving no clear and explicit requirements to be verified at the SoS level.
- Unclear testing responsibilities: No program is explicitly tasked with testing end-to-end SoS behavior.
- SoS testing not funded: No program is funded to perform end-to-end SoS testing.
- SoS testing not properly scheduled: SoS testing is not in the system integrated master schedules and depends on uncoordinated system schedules.
- Inadequate test resources: It is hard to obtain test resources (people, test beds) from the individual programs.

Problems by Test Type: System of Systems (SoS) Testing Problems 2
- Poor code quality: Code contains many defects that should have been found during lower-level (unit, integration, and system) testing, which unnecessarily slows SoS testing.
- Poor bug tracking across systems: Bug tracking and the associated regression testing are poorly coordinated across the multiple programs.
- Finger-pointing: Blame-shifting between programs makes it hard to improve the SoS testing process.

Problems by Test Type: Regression Testing Problems
- Insufficient test automation: Sufficient regression testing will only be performed if it is relatively quick and painless.
- Insufficient schedule: An iterative, incremental, and concurrent development life cycle greatly increases the amount of regression testing needed.
- Regression tests not updated: The regression test suite (test inputs, test drivers, test stubs, oracles) is not updated for new/changed capabilities.
- Documentation not maintained: Requirements, user documents, test plans, entry/exit criteria, and errata lists are not updated to reflect changes.
- Regression tests not rerun: "There was only one minor code change, so we didn't do the regression testing." "There isn't enough time."
- Inadequate scope of regression testing: Only the changed code is tested, on the assumption that "the change can't affect the rest of the system."
- Only low-level regression tests: Only unit tests and some integration tests are rerun; system and/or SoS tests are not rerun.

Problems by Test Type: Maintenance Testing Problems
- Regression tests not maintained: The regression test suite (test inputs, test drivers, test stubs, oracles) is not updated for new/changed capabilities.
- Test documentation not maintained: Test plans, test procedures, and other documentation are not maintained as changes occur.
- Test work products not under CM: The test documentation and software are not under configuration control.
- Test work products inconsistent with the software: The test work products become out of sync with the software as it evolves over time.
- Disagreements over funding: It is unclear where the funding for maintenance testing should come from (development or sustainment funding).

Recommendations
- Ensure adequate testing expertise and experience within the Program Office (PO): Program Office personnel, Systems Engineering and Technical Assistance (SETA) contractors, and consultants.
- Use this list of problems as a checklist:
  - Require the prevention/mitigation of these problems in RFPs.
  - Evaluate proposed solutions to these problems in contractor proposals.
  - Evaluate test documentation (test program plans, testing sections of SEMPs and SDPs) for solutions to these problems.
  - Do not accept inadequate contractor/subcontractor testing documents.
  - Evaluate contractor/subcontractor implementations of test plans and processes.
- Push back against testing shortcuts late in the program.

Contact Information
Donald Firesmith
Senior Member of the Technical Staff
Acquisition Support Program
Telephone: +1 412-268-6874
Email: dgf@sei.cmu.edu

U.S. Mail:
Software Engineering Institute
Customer Relations
4500 Fifth Avenue
Pittsburgh, PA 15213-2612
USA

Web: www.sei.cmu.edu and www.sei.cmu.edu/contact.cfm
Customer Relations Email: info@sei.cmu.edu
SEI Phone: +1 412-268-5800
SEI Fax: +1 412-268-6257

NO WARRANTY
THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN "AS-IS" BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT. Use of any trademarks in this presentation is not intended in any way to infringe on the rights of the trademark holder. This presentation may be reproduced in its entirety, without modification, and freely distributed in written or electronic form without requesting formal permission. Permission is required for any other use. Requests for permission should be directed to the Software Engineering Institute at permission@sei.cmu.edu. This work was created in the performance of Federal Government Contract Number FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. The Government of the United States has a royalty-free government-purpose license to use, duplicate, or disclose the work, in whole or in part and in any manner, and to have or permit others to do so, for government purposes pursuant to the copyright license under the clause at 252.227-7013.