About metrics and reporting in model-based robot assisted functional testing




Matti Vuori, 2014-01-10
RATA project report

Table of contents

1. Introduction
2. Different tests have different purposes
3. Information flow
4. Goals for quality information
5. Metrics usage mind maps
6. Challenges and trends with metrics and reports
7. The culture of done
8. Metrics on various domains of activity
9. Lessons for model-based testing
10. What is different with robots?
11. References
APPENDIX: Some knowledge needs that testing should give information to

1. Introduction

The goal of testing is to produce quality-related information for someone who needs it for something. Some make business decisions based on it; some improve the product based on it. Metrics are an important part of that, but it is not always clear which metrics to use. Model-based testing is one example of that. This short paper describes some of the issues related to metrics, reporting and communication of product quality and testing metrics when model-based testing is used.

2. Different tests have different purposes

Not all testing is equivalent. Each type of testing should have a defined purpose. One view that has gained plenty of support lately and requires mentioning here is the agile testing quadrants by Brian Marick, thoroughly documented by Crispin & Gregory (2009).

Figure 1. The (agile) testing quadrants by Brian Marick, described in Crispin & Gregory (2009):

- Q1 (technology facing, supporting the team; automated): unit tests, component tests.
- Q2 (business facing, supporting the team; automated & manual): functional tests, examples, story tests, prototypes, simulations.
- Q3 (business facing, critiquing the product; manual): exploratory testing, scenarios, usability testing, user acceptance testing, alpha / beta testing.
- Q4 (technology facing, critiquing the product; tools): performance & load testing, security testing, "-ility" testing.

3. Information flow

Figure 2. Information flow from the test system to using the information: the test system's internal metrics feed outputs and reports for the tester, for the team and for the business; the tester's analysis and judgement turn them into information and knowledge for the team and for the business.

4. Goals for quality information

General information:

- Meets the receiver's needs. Supports goals. Different information may be needed for different parties; support for the team's work is different from support for business.
- Needs to be reliable, or its deficiencies known.
- Needs to be usable: concise form, right concepts that are the same as, or map to, the receiver's concepts.
- Delivered at the right time, or available at the right time.
- Needs to suit the means of delivery: communication, reporting, documenting.

Characteristics of good metrics:

- Interesting.
- Directly map to someone's quality-related value (defect, risk, what money is made from).
- Make sense: engineering sense or business sense.
- Give a common talking point.
- Simple.
- Easy to create.
- Difficult to manipulate. Will not cause manipulation if set as a goal (like adding bad test cases if the number of passed test cases is a goal).

Characteristics of a good test report:

- Immediately shows the overall situation.
- Designed to support someone's decision making. Shows data that relates to the decisions, using concepts that are meaningful in the decision making, for the decision maker.
- Different stakeholders with different needs should get different reports.
- Suits the company's culture in design and attitude.
- Includes the tester's assessment, not just numbers.
- Clearly and strongly shows the areas that most urgently need improvement.
- Is interesting and pleasant to read.
- Is layered: a short overall view allows going deeper into the data.
- Most of it created automatically (see the sketch after this list).
- Is always available. Intranet / extranet reports work with mobile devices.
- Supports all needs of product, quality and risk management.
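As a concrete illustration of the last points, here is a minimal sketch of how a layered, traffic-light style report could be generated automatically while still carrying the tester's assessment. All names, thresholds and data structures are illustrative assumptions, not something prescribed by this report.

from dataclasses import dataclass

@dataclass
class AreaResult:
    name: str         # product area, e.g. a feature or use case
    passed: int       # passed test runs
    failed: int       # failed test runs
    tester_note: str  # the tester's judgement, not just numbers

def traffic_light(passed: int, failed: int) -> str:
    # Maps raw counts to the green/yellow/red convention; the 10 %
    # threshold is an arbitrary assumption for this sketch.
    total = passed + failed
    if total == 0:
        return "GREY"
    if failed == 0:
        return "GREEN"
    return "YELLOW" if failed / total < 0.10 else "RED"

def render_report(areas: list[AreaResult], overall_note: str) -> str:
    # Layered: short overall view first, per-area detail below it.
    lines = ["OVERALL: " + overall_note, ""]
    for a in sorted(areas, key=lambda a: a.failed, reverse=True):
        lines.append(f"{traffic_light(a.passed, a.failed):<7}"
                     f"{a.name}: {a.passed} passed / {a.failed} failed"
                     f" -- {a.tester_note}")
    return "\n".join(lines)

print(render_report(
    [AreaResult("Checkout", 48, 7, "Regression in payment flow; risky."),
     AreaResult("Search", 120, 0, "Stable; covered in two environments.")],
    overall_note="YELLOW -- checkout needs attention before release."))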

5. Metrics usage mind maps

Figure 3. Different aspects of using empirical data and metrics (mind map).

Figure 4. Product metrics usage (mind map).

6. Challenges and trends with metrics and reports

- People measure what is easy to measure. Pass rate for test cases is a traditional metric, but what does it tell? Does the number of test cases tell something? Big numbers always look good, but what is the real situation?
- New test technologies (such as MBT) are developed by people who are too attached to the internal metrics of their system, but external metrics are needed for decision making.
- Testing scientists and engineers aim for exactness, but business decision making is always fuzzy. Non-exact information that makes people understand issues is more important than exact information that does not.
- While it is good to create metrics automatically, not all need to be created that way. Testers can synthesize higher-level metrics using their intelligent minds. When there is an opportunity to think, it should be used!
- Many experts emphasize the tester's feeling about the situation, both regarding whether something has been tested properly and whether something is OK based on the tests.
- There is a trend of using colour in reporting: green or blue, yellow, red.
- Many are using mind maps in reporting as well as in test design.

7. The culture of done

The agile development culture often neglects metrics other than task logistics: is something done or not? That applies to testing too. When a new feature has been tested, the tester decides (based on personal feeling or perhaps low-level metrics) that the testing for that part is declared done, and the feature can either be declared done (after the corrections, of course) or move to the next workflow step. That reduces low-level metrics to something more personal, and the business-level metrics to just information about some items having passed the development pipeline.

That is the simplest way to do things. Of course, organizations can add levels and phases above and after that as needed, where the metrics usage is different. For example, MBT may have a role not in the team but as a simultaneous testing process (continuous or rhythmic), and its coverage and other metrics can and should often be more traditional. (Yet we need to remember that "traditional" in practice does not always refer to something that has been a living culture, but to something that should have been a living culture.)

8. Metrics on various domains of activity

Basic idea: defect-related metrics depend on what the aim of testing is. Often we are in some clear working domain. Note: this applies only to functional characteristics of the system. A sketch of a few of these computations follows the table.

Table 1. Some metrics at various domains of activity.

Defect finding domain:
- Performance metrics: Defect count (by seriousness, by priority, by risk).
- External control metrics: Requirement, Functional, Keyword (high level), By component.
- Internal control metrics: Code, Keyword / action word, State transition.
- Goals: Internal standards, Requirements, Quality plan.

Maturity domain:
- Performance metrics: Defect count, Defect count trend, Defect density.
- External control metrics: Requirement, Functional, Keyword (high level), By component.
- Internal control metrics: Code, State transition, Application switch.
- Goals: Internal standards, Requirements, Quality plan.

Reliability domain:
- Performance metrics: MTBF (accelerated), Failure rate, Action success rate.
- External control metrics: Functional, Use case, Keyword (high level), Use case / user story.
- Internal control metrics: Application switch, State.
- Goals: Internal standards, Requirements, Quality plan.

Customer acceptance domain (depends on system):
- Performance metrics: Defect count (by use case, by risk).
- External control metrics: Requirement, Use case, Use case / user story.
- Internal control metrics: State, Application switch.
- Goals: Development contract, Requirement specification, Usual accepted level.
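To make a few of the performance metrics in Table 1 concrete, the following sketch computes defect count by seriousness, defect density and action success rate. The data shapes, numbers and the defects-per-KLOC convention are assumptions for the example; the report itself does not prescribe formulas.

from collections import Counter

defects = [
    {"id": 1, "severity": "critical", "use_case": "checkout"},
    {"id": 2, "severity": "minor", "use_case": "search"},
    {"id": 3, "severity": "major", "use_case": "checkout"},
]

# Defect count by seriousness (defect finding domain).
by_severity = Counter(d["severity"] for d in defects)

# Defect density (maturity domain): here defects per thousand lines
# of code, one common convention among several.
def defect_density(defect_count: int, size_kloc: float) -> float:
    return defect_count / size_kloc

# Action success rate (reliability domain): share of executed test
# actions that succeeded.
def action_success_rate(successful: int, total: int) -> float:
    return successful / total

print(by_severity)                        # Counter({'critical': 1, 'minor': 1, 'major': 1})
print(defect_density(len(defects), 120))  # 0.025 defects / KLOC
print(action_success_rate(980, 1000))     # 0.98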

9. Lessons for model-based testing

Nobody really cares (or should care) about models, paths, states and such: quality is the main thing. How do things work for their purpose? What is the value of the testing, what does it reveal?

Metrics are not all-purpose. Internal coverage metrics are important for those who develop the models and for testers who do low-level testing: path coverage, state coverage, transition coverage, action word coverage, action-state proposition coverage, etc. (The first sketch below illustrates computing such metrics.) But more important than coverage is knowing which areas have been covered and which have not. Visual mark-up may help in seeing the picture. Good testing is already focused, and that is reflected in the metrics.

For product-level decision making, external coverage metrics are more useful: requirement coverage, functional coverage, data domain coverage, and interaction coverage, including application switch coverage. MBT needs to show its unique value and focus, and when it is focused on something like testing interactions, the metrics should support that focus.

While it is good to create metrics automatically, not all need to be created that way. Testers can synthesize higher-level metrics using their intelligent minds. The tester's subjective feeling is important to include in reports: the overall situation (green, yellow, red) and the situation in product areas.

10. What is different with robots?

A testing robot is just a tool, so it should not have any influence on the product quality metrics. But robots are expensive workflow tools, so we should be interested in how well the investment is used. We have not even discussed any workflow metrics yet, but metrics such as these should be of interest (see the second sketch below):

- Percentage of time the robot arm or finger is in action (and not waiting for, for example, the test system to do image processing).
- Percentage of time the robot is doing test runs and not just sitting idle waiting for a test to start.
- Mean time between failures (MTBF), divided into software and hardware failures.
- Availability: the time the system can be used for testing = (total time - time spent in failures and on preventive maintenance) / total time.
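The first sketch: a minimal illustration of the internal coverage metrics named in section 9, computed from executed test traces over a test model. The model representation (transitions as source state, action word, target state) and all names are assumptions for the example, not the format of any particular MBT tool.

# Model: transitions as (source state, action word, target state).
MODEL = {
    ("Idle", "tap_icon", "AppOpen"),
    ("AppOpen", "type_query", "Results"),
    ("Results", "tap_back", "AppOpen"),
    ("AppOpen", "tap_home", "Idle"),
}

def coverage(executed: set) -> dict:
    states = {s for s, _, _ in MODEL} | {t for _, _, t in MODEL}
    visited = {s for s, _, _ in executed} | {t for _, _, t in executed}
    actions = {a for _, a, _ in MODEL}
    done = {a for _, a, _ in executed}
    return {
        "state coverage": len(visited & states) / len(states),
        "transition coverage": len(executed & MODEL) / len(MODEL),
        "action word coverage": len(done & actions) / len(actions),
        # More important than the percentages: what was NOT covered.
        "uncovered transitions": MODEL - executed,
    }

executed = {("Idle", "tap_icon", "AppOpen"),
            ("AppOpen", "type_query", "Results")}
for name, value in coverage(executed).items():
    print(name, "->", value)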

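The second sketch: the robot workflow metrics from section 10, written out as formulas. Field names and the example numbers are invented for illustration; only the availability formula itself comes from the text above.

def utilization(active_hours: float, total_hours: float) -> float:
    # Share of time the robot arm/finger is in action, not waiting
    # (e.g. for the test system's image processing).
    return active_hours / total_hours

def availability(total_hours: float, failure_hours: float,
                 maintenance_hours: float) -> float:
    # Availability = (total time - time spent in failures and on
    # preventive maintenance) / total time, as defined in section 10.
    return (total_hours - failure_hours - maintenance_hours) / total_hours

def mtbf_split(run_hours: float, sw_failures: int, hw_failures: int) -> dict:
    # MTBF divided into software and hardware failures.
    return {
        "sw": run_hours / sw_failures if sw_failures else float("inf"),
        "hw": run_hours / hw_failures if hw_failures else float("inf"),
    }

# Example week: 168 h total, robot active 90 h, 6 h lost to failures
# (4 software and 1 hardware incident), 2 h preventive maintenance.
print(utilization(90, 168))     # ~0.536
print(availability(168, 6, 2))  # ~0.952
print(mtbf_split(160, 4, 1))    # {'sw': 40.0, 'hw': 160.0}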
11. References

Crispin, Lisa & Gregory, Janet. 2009. Agile Testing: A Practical Guide for Testers and Agile Teams. Addison-Wesley. 554 p.

ISO/IEC 25010:2011. Systems and software engineering – Systems and software Quality Requirements and Evaluation (SQuaRE) – System and software quality models. 34 p.

Nieminen, Antti; Jääskeläinen, Antti; Virtanen, Heikki & Katara, Mika. 2011. Model-Based System Level Robustness Testing: A Survey of Test Generation Algorithms. 11 p. (Unpublished manuscript.)

APPENDIX: Some knowledge needs that testing should give information to

To the customer (of a tailored information system):

- Can we accept the delivery and pay the provider?
- Has the provider done their testing as promised?
- If we start using this system, will things work OK? (For end users, internally.)
- Are there some business processes / features that will not work OK? What are the problems in them?
- Are all business processes tested? Are all end-user use cases tested?
- Can we expect that our personnel will accept and even like the new system? Will the transition to the new system be easy?
- Will the new system handle the load of heavy usage? If we launch the new system, will we be in the news tomorrow because of problems?
- Will the new system work with our other 250 systems?
- Will the new system work with old data, from the past 40 years?
- Is the new system secure?
- Etc.

To product ownership / management:

- Can we ship? Is the product mature enough to ship?
- Do the features we have promised to work actually work (well enough)?
- Do the new, exciting features (new technologies that we advertise) work great?
- Can we expect that customers will be satisfied and will not complain? Can we expect that customers will not need to call our support too often?
- If we ship, can I expect any trouble? What are the main risks and risky areas? Are there known hazards?
- How much effort have we put into testing?
- How can I prove to the customer that the testing has been proper and according to contract?
- Have we done everything so that standards (like safety standards) are satisfied? How can I prove to government(s) that testing has been proper and meets laws and standards? How can I prove in court that testing has been proper (in case of problems)?
- Is there some quality-related information that I should know about?
- Etc.

To project management / project quality management:

- How mature is the product? What is the trend of maturing? When can we estimate it can ship?
- Is the current build solid enough for system / alpha / beta / customer testing? Which build is solid enough for system / alpha / beta / customer testing?
- What is the testers' opinion on the product's quality?
- If we ship, can I or the team expect any trouble?
- What components are not yet good enough? Do some features / components / areas have severe problems? Does some team or some developer have severe problems?
- Are we improving the product or moving backward?
- What defects are there? What defects should be fixed? What defects should be fixed first?
- Has everything relevant been tested (covered)? (Requirements, features, use cases / user stories, interactions, environments...)
- Have the critical areas been tested properly?

- Does the testing meet requirements (like safety standards)?
- Has the team done a good, professional job?
- How can I prove to product management (or the customer) that the testing has been proper?
- Etc.

To a developer:

- Are my components OK?
- What defects are there? How serious are they? What defects should be fixed? What defects should be fixed first?
- Where are the defects (as exactly as possible)? How can I repeat them, to fix them? In what environments and conditions do they occur?
- Have I done a good, professional job?
- (Plus many of the things others need to know, but many of them as "nice to know".)
- Etc.