[Handout for L9P1] Quality Assurance: Testing and Beyond

Quality assurance (QA) is the process of ensuring that the software we build has the required quality levels.

Quality Assurance = Validation + Verification

QA involves checking two aspects:
a) Have we built the right system? i.e., are the requirements correct? This is called validation.
b) Have we built the system right? i.e., have we implemented the requirements correctly? This is called verification.

Whether something belongs under validation or verification is not that important. What is more important is that we do both, instead of limiting ourselves to verification (i.e., remember that the requirements can be wrong too).

In this handout, we first discuss testing the product as a whole, as opposed to developer testing that is performed on a partial system. Testing is the most common way of assuring quality; however, there are other complementary techniques such as formal verification. The second part of the handout gives a brief introduction to such QA techniques.

Product testing

System testing

When we take a whole system instead of a part of the system and test it against the system specification, it is called system testing. System testing is typically done by a testing team (also called a QA team). We base the system test cases exclusively on the specified external behavior of the system.

Sometimes, system tests go beyond the bounds defined in the specification. This is useful if we want to test whether the system fails gracefully when pushed beyond its limits. For example, let us assume the software under test (SUT) is a browser and it should be able to handle web pages containing up to 5000 characters. One of our test cases can try to load a web page containing more than 5000 characters. The expected graceful behavior can be to abort the loading of the page and show a meaningful error message. We can consider this test case as failed if the browser attempted to load the large file anyway and crashed. (A sketch of such a test case is given after the list below.)

Note that system testing includes testing against non-functional requirements too. Here are some examples:
Performance testing - to ensure the system responds quickly.
Load testing (also called stress testing or scalability testing) - to ensure the system can work under heavy load.
Security testing - to test how secure the system is.
Compatibility testing, interoperability testing - to check whether the system can work with other systems.
Usability testing - to test how easy it is to use the system.
Portability testing - to test whether the system works on different platforms.
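For example, a system test for the graceful-degradation scenario above could look like the following minimal sketch. The Browser and LoadResult types and their methods are hypothetical names invented purely for illustration (JUnit is assumed):

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class BrowserSystemTest {
        @Test
        public void loadPage_exceedsCharLimit_failsGracefully() {
            Browser browser = new Browser();                  // hypothetical handle on the SUT
            String hugePage = "x".repeat(5001);               // just beyond the 5000-character limit
            LoadResult result = browser.loadPage(hugePage);   // hypothetical API call
            assertTrue(result.wasAborted());                  // loading was aborted rather than crashing
            assertFalse(result.getErrorMessage().isEmpty());  // a meaningful error message was shown
        }
    }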

Acceptance testing

Acceptance testing, also called User Acceptance Testing (UAT), is a validation type of testing carried out to show that the delivered system meets the requirements of the customer. This too, similar to system testing, uses the whole system, but the testing is done against the requirements specification, not against the system specification. Note that the two specifications need not necessarily be the same. For example, the requirements specification could be limited to how the system behaves in normal working conditions, while the system specification can also include details on how it will fail gracefully when pushed beyond limits, how to recover, additional APIs not available to users (for the use of developers/testers), etc.

Acceptance testing comes after system testing. It is usually done by a team that represents the customer, and it is usually done on the deployment site or on a close simulation of the deployment site. UAT test cases are often defined at the beginning of the project, usually based on the use case specification. Successful completion of UAT is often a prerequisite to the project sign-off.

Acceptance tests give an assurance to the customer that the system does what it is intended to do. Besides, acceptance testing is important because a system could work perfectly in the development environment, but fail in the deployment environment due to subtle differences between the two environments.

Alpha and Beta testing

Alpha testing is performed by the users, under controlled conditions set by the software development team. Beta testing is performed by a selected subset of target users of the system in their natural work setting. An open beta release is the release of not-yet-production-quality-but-almost-there software to the general population. For example, Google's Gmail was in beta for years before the label was finally removed.

GUI testing

If a software product has a GUI, all product-level testing (i.e., the types of testing mentioned above) needs to be done using the GUI. However, GUI testing is much harder than CLI (command line interface) testing or API testing, for the following reasons:
- Most GUIs contain a large number of different operations, many of which can be performed in any order.
- GUI operations are more difficult to automate than API testing. Reliably automating GUI operations and automatically verifying whether the GUI behaved as expected is harder than calling an operation and comparing its return value with an expected value. Therefore, automated regression testing of GUIs is rather difficult. However, there are testing tools that can automate GUI testing. For example, Selenium (http://seleniumhq.org/) can be used to automate testing of Web application UIs (see the sketch below), while certain editions of Visual Studio support record-replay type GUI test automation.
- The appearance of a GUI (and sometimes even its behavior) can be different across platforms and even environments. For example, a GUI can behave differently based on whether it is minimized or maximized, in focus or out of focus, and on a high-resolution display or a low-resolution display.

One approach to overcome the challenge of testing GUIs is to minimize logic aspects in the GUI. Then, we bypass the GUI to test the rest of the system using automated API testing. While this still requires the GUI to be tested manually, we can reduce the number of such manual test cases because most of the system has already been tested using automated API testing.
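To give a flavor of automated GUI testing, here is a minimal Selenium WebDriver sketch in Java. It assumes the Selenium library and a Chrome driver are available on the machine; the target page and expected heading (https://example.com renders an h1 of "Example Domain") are chosen purely for illustration:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class ExamplePageTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();    // launch a Chrome browser
            try {
                driver.get("https://example.com");    // drive the GUI: open the page under test
                String heading = driver.findElement(By.tagName("h1")).getText();
                // Verify the GUI behaved as expected, analogous to comparing an API return value.
                if (!heading.equals("Example Domain")) {
                    throw new AssertionError("Unexpected heading: " + heading);
                }
            } finally {
                driver.quit();                        // always release the browser
            }
        }
    }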

Test coverage

In the context of testing, coverage is a metric used to measure the extent to which testing exercises the code. Here are some examples of different coverage criteria:

Function/method coverage - measures coverage in terms of functions executed, e.g., testing executed 90 out of 100 functions.

Statement coverage - measures coverage in terms of the number of lines of code executed, e.g., testing executed 23k out of 25k LOC.

Decision/branch coverage - measures coverage in terms of decision points, e.g., an if statement evaluated to both true and false.

Condition coverage - measures coverage in terms of boolean sub-expressions evaluated both to true and false. Condition coverage is not the same as decision coverage; e.g., if (x > 1 && x < 12) is considered one decision point but two conditions.
For 100% branch or decision coverage, we need 2 test cases:
(x > 1 && x < 12) == true : [e.g. x = 2]
(x > 1 && x < 12) == false : [e.g. x = 100]
For 100% condition coverage, we need 3 test cases:
(x > 1) == true, (x < 12) == true : [e.g. x = 2]
(x < 12) == false : [e.g. x = 100]
(x > 1) == false : [e.g. x = 0]
(These cases are shown in code form just after this list.)

Path coverage - measures coverage in terms of possible paths through a given part of the code executed. 100% path coverage means all possible paths have been executed. A commonly used notation for path analysis is the Control Flow Graph (CFG). If you are not familiar with CFGs, the side-note below introduces them.

Entry/exit coverage - measures coverage in terms of possible calls to and exits from the operations in the SUT.

Measuring coverage is often done using coverage analysis tools. We can use coverage measurements to improve the E&E (efficiency and effectiveness) of our testing. For example, if a set of test cases does not achieve 100% branch coverage, we need to add more test cases to cover missed branches.
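To see the difference between decision and condition coverage in code form, here is a minimal sketch; the inRange helper is a hypothetical stand-in for the decision discussed above:

    public class CoverageDemo {
        // Hypothetical helper wrapping the decision above:
        // one decision point, two conditions.
        static boolean inRange(int x) {
            return x > 1 && x < 12;
        }

        public static void main(String[] args) {
            // The first two calls already achieve 100% decision/branch coverage;
            // all three are needed for 100% condition coverage.
            System.out.println(inRange(2));     // decision true:  (x > 1) true,  (x < 12) true
            System.out.println(inRange(100));   // decision false: (x < 12) false
            System.out.println(inRange(0));     // decision false: (x > 1) false
        }
    }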

[Side-Note] Control Flow Graphs (CFGs)

A CFG is a graphical representation of the execution paths of a code fragment. A CFG consists of:
Nodes: each node represents one or more sequential statements with no branches.
Directed edges: each edge represents a branch, a possible execution path.

Given below is the CFG notation.

A set of sequential statements (without any branches) is represented as a single node. E.g.,

    x = 1;       //node 1
    y = 2;       //node 1
    z = x + y;   //node 1
    print(z);    //node 1

Conditional statements: E.g.,

    if (x < 10) then   //node 1
        z = x + y;     //node 2
    else
        z = x - y;     //node 3
    z = z + 1;         //node 4

Loops: E.g.,

    x++;               //node 1
    while (x < 10) {   //node 2
        z = x + y;     //node 3
        x++;           //node 3
    }
    resetX();          //node 4

Multi-way branching: E.g.,

    x++;                        //node 1
    switch (x) {                //node 2
        case 0: z = x; break;   //node 3
        case 1:
        case 2: z = y; break;   //node 4
        default: z = x - y;     //node 5
    }
    z = x + y;                  //node 6

Note how the same edge represents both case 1 and case 2.

The figure below shows the complete CFG for the min function given below.

    void min(int[] A) {
        int min = A[0];      //node 1
        int i = 1;           //node 1
        int n = A.length;    //node 1
        while (i < n) {      //node 2
            if (A[i] < min)  //node 3
                min = A[i];  //node 4
            i++;             //node 5
        }
        print(min);          //node 6
    }

[CFG figure not reproduced in this transcription]

It is recommended to have exactly one entry edge and exactly one exit edge for each CFG. Sometimes we have to add a logical node (i.e., a node that does not represent an actual program statement) to enforce the exactly-one-exit-edge rule. Node 5 in the figure below is a logical node.

    void foo() {
        int min = A[0];   //node 1
        if (A[i] < min)   //node 2
            min = A[i];   //node 3
        else
            i++;          //node 4
    }

[CFG figure not reproduced in this transcription]

A path is a series of nodes that can be traversed from the entry edge to the exit edge in the direction of the edges that link them. For example, 1-2-3-5 in the above CFG is a path.

Other QA techniques

There are many QA techniques that do not involve executing the SUT. Given next are some such techniques that can complement the testing techniques we have discussed so far.

Inspections & reviews

Inspections involve a group of people systematically examining a project artefact to discover defects. Members of the inspection team play various roles during the process, such as the author - the creator of the artefact; the moderator - the planner and executor of the inspection meeting; the secretary - the recorder of the findings of the inspection; and the inspector/reviewer - the one who inspects/reviews the artefact. All participants are expected to have adequate prior knowledge of the artefact inspected.

An inspection often requires more than one meeting. For example, the first meeting is called to brief participants about the artefact to be inspected. The second meeting is called once the participants have studied the artefact; this is when the actual inspection is carried out. A third meeting could be called to re-inspect the artefact after the defects discovered in the inspection are fixed.

An advantage of inspections is that they can detect functionality defects as well as other problems such as coding standard violations. Furthermore, inspections can verify non-code artefacts and incomplete code, and do not require test drivers or stubs. A disadvantage is that an inspection is a manual process and therefore error prone.

Formal verification

Formal verification uses mathematical techniques to prove the correctness of a program. An advantage of this technique over testing is that it can prove the absence of errors. However, one of the disadvantages is that it only proves compliance with the specification, not the actual utility of the software. Another disadvantage is that it requires highly specialized notations and knowledge, which makes it an expensive technique to administer. Therefore, formal verification is more commonly used in safety-critical software such as flight control systems.
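For example, in Hoare logic (one such mathematical technique), a triple {P} S {Q} asserts that if precondition P holds before statement S executes, then postcondition Q is guaranteed to hold afterwards. A minimal illustration:

    {x >= 0}       precondition: assumed to hold before execution
    y := x + 1     the program fragment being verified
    {y > 0}        postcondition: provable for every x satisfying the precondition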

Static analyzers

These are tools that automatically analyze the code to find anomalies such as unused variables and unhandled exceptions. Detecting such anomalies helps in improving code quality. Most modern IDEs come with some built-in static analysis capabilities. For example, an IDE will highlight unused variables as you type the code into the editor. Higher-end static analyzers can check for more complex (and sometimes user-defined) anomalies, such as overwriting a variable before its current value is used.
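For instance, both kinds of anomalies mentioned above appear in the following fragment (readInput is a hypothetical helper):

    int x = readInput();   // anomaly: this value is overwritten below before it is ever used
    x = 0;
    int unused;            // anomaly: an IDE would highlight this unused variable

---End of Document---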