Introduction to Automated Testing




What is Software Testing?
Examination of a software unit, several integrated software units, or an entire software package by running it:
- execution is based on test cases
- the expectation is to reveal faults as failures
Failure: incorrect execution of the system, usually the consequence of a fault.
Fault (defect, bug): the result of a human error.

Objectives of testing
- To find defects before they cause a production system to fail.
- To bring the tested software, after correction of the identified defects and retesting, to an acceptable level of quality.
- To perform the required tests efficiently and effectively, within budgetary and scheduling limitations.
- To compile a record of software errors for use in error prevention (by corrective and preventive actions).

Software Testing Process
- Test Planning: includes completion criteria (coverage goal)
- Test Design: approaches for test case selection to achieve the coverage goal
- Test Implementation: find, for each test case, the input/output data and the state before/after the test procedure
- Test Execution: run the tests
- Results Verification: pass or fail? coverage?
- Test Library Management: maintain relationships, keep track of test cases, etc.

What is a Test Case?
A test case is a pair <input, expected outcome>.
For state-less systems (e.g. a compiler), test cases are very simple: the outcome depends solely on the current input.
For state-oriented systems (e.g. an ATM), test cases are not that simple: a test case may consist of a sequence of <input, expected outcome> pairs, and the outcome depends both on the current state of the system and the current input.
ATM example: < check balance, $500.00 >, < withdraw, amount? >, < $200.00, $200.00 >, < check balance, $300.00 >
There are various ways the input may be specified.
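The ATM sequence above can be sketched as a list of <input, expected outcome> pairs driven against a toy stateful system. The Atm class and its operations are hypothetical stand-ins, for illustration only:

```python
# Hypothetical stateful SUT: a minimal ATM account (illustration only).
class Atm:
    def __init__(self, balance):
        self.balance = balance

    def check_balance(self):
        return self.balance

    def withdraw(self, amount):
        self.balance -= amount
        return amount

# A test case for a state-oriented system: a *sequence* of
# <input, expected outcome> pairs, executed in order.
test_case = [
    (("check_balance",), 500.00),
    (("withdraw", 200.00), 200.00),
    (("check_balance",), 300.00),
]

def run_test_case(sut, steps):
    for (op, *args), expected in steps:
        actual = getattr(sut, op)(*args)
        if actual != expected:
            return False   # fail: outcome depends on state + input
    return True

print(run_test_case(Atm(500.00), test_case))  # True
```

Running the same sequence against an account opened with a different balance fails at the first step, which is exactly the state-dependence the slide describes.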

Expected Outcome
An outcome of program execution may include:
- a value produced by the program
- a state change
- a sequence of values which must be interpreted together for the outcome to be valid
A test oracle is a mechanism that verifies the correctness of program outputs:
- generate the expected results for the test inputs
- compare the expected results with the actual results of execution of the implementation under test (IUT)
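A test oracle can be sketched as a function that independently computes the expected result and compares it with the actual output of the IUT. The square_under_test function is a made-up, deliberately buggy IUT, not anything from the source:

```python
# Stand-in IUT: a squaring function with a seeded bug for negative inputs.
def square_under_test(x):
    return x * x if x >= 0 else x * x + 1

# Oracle: generates the expected result and compares it with the actual one.
def oracle(x, actual):
    expected = x ** 2            # independent computation of the expected result
    return "pass" if actual == expected else "fail"

print(oracle(4, square_under_test(4)))    # pass
print(oracle(-3, square_under_test(-3)))  # fail: the oracle reveals the fault
```

The oracle is only as good as its independent model of correct behavior; in practice it may be a reference implementation, a formula, or recorded known-good results.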

Levels of Testing
- Unit testing: individual program units, such as procedures or methods, in isolation
- Integration testing: modules are assembled to construct larger subsystems and tested
- System testing: includes a wide spectrum of testing, such as functionality and load
- Acceptance testing: the customer's expectations from the system

Levels of Testing: Regression Testing
New test cases are not designed; existing tests are selected, prioritized and executed to ensure that nothing is broken in the new version of the software.

When to automate testing? (1)
The benefits of test automation need to be greater than the (expensive!) costs of automation.
General rule of thumb: automate when it is expected that tests will have to be run many times:
- regression testing
- configuration testing
- conformance testing
- an agile development process
- capacity / stress testing
- performance measurements

When to automate testing? (2)
Automated testing is especially beneficial if the tests need to be re-executed quickly:
- frequent recompiles
- a large number of tests
- using an agile development process
An automated test can be duplicated to create many instances for capacity / stress testing.

Example: Test-First process in XP

When NOT to automate
- Initial functional testing: automated testing is more likely to find bugs introduced by changes to the code or the execution environment, rather than in new functionality.
- Automated test scripts may not be ready for the first software release.
- Situations requiring human judgment to determine if the system is functioning correctly.

Types of Testing Tools: Test Planning and Management
- Create/maintain test plans; integrate with the project plan
- Maintain links to requirements/specification; generate a Requirements Test Matrix
- Reports and metrics on test case execution
- Tracking of the history/status of test cases
- Defect tracking

Types of Testing Tools: Test Design & Implementation
- Automatic creation of test cases, based on test design approaches: graph based, data flow analysis, logic based, ... (very few concrete usable tools)
- Random test data generators
- Stubs/mocks
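Two of the items above, a random test data generator and a stub, can be sketched in a few lines. All names (PaymentGatewayStub, checkout) are illustrative, not from any specific tool:

```python
import random

# Random test data generator: produce reproducible random inputs for a test.
def random_ints(n, lo=-1000, hi=1000, seed=42):
    rng = random.Random(seed)        # fixed seed makes the test data repeatable
    return [rng.randint(lo, hi) for _ in range(n)]

# Stub: a fixed-response stand-in for a dependency that is not yet built
# (or is too expensive to call), so the unit under test runs in isolation.
class PaymentGatewayStub:
    def charge(self, amount):
        return "OK"                  # canned answer, no real network call

def checkout(gateway, amount):       # unit under test (illustrative)
    return gateway.charge(amount) == "OK"

# Drive the unit with generated data through the stubbed dependency.
for amount in random_ints(100, lo=1, hi=500):
    assert checkout(PaymentGatewayStub(), amount)
print("all generated cases passed")
```

A mock differs from this stub in that it would also record and verify how it was called, not just return canned answers.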

Types of Testing Tools: Test Execution
- Test drivers and execution frameworks: run test scripts and report results (e.g. JUnit)
- Runtime test execution assistance: memory leak checkers, comparators
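The xUnit pattern named above (JUnit is the Java member of the family) can be sketched with Python's built-in unittest module, which plays the same driver-and-framework role; the add function is an illustrative unit under test:

```python
import unittest

def add(a, b):                 # unit under test (illustrative)
    return a + b

class AddTest(unittest.TestCase):          # a JUnit-style test class
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

# The framework discovers the test_* methods, runs them, and reports results.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The value of the framework is exactly what the slide lists: it runs the scripts and reports results, so the tester writes only the test cases.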

Types of Testing Tools: Test Performance Assessment
- Analysis of the effectiveness of test cases for the extent of the system covered: coverage analyzers report on various levels of coverage
- Analysis of the effectiveness of test cases for bug detection: mutation testing
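Mutation testing, named above, seeds an artificial fault (a mutant) into the code and checks whether the test suite notices. A minimal sketch with illustrative functions:

```python
def max_of(a, b):              # original unit under test (illustrative)
    return a if a > b else b

def max_of_mutant(a, b):       # mutant: the '>' operator mutated to '<'
    return a if a < b else b

# A small test suite: pairs of (arguments, expected outcome).
tests = [((1, 2), 2), ((5, 3), 5), ((4, 4), 4)]

def suite_kills(unit):
    # A mutant is "killed" if at least one test case fails on it;
    # a surviving mutant reveals a weakness in the test suite.
    return any(unit(*args) != expected for args, expected in tests)

print(suite_kills(max_of))         # False: the original passes every test
print(suite_kills(max_of_mutant))  # True: the suite kills this mutant
```

Real mutation tools generate many such mutants automatically and report the kill ratio as a measure of the suite's bug-detection effectiveness.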

Types of Testing Tools: Specialized Testing
- Security testing tools: password crackers, vulnerability scanners, packet crafters, ...
- Performance / load testing tools: performance monitors, load generators, ...

Types of Testing Tools: Capture and Replay
For user interface testing, one approach to automating tests is to record, once the system is working, the input supplied by the user and capture the system's responses. When the next version of the software needs to be tested, play back the recorded user input and check whether the same responses are detected as are stored in the capture file.
Benefits: a relatively simple approach, easy to do.
Drawbacks: very difficult to maintain; specific to one environment.
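The capture-and-replay idea can be sketched independently of any GUI: record <input, response> pairs from a working version, then replay the inputs against the next version and diff the responses. The v1/v2 functions and the capture/replay names are illustrative:

```python
# Capture phase: run the working version and log each input with its response.
def v1(cmd):                   # working version of the SUT (illustrative)
    return cmd.upper()

def capture(sut, inputs):
    return [(i, sut(i)) for i in inputs]      # the "capture file"

# Replay phase: feed the recorded inputs to the new version, diff responses.
def replay(sut, recording):
    return [(i, expected, sut(i)) for i, expected in recording
            if sut(i) != expected]            # list of regressions found

recording = capture(v1, ["hello", "world"])

def v2(cmd):                   # next version, with a regression for "world"
    return "WRLD" if cmd == "world" else cmd.upper()

print(replay(v2, recording))   # reports the mismatched response for "world"
```

The maintenance drawback from the slide shows up immediately: any intentional change to a response invalidates the capture file and forces re-recording.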

Tool support at different levels
- Unit testing: tools such as JUnit
- Integration testing: stubs, mocks
- System testing: security, performance and load testers
- Regression testing: test management tools (e.g. defect tracking, ...)

What do we need to do automated testing?
- A test script (test case specification): the actions to send to the system under test (SUT), the responses expected from the SUT, and how to determine whether a test was successful or not.
- A test execution system: a mechanism to read the test script and connect the test case to the SUT, directed by a test controller.
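The ingredients above — a script of actions and expected responses, a connection to the SUT, and a controller loop — can be sketched as follows. All names (login, logout, the dict-of-callables SUT) are hypothetical:

```python
# Test script: a data structure of actions to send and responses expected.
script = [
    {"action": "login",  "args": ("alice",), "expect": "welcome alice"},
    {"action": "logout", "args": (),         "expect": "bye"},
]

# SUT connection: a dict of callables standing in for the real interface.
sut = {
    "login":  lambda user: f"welcome {user}",
    "logout": lambda: "bye",
}

# Test controller: reads the script, drives the SUT, decides pass/fail.
def controller(script, sut):
    for step in script:
        response = sut[step["action"]](*step["args"])
        if response != step["expect"]:
            return "FAIL"
    return "PASS"

print(controller(script, sut))  # PASS
```

Keeping the script as data rather than code is one answer to the test-script format question raised later in these slides: the controller stays generic while the scripts vary.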

Test Architecture (1)
Includes defining the set of Points of Control and Observation (PCOs).
[Diagram: the test controller reads the test script and drives the test execution system, which interacts with the SUT through a PCO.]
A PCO could be: a particular method to call, a device interface, a network port, etc.

Test Architecture (2)
The test architecture will affect the test script, because it may be significant which PCO is used for an action or response.
[Diagram: two configurations — a test controller exchanging message m with the SUT through a single PCO, versus through two separate PCOs, PCO 1 and PCO 2.]

Potential PCOs
Determining the PCOs of an application can be a challenge. Potential PCOs:
- direct method call (e.g. JUnit)
- user input / output
- data file input / output
- network ports / interfaces
- Windows registry / configuration files
- log files
- temporary files or network ports
- pipes / shared memory

Potential PCOs (2)
- 3rd-party component interfaces
- Lookup facilities:
  - network: Domain Name Service (DNS), Lightweight Directory Access Protocol (LDAP), etc.
  - local / server: database lookup, Java Naming and Directory Interface (JNDI), etc.
- Calls to: remote methods (e.g. RPC), the operating system
For the purposes of security testing, all of these PCOs could be a point of attack.

Distributed Test Architecture (1)
May require several local test controllers and a master test controller.
[Diagram: a master test controller coordinating two local test controllers, each driving one SUT component through its own PCO.]

Distributed Test Architecture (2)
Issues with distributed testing:
- establishing connections at the PCOs
- synchronization
- where are pass/fail decisions made?
- communication among test controllers

Choosing a test architecture
[Diagram: the user drives the browser via mouse clicks / keyboard; the browser talks to the web server via HTTP / HTML; the web server queries the database via SQL.]

Choosing a Test Architecture
Testing from the user's point of view:
- need a test tool to simulate mouse events or keyboard input
- need to be able to recognize correct web pages
- small web page changes might require large changes to the test scripts
Testing without the browser:
- the test script would send HTTP commands to the web server, and check the HTTP messages or HTML pages that are returned
- easier to do, but not quite as realistic
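The "testing without the browser" architecture can be sketched with the standard library alone: start a small HTTP server in-process as a stand-in SUT, send the HTTP request a browser would, and check the returned page. The handler, port, and page content are all made up for the example:

```python
import http.server
import threading
import urllib.request

# Minimal stand-in SUT: a web server returning one fixed HTML page.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello, tester</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):     # keep the test output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Browserless test: send the HTTP command, check status and returned HTML.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    page = resp.read().decode()
    ok = resp.status == 200 and "Hello, tester" in page

server.shutdown()
print(ok)  # True
```

The trade-off from the slide is visible here: the test never exercises the browser's rendering or JavaScript, so it is simpler and more stable, but less realistic than driving the UI.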

Test Scripts
What should the format of a test script be?
- tool dependent?
- a standard test language?
- a programming language?

Test Script Development
Creating test scripts follows a parallel development process, including: requirements creation, debugging, configuration management, maintenance, and documentation.
Result: they are expensive to create and maintain.

Making the automation decision (1)
- Will the user interface of the application be stable or not?
- To what extent are oracles available?
- To what extent are you looking for delayed-fuse bugs (memory leaks, wild pointers, etc.)?
- Does your management expect to recover its investment in automation within a certain period of time? How long is that period, and how easily can you influence these expectations?
- Are you testing your own company's code or the code of a client? Does the client want (is the client willing to pay for) reusable test cases, or will it be satisfied with bug reports and status reports?
- Do you expect this product to sell through multiple versions?

Making the automation decision (2)
- Do you anticipate that the product will be stable when released, or do you expect to have to test Release N.01, N.02, N.03 and other bug-fix releases on an urgent basis after shipment?
- Do you anticipate that the product will be translated to other languages? Will it be recompiled or re-linked after translation (do you need to do a full test of the program after translation)? How many translations and localizations?
- Does your organization make several products that can be tested in similar ways? Is there an opportunity for amortizing the cost of tool development across several projects?

Making the automation decision (3)
- How varied are the configurations (combinations of operating system version, hardware, and drivers) in your market? To what extent do you need to test compatibility with them?
- What level of source control has been applied to the code under test? To what extent can old, defective code accidentally come back into a build?
- How frequently do you receive new builds of the software? Are new builds well tested (integration tests) by the developers before they get to the tester?

Making the automation decision (4)
- To what extent have the programming staff used custom controls?
- How likely is it that the next version of your testing tool will have changes in its command syntax and command set?
- What are the logging/reporting capabilities of your tool? Do you have to build these in?

Making the automation decision (5)
- To what extent does the tool make it easy for you to recover from errors (in the product under test), prepare the product for further testing, and re-synchronize the product and the test (get them operating at the same state in the same program)?
- In general, what kind of functionality will you have to add to the tool to make it usable?
- Is the quality of your product driven primarily by regulatory or liability considerations, or by market forces (competition)?
- Is your organization subject to a legal requirement that test cases be demonstrable?

Making the automation decision (6)
- Will you have to be able to trace test cases back to customer requirements and to show that each requirement has associated test cases?
- Is your company subject to audits or inspections by organizations that prefer to see extensive regression testing?
- If you are doing custom programming, is there a contract that specifies the acceptance tests? Can you automate these and use them as regression tests?
- What are the skills of your current staff?

Making the automation decision (7)
- Do you have to make it possible for non-programmers to create automated test cases?
- To what extent are cooperative programmers available within the programming team to provide automation support, such as event logs, more unique or informative error messages, and hooks for making function calls below the UI level?
- What kinds of tests are really hard in your application? How would automation make these tests easier to conduct?

Suggested reading
- Henk Coetzee, Best Practices in Software Test Automation (2005). Online at: http://www.testfocus.co.za/feature%20articles/july2005.htm
- C. Kaner, Architectures of Test Automation (2000). Online at: http://www.kaner.com/testarch.html
- C. Kaner, Improving the Maintainability of Automated Test Suites, Software QA, Vol. 4, No. 4 (1997). Online at: www.kaner.com/pdfs/autosqa.pdf
- J. Bach, Test Automation Snake Oil, Proceedings of the 14th Int'l Conference on Testing Computer Software (revised 1999). Online at: www.satisfice.com/articles/test_automation_snake_oil.pdf