E-vote 2011 SSA-U Appendix 5 Testing and Approval
Project: E-vote 2011
Version: 1.0
Date: 26/10/2009





Change log

Version   Date       Author   Description/changes
0.1       26.10.09             First version

CONTENT

1. Introduction
2. Testing Process Overview
3. Unit Testing
   3.1. Purpose
   3.2. Strategy
   3.3. Tools
4. Integration Testing
   4.1. Purpose
   4.2. Strategy
   4.3. Tools
5. System Testing
   5.1. Purpose
   5.2. Strategy
   5.3. Tools
6. Accessibility and Usability Testing
   6.1. User Tests
7. Acceptance Testing
8. Bug Reporting and Change Management
   8.1. Bug Reporting
   8.2. Change Management

1. Introduction

This document elaborates Requirement ST4.4. It describes the testing strategy to be applied in this project, covering usability testing, unit testing, accessibility testing, system testing, regression testing, volume testing, performance testing and security testing. Acceptance testing is described in detail in section 2.5 of the SSA-U Software Development Agreement.

Indra will set up the testing environment needed to carry out all the tests described below, as well as the infrastructure necessary for the Customer Acceptance Tests. All testing activities will be documented as agreed with the Customer.

2. Testing Process Overview

The testing process is composed of a set of tasks related to the development tasks. The usual way of showing these relationships is the V-Model:

[Figure: V-Model diagram relating development tasks (left side) to the corresponding testing tasks (right side)]

In this V-Model, the left side shows the development tasks, while the right side shows the corresponding testing tasks. The V-Model is used throughout this document to describe the testing activities and deliverables in detail.

3. Unit Testing

3.1. Purpose

The main purpose of unit testing is to validate the construction of the software components/modules.

3.2. Strategy

The common strategies for unit testing are black-box testing and white-box testing. In black-box testing, test cases are designed by exercising the interfaces provided by the module or component, without knowledge of how the component is built or designed. In white-box testing, test cases are designed by exercising the internal mechanisms of the module or component under test; knowledge of the component's internal design and implementation is therefore required.

Independently of the chosen strategy (white-box or black-box), from an operational point of view a unit test is performed by designing a data set that is fed to the component/module and comparing its outputs with the expected outputs: if the expected and actual outputs match, the test case passes; otherwise it fails.

Since its purpose is to validate construction, unit testing is closely related to the build process and is usually automated as part of it (a compile-test cycle), as in a continuous integration environment.

3.3. Tools

Several tools are available to help execute unit tests, depending on the programming language and technology used to implement the software component. For example, if the component is implemented in Java, the following unit testing tools are available:

- JUnit: allows test cases to be defined as Java classes that invoke the methods provided by the component (Java library) under test and make assertions on the values returned by those invocations.
- DBUnit: a specialization of JUnit for tests involving persistence (a database); it allows pre-loading the database with known data so that assertions can be made in the test case, and cleaning the database after each test case (Java class) is run.
- TestNG: an alternative that is functionally equivalent to JUnit.

These tools can be integrated in a continuous integration environment, which can be implemented, for example, with the following tools:

- CruiseControl: monitors the activity (commits) on the software repository (CVS, SVN, etc.) and triggers an automated build when certain conditions are met.
- Maven: runs the actual build process, including the execution of the test classes (JUnit, etc.).

All the tools named in this section are freely available products of open source projects.
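As an illustration of the test cases described above, the following is a minimal JUnit (version 4 style) sketch. The VoteCounter class is a hypothetical component invented for the example and is included inline so the sketch is self-contained; it is not part of the project code.

```java
import java.util.HashMap;
import java.util.Map;

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class VoteCounterTest {

    // Hypothetical component under test, included here only so the
    // sketch compiles on its own; not part of the real project code.
    static class VoteCounter {
        private final Map<String, Integer> votes = new HashMap<>();

        void registerVote(String candidate) {
            votes.merge(candidate, 1, Integer::sum);
        }

        int votesFor(String candidate) {
            return votes.getOrDefault(candidate, 0);
        }

        int totalVotes() {
            return votes.values().stream().mapToInt(Integer::intValue).sum();
        }
    }

    private VoteCounter counter;

    @Before
    public void setUp() {
        counter = new VoteCounter(); // fresh fixture for every test case
    }

    @Test
    public void countStartsAtZero() {
        // Expected output compared with the real output, as described above
        assertEquals(0, counter.totalVotes());
    }

    @Test
    public void registeredVotesAreCounted() {
        counter.registerVote("candidate-A");
        counter.registerVote("candidate-A");
        assertEquals(2, counter.votesFor("candidate-A"));
    }
}
```

When the project is built with Maven, test classes like this one are executed automatically during the build, so a failing test case breaks the automated build in the continuous integration cycle described above.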

4. Integration Testing

4.1. Purpose

The main purpose of integration testing is to validate the design and architecture chosen to implement the software solution.

4.2. Strategy

Integration testing is mainly black-box testing: the focus is on assembling the inter-dependent modules/components on which the architecture of the solution is based, in order to check that the whole subsystem complies with the functional and performance requirements.

4.3. Tools

Several tools are available, depending on the technology used in the solution. For example, for web-based systems the following tools can be used:

- JMeter: a tool aimed at performance testing. Several technologies can be tested with it: web, web services, databases, etc. The tool provides facilities for load generation, SUT (system under test) monitoring and results reporting.
- Canoo WebTest: a tool aimed at functional testing. Basically, it invokes web pages through HTTP and compares the actual result with the expected one. Some automation of the functional testing process can be achieved with this tool, which makes it very useful for regression testing.

All the tools named in this section are freely available, and some are products of open source projects.
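Canoo WebTest scripts its checks in its own Ant-based XML format; purely as an illustration of the same black-box idea, the following is a minimal sketch in plain Java of an HTTP functional check. The URL and the expected status code are assumptions made for the example.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal black-box functional check over HTTP: invoke a page and
// compare the real result with the expected one (hypothetical URL).
public class LoginPageCheck {

    public static void main(String[] args) throws IOException {
        URL url = new URL("http://localhost:8080/evote/login"); // assumed endpoint
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        int status = connection.getResponseCode();
        if (status == 200) {
            System.out.println("PASS: login page reachable");
        } else {
            System.out.println("FAIL: unexpected HTTP status " + status);
        }
        connection.disconnect();
    }
}
```

Checks of this kind, run automatically after every build, are what makes this approach useful for regression testing.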

5. System Testing

5.1. Purpose

The purpose of system testing is requirements validation, both functional and non-functional.

5.2. Strategy

System testing is mainly twofold:

- End-to-end testing: the use cases describing the system functionality are taken as the basis for the test case specification. All the layers of the system (user interface, business logic, database, external systems, etc.) need to be available (built, tested and deployed) to run this kind of test case.
- Performance testing: aimed at validating system behavior (in terms of response time, maximum concurrency, throughput, etc.) under several load conditions (test scenarios). In this kind of testing, load characterization is a central concept: it means specifying in the test case the load contribution of every user type (role) or external system present in the test scenario (for example, in terms of hits or invocations per time unit), and the way this load changes during the test execution (an initial ramp until the steady load is reached, the steady load level and its duration, and a final ramp).

Typically, several kinds of performance tests can be designed, for example:

- Load testing, aimed at understanding system behavior under the expected load; used mainly for bottleneck detection.
- Stress testing, aimed at breaking the system: the load offered to the system is well above that expected or specified by the requirements, and helps administrators determine the maximum load the system can handle before serious problems arise.
- Endurance/soak testing, aimed at ensuring that the system can sustain the expected load continuously.

5.3. Tools

For this type of testing, the same tools used for integration testing are usually available. Performance tests can usually be automated through scripting in order to achieve the loads specified in the test case scenario.
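JMeter test plans are normally defined in the tool itself; to make the load-characterization concept concrete, the following is a minimal load-generation sketch in plain Java. The target URL, the number of concurrent users and the duration are illustrative assumptions, and a real test would add ramps and proper result collection as described above.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal load-generation sketch: N concurrent "users" repeatedly hit a
// page for a fixed duration; response times are printed for later analysis.
public class SimpleLoadTest {

    private static final String TARGET = "http://localhost:8080/evote/ballot"; // assumed
    private static final int USERS = 10;            // concurrency level (assumed)
    private static final long DURATION_MS = 30_000; // steady-load duration (assumed)

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(USERS);
        long end = System.currentTimeMillis() + DURATION_MS;

        for (int i = 0; i < USERS; i++) {
            pool.submit(() -> {
                while (System.currentTimeMillis() < end) {
                    try {
                        long start = System.nanoTime();
                        HttpURLConnection c =
                                (HttpURLConnection) new URL(TARGET).openConnection();
                        int status = c.getResponseCode();
                        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                        System.out.println(status + " in " + elapsedMs + " ms");
                        c.disconnect();
                    } catch (Exception e) {
                        System.out.println("ERROR: " + e.getMessage());
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(DURATION_MS + 10_000, TimeUnit.MILLISECONDS);
    }
}
```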

6. Accessibility and Usability Testing

All analysis and solutions concerning the implementation of accessibility will be documented in detail within the group of deliverables listed previously. Among these, the key section is the one defining the accessibility tests in all their aspects, from the procedures to the tools used, concluding with the analysis of the results obtained. Where applicable, the corrective actions taken to fix the errors detected during the tests will also be documented.

Specifically, accessibility testing comprises the following tasks:

- Definition of tests: once a system has been developed with the objective of fulfilling the accessibility rules, the accessibility experts will define a detailed test plan based on automatic checks, expert reviews and laboratory sessions with disabled users. Likewise, the metrics that will drive the subsequent reports and analysis after the tests are executed will be defined.
- Execution of tests: after the tests have been defined in detail, they are run with the participation of accessibility experts. Three different test types can be distinguished:
  - Automatic tests: performed with automatic accessibility verification tools, such as TAWDIS or AChecker, to identify the shortfalls that can be found by a computer program. Such software keeps improving in precision, but it still requires review by an accessibility specialist, since some standards cannot be tested by a computer program.
  - Expert tests: expert product validations are conducted precisely for those standards that cannot be fully checked by the automatic validation programs. All the specifications of use and the system coding are reviewed in order to verify their full compliance with the recommendations for an accessible product. These validations use the current Norwegian rules as reference, together with the internationally accepted WAI standards, in their 1.0 and 2.0 specifications.
  - Laboratories with disabled users: beyond the expert verifications, the system is subjected to tests with real users with different types of disabilities. This type of test is usually decisive for determining the real degree of accessibility of the development, since it reveals fundamental aspects that go beyond the good practices generally covered by the rules.
- Analysis of results: finally, the results of the previous tests are analyzed so that reports can be issued on the accessibility status, identifying the strengths, the shortfalls and the corrective actions to carry out. Based on these results, an error correction phase is performed, after which the product is subjected again to the same complete set of tests, as in the first iteration.

6.1. User Tests

In a similar way to the accessibility tests, tests with users are proposed in order to verify that the outcome of the prototype and design phase works as expected with real users. In this case, the following test types are defined:

- Expert analysis: the product is subjected to expert analysis from the usability point of view, to identify the good practices and the cases that are not adequately resolved, so as to facilitate the use of the application.
- Heuristic analysis: the heuristic evaluation (evaluation by criteria) is conducted by specialized evaluators based on principles established by the HCI discipline. The criteria recommended by authorities in the field such as Jakob Nielsen (visibility of system status, match between the system and the real world, user control and freedom, consistency and standards, error prevention, recognition rather than recall, flexibility and efficiency of use, aesthetic and minimalist design, helping users recognize, diagnose and recover from errors, help and documentation) are used as a basis, to which our experts add their own criteria based on experience.
- User laboratories: finally, as in the case of accessibility, the product is subjected to tests with real target users, which show how the application is actually used. User behavior, reactions and opinions become known through the laboratory tests; the problems detected are analyzed in depth, as are the expectations and suggestions of these users. The results include ratios of effectiveness, efficiency and satisfaction, together with comments, tendencies and opinions, and come with recommendations for improvement.

This entire procedure will be properly defined and its results recorded. The results obtained will be analyzed, and the conclusions will feed back into a cyclical process of improvement of the user interfaces from all possible points of view.

7. Acceptance Testing

As described in section 2.5 of the SSA-U Software Development Agreement.

8. Bug Reporting and Change Management

Bug reporting and change management are the two basic mechanisms for keeping the errors detected in the system (through failing tests) identified and under control.

8.1. Bug Reporting

Bug reporting is usually supported by an application providing functions similar to the following:

- Bug registration: for every bug detected, the reporting user enters into the application a description of the bug, the detection date, screen captures or log files, and any other information that can be useful for bug fixing.
- Bug assignment: authorized software development personnel receive the new bug registrations and assign them to developers to be solved; this change in the status of the bug is visible to the reporting users.
- Bug solving: when the developer modifies the software system to solve the bug, a new change in the state of the bug becomes visible to the reporting users.
- Bug delivery: when the modification to the software is scheduled for delivery, a new change in the bug state becomes visible to the reporting users.
- Bug correction/rejection: when the modification to the software is delivered (deployed) as part of a system upgrade, the reporting user can check whether the bug is really solved; the result of that check (bug corrected/not corrected) is recorded in the system, either closing the bug report or starting a new correction cycle.

Depending on the complexity of the bug life-cycle to be implemented, several tools can be used; a sketch of this life-cycle as a state machine is given at the end of this section. Typical freely available, open source tools include Bugzilla, JTrac and Scarab.

8.2. Change Management

Change management is usually supported by a software repository that versions the different software artifacts (source code files, configuration files, user documents, models, etc.). Every time a software artifact is changed and committed to the repository, it is stored with a new version number. The different versions of every artifact can be inspected and compared, and a set of artifacts, each in a given version, can be tagged as a release, so that it can be exported from the repository as a unit when needed.

One of the reasons a software artifact changes is bug correction. A set of bug corrections (plus, eventually, new functionality) can be tagged as a release to be delivered and deployed; at that moment, users will be able to verify the bug corrections and report the results of that verification in the bug reporting application.
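As referenced in section 8.1, the bug life-cycle described there can be summarized as a small state machine. The following Java sketch uses state names taken from the stages listed above; the transition rules are illustrative, since the actual life-cycle depends on the bug tracking tool chosen.

```java
// Sketch of the bug life-cycle of section 8.1 as a simple state machine.
// States mirror the stages listed above; transitions are illustrative.
public enum BugState {
    REGISTERED,  // bug reported with description, date, captures, logs
    ASSIGNED,    // assigned to a developer
    SOLVED,      // software modified by the developer
    DELIVERED,   // fix delivered/deployed as part of a system upgrade
    CORRECTED,   // reporting user confirmed the fix; report closed
    REJECTED;    // fix did not solve the bug; a new correction cycle starts

    public boolean canTransitionTo(BugState next) {
        switch (this) {
            case REGISTERED: return next == ASSIGNED;
            case ASSIGNED:   return next == SOLVED;
            case SOLVED:     return next == DELIVERED;
            case DELIVERED:  return next == CORRECTED || next == REJECTED;
            case REJECTED:   return next == ASSIGNED; // new correction cycle
            default:         return false;            // CORRECTED is terminal
        }
    }
}
```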