Quality Assurance Plan




CloudSizzle: Quality Assurance Plan

Contents
General info
Changelog
1. Introduction
2. Quality goals and risks
3. Quality Assurance practices
   3.1 Testing levels
   3.2 Testing
      3.2.1 Test case based testing
      3.2.2 Exploratory testing
      3.2.3 Automated unit testing
      3.2.4 Performance and load testing
      3.2.5 Regression testing
      3.2.6 Test suites
      3.2.7 Pass and fail criteria for test cases
      3.2.8 Definition of done
   3.3 Other quality assurance practices
      3.3.1 Code and document reviews
      3.3.2 Coding and code documentation standards
      3.3.3 Static code analysis methods
      3.3.4 Defect tracking
      3.3.5 Pair programming
      3.3.6 Refactoring
      3.3.7 Unit test and code coverage
      3.3.8 Quality checklist
      3.3.9 Collecting feedback from the customer
      3.3.10 Automated acceptance testing
   3.4 Tracing quality goals to QA practices
4. Schedule
   4.1 Iteration 1 schedule regarding QA
   4.2 Iteration 2 schedule regarding QA
5. Resources, tools and environments
   5.1 Quality assurance roles and responsibilities
   5.2 Tools
   5.3 Testing environments and test data
      5.3.1 Testing environments
      5.3.2 Test data
6. Deliverables
   6.1 Quality assurance plan
   6.2 Quality assurance report
   6.3 Test cases
   6.4 Test session charters
   6.5 Defect reports
   6.6 Summary of peer testing results
   6.7 Quality checklist
7. Evaluation and feedback
References

General info
Responsible for document:
Status: Proposal

Changelog

Version  Date        Change description
0.1      27.10.2009  Created the initial template for the QA plan
0.2      27.10.2009  Updated the introduction and quality goals
0.3      27.10.2009  Updated the quality assurance practices and attached the V-model of testing picture
0.4      27.10.2009  Updated the section on deliverables
0.5      27.10.2009  Updated the resources, tools and environments
0.6      27.10.2009  Updated the evaluation and feedback section
0.7      28.10.2009  Updated the QA practices and testing section
0.8      28.10.2009  Updated the schedule
0.9      28.10.2009  Corrected typos and made small modifications where needed
1.0      28.10.2009  Changed the status of the document to proposal

Author:
Reviewer:

1. Introduction

This document describes the most important quality goals, the quality assurance practices used, the environments, the QA deliverables produced and the evaluation of the QA practices for the CloudSizzle system. A brief schedule is also presented that describes when the different QA practices are performed in the project. This document will be updated during the project as feedback from the customer regarding quality assurance is received and as new information from the performed QA practices is gained. Table 1.1 displays the intended audience of this document. It is worth noting that one of the business goals of the CloudSizzle system is to gain an understanding of the Smart-M3 RDF Store. Thus, the system will be developed as a skeleton system that can be used to

validate the architecture and be developed further if the customer sees it necessary. As such, the system will not be developed as a production-quality service in the beginning.

Table 1.1. Intended audience of this document.

- The customer: To be able to see that the planned quality goals and practices are sufficient and meet the needs of the customer. To be able to give feedback about the quality goals and performed quality assurance practices.
- System architect: To use this document as a basis for the architecture description of the system, especially for the most important quality properties of the system.
- Project manager: To be able to see the status of the quality goals, the number of defects and the performed quality assurance practices.
- Quality assurance manager: To use this document as a way to communicate the planned quality goals and practices.
- Developers: To understand the quality goals and communicate the quality assurance practices.
- Testers: To learn about the quality goals and quality assurance practices.
- Mentor: To get an understanding of the quality goals and quality assurance practices of the project and the status of these goals and practices.
- Peer group: To give an understanding of how the peer evaluation is performed.

2. Quality goals and risks

Table 2.1 lists the most important quality goals of the system and deliverables as well as their verification criteria. These goals and their status will be updated throughout the project in order to reflect the most important quality goals of the system. The quality goals and their priority are updated by the customer in the iteration planning meetings.

Table 2.1. List of quality goals

QG-1 Maintainability
Description: The system shall be easy to maintain so that it is possible to test, fix defects, make changes and improve the code without huge amounts of effort.
Verification:
- All methods and functions must have comments
- The system shall follow the DRY principle
- The code complexity of different

parts of the system shall be at maximum between 7 and 9
- It must be possible to write unit tests for all critical functionality

QG-2 Extensibility
Description: The system shall be easy to extend so that new features and functionality can be implemented.
Verification:
- The system architecture must be modular
- There shall not exist too many dependencies between classes
- The interfaces must be well defined and documented in the code

QG-3 Documentation
Description: The documentation of code, user manuals and other documents must be done well.
Verification:
- The documents should be checked so that they do not contain too many typing errors
- The text must be understandable by a person that has a basic understanding of software engineering and computer science
- The documents shall follow the DRY principle

QG-4 Security and privacy
Description: The system must be secure so that it prevents information leaks, information loss and illegal use.
Verification:
- The system must prevent the most usual web attacks such as SQL injections

- Best practices regarding security must be followed
- Users' passwords to other services such as Noppa and Oodi must be handled securely so that they are not compromised to third parties
- Users shall be able to change their privacy settings

QG-5 Internationalization
Description: The system must support internationalization without the need to modify the architecture.
Verification:
- The system must support language files that enable the customer to easily add new languages
- The system must support Finnish and English

QG-6 Performance and scalability
Description: The overall performance of the system must be sufficient and the system must be scalable without the need to modify the architecture.
Verification:
- The system must perform its most important requirements within the time requirements defined in the architecture description
- The system must be scalable to 20,000 users without the need to modify the architecture

QG-7 Interoperability
Description: The system must be interoperable so that it is easy to link more systems to it, if needed.
Verification:
- It must be possible to connect new services to the system without the need to modify the architecture

QG-8 Completeness
Description: The degree to which the system possesses the necessary and sufficient functions to satisfy the user needs.
Verification:
- The system must satisfy the most important requirements defined and verified by the customer at the moment of development

QG-9 Correctness
Description: The degree to which the system performs its required functions.
Verification:
- All critical defects must be fixed at the end of the project so that the system can perform its required functions

Table 2.2 lists some of the quality risks for the system. This list will be updated throughout the project.

Table 2.2. List of quality risks

QR-1: Group members are not familiar with Python
QR-2: Code documentation is not sufficient
QR-3: The performance of the system suffers from the many components
QR-4: The system does not satisfy the customer needs
QR-5: The system is difficult to maintain

3. Quality Assurance practices

This chapter presents the quality assurance practices that are performed during the project.

3.1 Testing levels

During the project various types of testing will be performed in order to understand and improve the overall quality of the system. Testing will be performed at different levels, as Figure 3.1 shows. Unit testing will be the first stage in the testing process. Testing at this stage is done with white-box methods on small units of the system. A unit is the smallest possible testable software component, which in this case means classes and their methods. Faults discovered at this stage are usually much cheaper to fix than in the later stages or when the system has already been delivered to the customer.
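To make the unit level concrete, here is a minimal sketch of a unit test for a single class and method, written with Python's built-in unittest library; the SearchQuery class is a hypothetical example, not part of CloudSizzle:

```python
import unittest


class SearchQuery:
    """Hypothetical unit under test: a class that normalizes query text."""

    def __init__(self, text):
        self.text = text

    def normalized(self):
        # Collapse runs of whitespace and lowercase the query.
        return " ".join(self.text.split()).lower()


class SearchQueryTest(unittest.TestCase):
    """White-box tests for the smallest testable unit: one method."""

    def test_whitespace_is_collapsed(self):
        self.assertEqual(SearchQuery("  Foo   BAR ").normalized(), "foo bar")

    def test_empty_input(self):
        self.assertEqual(SearchQuery("").normalized(), "")
```

Tests written in this style can be run with `python -m unittest`, which is also how a continuous integration system would execute them.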

Integration testing is the next phase in the testing process. The purpose of integration testing is to discover faults between the interfaces of different components of the system. Continuous integration and automated unit tests are somewhat helpful for integration testing. However, integration testing is usually performed with black-box methods such as equivalence partitioning, boundary value analysis and pairwise testing.

System testing is the third stage of testing in the V-model of testing. Its purpose is to test the integrated system to evaluate its compliance with the requirements. It is done using black-box methods and requires no knowledge of the inner design of the system. Some types of system testing are, for example, performance, security, exploratory and regression testing.

Acceptance testing is the final phase of testing. It determines whether the system satisfies the requirements that have been specified in the requirements document. It is also done using black-box methods. Testing in this phase can be divided into testing that is done by the project team and testing that is done by the customer. The project team can use, for example, the Selenium framework as a method of acceptance testing. The acceptance testing that is done by the customer, on the other hand, is done in a real environment where the system will be used, and the customer determines whether the requirements are satisfied or not.

Figure 3.1. The V-model of testing.

3.2 Testing

This chapter describes the different testing methods that will be used during the project. Generally, testing activities will be carried out throughout the project.

3.2.1 Test case based testing

Test case based testing is a form of functional testing. During the project, test cases will be written for all use cases that are developed in the iterations. These test cases will be documented in a spreadsheet in Google Docs. Each test case will have a unique identifier, a short name, a description, related requirements or use cases, a priority, input values, steps, expected results and notes.
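The boundary value analysis mentioned above can guide the choice of input values for test cases. A small sketch, assuming a hypothetical input field that accepts integers from 1 to 100:

```python
# Boundary value analysis for a hypothetical field accepting values 1..100:
# test just below, at, and just above each boundary of the valid range.
LOWER, UPPER = 1, 100


def is_valid(value):
    """Hypothetical validator for the input field under test."""
    return LOWER <= value <= UPPER


# (input value, expected validity) pairs derived from the boundaries.
boundary_cases = [
    (LOWER - 1, False),  # just below the lower boundary
    (LOWER, True),       # at the lower boundary
    (LOWER + 1, True),   # just above the lower boundary
    (UPPER - 1, True),   # just below the upper boundary
    (UPPER, True),       # at the upper boundary
    (UPPER + 1, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert is_valid(value) == expected, value
```

The derived input values would then be recorded in the "input values" column of the test case spreadsheet.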
The test logs that are documented on the same spreadsheet consist of a date, the name of the tester, the test environment, the list of executed tests, their status (passed or failed) and defect identifiers.
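As an illustration only, one row of such a test log could be represented as the following structure; all field values here are invented:

```python
# A hypothetical test log entry with the fields listed above (values invented).
test_log_entry = {
    "date": "2009-11-05",
    "tester": "N.N.",
    "environment": "Firefox 3.5 / staging server",
    "executed_tests": ["TC-001", "TC-002", "TC-003"],
    "status": {"TC-001": "passed", "TC-002": "failed", "TC-003": "passed"},
    "defects": {"TC-002": ["D-017"]},  # failed tests reference defect IDs
}

# Failed tests can be listed directly from the status column.
failed = [tc for tc, s in test_log_entry["status"].items() if s == "failed"]
```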

The testing of the test cases is performed both manually and automatically. Manual testing is performed on functionality that is of low importance, while automated testing is performed, using the Selenium framework, on requirements that are of high importance and change frequently. In addition, some of the pairwise test cases will be generated with a tool called Allpairs.

3.2.2 Exploratory testing

Exploratory testing is a form of functional testing. The distinction between exploratory testing and other types of functional testing is that the testing is designed and executed at the same time. Exploratory testing, and finding defects with the help of it, is very much dependent on the experience of the tester. During the project, Session-based Test Management will be used as the exploratory testing approach. These sessions will be conducted in both the I1 and I2 iterations in the system testing phase of functionality. A session will last approximately 60-90 minutes, and during the session the tester will produce a test charter as described in section 6.4, Test session charters. The tester can use mind-mapping software during the session to help quickly document the decisions made and the defects found during the session.

3.2.3 Automated unit testing

When developers implement classes and methods for these classes, they are also responsible for writing unit tests for the implemented functionality. These unit tests will be grouped together as automated test suites that are executed by the continuous integration system on a regular basis and when a developer commits his work to the repository. The unit tests will be written with the unittest (sometimes referred to as PyUnit) library that is included with Python.

3.2.4 Performance and load testing

Performance testing is carried out on some of the most critical functionality. A tool called pyunitperf is used for measuring the performance and scalability of functionality contained within existing PyUnit tests. JMeter is another tool that can be used during the project to measure the load of the system.
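pyunitperf wraps time limits around existing PyUnit tests; the same idea can be hand-rolled with the standard library alone, as in this sketch, where both the indexed operation and the one-second budget are made-up examples:

```python
import time
import unittest


def build_search_index(items):
    """Hypothetical operation whose running time is being checked."""
    return dict((item, position) for position, item in enumerate(items))


class SearchIndexPerformanceTest(unittest.TestCase):
    # Made-up time budget for illustration; real limits would come from
    # the time requirements in the architecture description (see QG-6).
    TIME_BUDGET_SECONDS = 1.0

    def test_indexing_stays_within_budget(self):
        items = ["item-%d" % i for i in range(100000)]
        start = time.perf_counter()
        build_search_index(items)
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, self.TIME_BUDGET_SECONDS)
```

A test like this fails when the operation exceeds its budget, so performance regressions surface in the same test run as functional ones.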
3.2.5 Regression testing

Regression testing is performed during the project with the help of the continuous integration tool and manual testing. The continuous integration tool executes the automated unit tests and can be used to easily discover if something has broken as a result of a change. Manual testing may also be performed on some critical functionality that has changed, in order to ensure that everything is working as specified in the requirements document.

3.2.6 Test suites

The different test cases will be grouped into test suites in order to ease the management of tests. The tests will generally be grouped by functionality, but depending on the situation another approach can also be used. The different test suites will also be prioritized according to customer value and the criticality or complexity of the component.

3.2.7 Pass and fail criteria for test cases

A test has passed if the actual result is the same as the expected result. If the actual result is not the same as the expected result defined in the test case, then the test has failed. A test suite has passed if all the tests contained in it have passed. In other words, all tests inside a test suite have to pass in order for the test suite to pass.

3.2.8 Definition of done

Requirements that are implemented but not tested are not considered ready. Only the requirements that have been tested and have passed all the tests related to the requirement can be considered done.
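The grouping and the pass criterion above can be sketched with unittest's TestSuite: the suite as a whole passes only when every test in it passes. The two test cases here are toy examples:

```python
import unittest


class LoginTests(unittest.TestCase):
    """Toy test case grouped under the 'login' functionality."""

    def test_username_is_trimmed(self):
        self.assertEqual("  alice ".strip(), "alice")


class SearchTests(unittest.TestCase):
    """Toy test case grouped under the 'search' functionality."""

    def test_query_is_lowercased(self):
        self.assertEqual("FooBar".lower(), "foobar")


# Group test cases into one suite, here by functionality.
loader = unittest.TestLoader()
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(LoginTests))
suite.addTests(loader.loadTestsFromTestCase(SearchTests))

result = unittest.TextTestRunner(verbosity=0).run(suite)
# The suite passes only if every test it contains passed.
suite_passed = result.wasSuccessful()
```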

3.3 Other quality assurance practices

This section describes other quality assurance practices that will be performed during the project.

3.3.1 Code and document reviews

Code reviews

Code reviews will be organized twice in every iteration. The purpose of code reviews is to communicate best practices regarding programming and to find possible coding standard violations and defects in the code. Thus, they function both as a way to educate programmers and as a way to improve the quality of the system. The code reviews will be held in the group meeting room in the CS building. The group members whose code is being reviewed must attend the code review session in order to present their code. The architect and other developers are also required to attend the code review, and their role is to function as reviewers. The reviewers will receive the code to be reviewed before the meeting so that they can prepare in time. The group members attending the code review will evaluate and review the code based on the quality checklist that is described in more detail in section 3.3.8. The most critical parts of the system will be reviewed in the code reviews. Participants may also suggest other noteworthy or otherwise important parts of the system as subjects of code reviews. These suggestions must, however, be sent in advance to the other participants.

Document reviews

During the iterations the project team will have internal deadlines for the deliverables in order to ensure that there is enough time to write them well. Document reviews will be carried out before these internal deadlines. The process of document reviews is similar to that of code reviews above.

3.3.2 Coding and code documentation standards

As the system is developed with the Python programming language, the coding standard that is followed is Python Enhancement Proposal 8: Style Guide for Python Code (http://www.python.org/dev/peps/pep-0008/).
It can be regarded as the official coding standard for Python, and it gives a detailed description of how to write the code. The Django MVC framework that is used in the project also has a coding convention that is based on PEP 8 (http://docs.djangoproject.com/en/dev/internals/contributing/#coding-style). Accordingly, when writing code in Django, the coding conventions of the framework must be used. Furthermore, there is a Python Enhancement Proposal that describes some best practices when programming with Python: PEP 20: The Zen of Python (http://www.python.org/dev/peps/pep-0020/) describes these best practices as aphorisms.

The code needs to be documented in order to improve the maintainability and refactoring of the code. Thus, it is important to follow a standard for code documentation. Python Enhancement Proposal 257: Docstring Conventions (http://www.python.org/dev/peps/pep-0257/) will be used as the standard for writing docstrings in Python. One advantage of using a standard for code documentation is that it is then possible to generate the documentation from the source code. In this case the Python module pydoc will be used to generate the documentation.

3.3.3 Static code analysis methods

Code review is a form of static code analysis. However, this chapter describes how automated static code analysis methods will be used to find coding standard violations and bugs in the system. Pylint (http://www.logilab.org/857/) is a tool that can be used to find bugs and signs of poor quality by examining the source code against coding standards. Pylint can be used together with a continuous integration tool to generate reports about code quality. As Pylint is bundled with Pydev, a plugin that enables users to use Eclipse for programming in Python, the developers can easily check their source code for problems before committing it to the repository.
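Such static checks can also be scripted directly. As a small illustration (not one of the tools used in the project), a rough approximation of the cyclomatic complexity measure, computed with Python's standard-library ast module by counting branch points:

```python
import ast


def approximate_complexity(source):
    """Rough cyclomatic complexity: 1 + the number of branch points."""
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))


sample = '''
def classify(n):
    if n < 0:
        return "negative"
    for _ in range(n):
        pass
    return "non-negative"
'''

complexity = approximate_complexity(sample)  # 1 + one "if" + one "for" = 3
```

Dedicated tools count more constructs than this sketch does, but the principle is the same: the more branch points, the harder the code is to test exhaustively.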
RATS (Rough Auditing Tool for Security, http://www.fortify.com/security-resources/rats.jsp) is a tool that can be used to scan Python source code in order to find security issues. This tool should be used

to complement code reviews and other manual static code analysis methods in finding issues related to security.

There is also a tool called pygenie (http://www.traceback.org/2008/03/31/measuring-cyclomatic-complexity-of-python-code/) that measures the cyclomatic complexity of Python code. Complex programs are generally more difficult to unit test and maintain. Thus, this tool can be useful for measuring how complex the code in the system is and where it should be improved. If the code complexity increases to more than seven, there is a need to refactor the code. Another tool called PyMetrics (http://sourceforge.net/projects/pymetrics/) also calculates the cyclomatic complexity, LoC and the percentage of comments in the code. At the time of writing, these tools have not been tested by the members of the group. However, most likely only one of these tools will be used in the project.

Snakefood (http://furius.ca/snakefood/) is a useful tool that can be used to generate dependency graphs from Python code. Controlling the dependencies between various parts of the code is a good way to increase the reusability of the code. In other words, the fewer dependencies a part of the code has, the greater the possibility to reuse that code in the future. CloneDigger (http://clonedigger.sourceforge.net/) is a tool that will be used to find similar or duplicate code in Python programs. This tool can be very useful for measuring how well the DRY principle is followed.

3.3.4 Defect tracking

Defect tracking is an important part of quality assurance. It also provides useful metrics for evaluating the quality of the system. In the project, Trac is used for defect tracking, as it is also used for project management. All of the stakeholders will be provided access to Trac so that they can report and monitor defects and change requests. Stakeholders can report defects, change requests and other issues as tickets in Trac. The user can choose the type of the ticket from the available options.
These tickets are then reviewed in the sprint planning meeting according to their severity and priority. In the sprint planning meeting the stakeholders also decide which tickets will be implemented in the upcoming sprint. The QA manager is responsible for the testing of defects and the implementation of changes, and delegates these tasks to testers and developers as necessary. The defects are then assigned to the appropriate developers. Table 3.1 lists the fields used in defect reports:

Table 3.1. List of fields used in defect reports

- ID: A unique identifier for the defect.
- Name: A short name for the defect that helps developers to quickly distinguish the defect from other defects.
- Description: A more detailed description of the defect and how it affects the system.
- Test case ID: The unique identifier of the test case that revealed the defect.
- Reproducibility: Describes the reproducibility of the defect. The options are as follows: always, sometimes, random, have not tried, unable to reproduce.
- Steps to reproduce: A description of the steps that help to reproduce the defect.

- Actual result: The result that was achieved with the step-by-step instructions.
- Expected result: The result that was expected when performing the step-by-step instructions.
- Component / module: The component or module that the defect concerns.
- Severity / impact: The severity of the defect.
- Priority: The priority of the defect. The higher the priority, the sooner the defect should be fixed.
- Time: The time when the defect was submitted.
- Reporter: The reporter of the defect.
- Build: The build version in which the defect was found.
- Environment: The environment that was used when the defect was found.
- Related requirements / use cases: A reference to a related requirement or use case that describes how the feature should work.
- Related defects: A reference to other similar or related defects.
- Related test cases: A reference to test cases that are not working because of this defect.
- Attachments: Attachments that can help in identifying or fixing the defect, for example screenshots, logs or videos.
- Additional information: Any other information for which there is no field.

3.3.5 Pair programming

Pair programming is a lightweight version of code reviews. It is useful for distributing knowledge between developers and for improving the quality of the system. Generally, senior or more experienced developers should be paired with less experienced developers. This way junior developers are able to quickly learn best practices and adapt to new technologies. In order to increase the distribution of knowledge of the system within the group, the pairs will be changed regularly. In pair programming one developer works as the so-called driver, while the other developer is the navigator. The driver is the person who writes the code. The navigator, on the other hand, reviews each line of code as it is typed. The two developers should switch roles frequently, for example every 30 minutes.
Because of the nature of the project, it may be difficult to organize sessions that several developers can attend at the same time. Nonetheless, pair programming will be used as much as possible. Developers may also agree among themselves to organize small pair programming sessions with one another. If possible, remote pair programming sessions may also be organized.

3.3.6 Refactoring

Refactoring is an important part of the evolution of a system, as it improves the system's internal quality attributes. Refactoring means modifying the internal structures of a system without changing its external behaviour. The main advantages of code refactoring are the improved readability,

maintainability, performance and extensibility that are gained with it. As the development process is iterative, it is important to refactor the code regularly. Refactoring is something that all of the developers are responsible for. The need to refactor a part of the code may be discovered in code reviews, in pair programming sessions or by an automated static code analysis tool. Automated unit tests ensure that refactoring does not break existing functionality.

3.3.7 Unit test and code coverage

There are code coverage analysis tools that can be used to measure which parts of a program are executed while running it. This can be useful when writing unit tests and for understanding which parts of a program are really tested. Code that is not covered by unit tests is not tested.

3.3.8 Quality checklist

A quality checklist will be available for the developers. This quality checklist lists different best practices and tests that the code has to conform to. The checklist can be used when reviewing code in code reviews or before committing changes to the repository.

3.3.9 Collecting feedback from the customer

The development process will use an iterative development model. The project will be split up into two implementation iterations, and these iterations will further be split up into shorter sprints. At the beginning of the iterations the customer will prioritize the quality goals. Also, at the end of each sprint an increment of functionality should be ready. This functionality can then be demonstrated to the customer. In addition, at the end of each iteration there will be an iteration demo at which some functionality will be demonstrated. This demonstration serves as an event where the customer can give feedback on the quality goals and implemented deliverables. Moreover, the customer or a customer representative can be invited to code reviews in order to give feedback on the code quality and documentation.
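Returning to the coverage measurement of section 3.3.7: dedicated coverage tools exist for this, but the underlying idea can be sketched with Python's standard-library trace module. The function being traced here is a made-up example with two branches:

```python
import trace


def grade(score):
    """Made-up function with two branches, to illustrate partial coverage."""
    if score >= 50:
        return "pass"
    return "fail"


tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(grade, 80)  # this run exercises only the "pass" branch

# counts maps (filename, line number) -> execution count; the line of the
# "fail" branch never appears, i.e. it is not covered by this run, just as
# code never reached by any unit test remains untested.
counts = tracer.results().counts
executed_lines = sorted(line for (_, line) in counts)
```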
3.3.10 Automated acceptance testing

The Selenium framework will be used for automated acceptance testing. It can also be used to test the web application with modern browsers, eliminating the need to do that testing manually.

3.4 Tracing quality goals to QA practices

This section describes how the quality goals and risks trace back to the quality assurance practices used.

Table 3.2. Summary of QA practices and quality goals

Practice | QG-1 | QG-2 | QG-3 | QG-4 | QG-5 | QG-6 | QG-7 | QG-8 | QG-9
Test case based testing
Exploratory testing
Automated unit testing
Performance and load testing

Regression testing
Code reviews
Document reviews
Coding and code documentation standards
Static code analysis methods
Defect tracking
Pair programming
Refactoring
Unit test coverage
Quality checklist
Collecting feedback from the customer
Automated acceptance testing

Table 3.3. Summary of QA practices and quality risks

Practice | QR-1 | QR-2 | QR-3 | QR-4 | QR-5
Test case based testing
Exploratory testing
Automated unit testing
Performance and load testing

Regression testing
Code reviews
Document reviews
Coding and code documentation standards
Static code analysis methods
Defect tracking
Pair programming
Refactoring
Unit test coverage
Quality checklist
Collecting feedback from the customer
Automated acceptance testing

4. Schedule

Quality assurance will be carried out throughout the project. The schedules will be updated as soon as more information is available, for example regarding the iteration meetings.

4.1 Iteration 1 schedule regarding QA

Date        Time   Event              Description   Participants / responsibles   Status
28.10.2009  13.00  Deadline: QA plan                                              Done

Unscheduled events

Event                   Description                              Participants
Code review session 1   The first code review in the iteration   All group members
Code review session 2          The second code review in the iteration                                          All group members
Document review session 1      The first document review session in the iteration                               All group members
Document review session 2      The second document review session in the iteration                              All group members
Exploratory testing sessions   Exploratory testing sessions                                                     Testers
Writing test cases for use cases   Writing test cases for the requirements that are implemented in the iteration   Testers
Writing Selenium test cases    Writing Selenium test cases that can be used for automated acceptance testing    Testers
Writing the QA report          Writing the QA report that is delivered at the end of the iteration              Testers

4.2 Iteration 2 schedule regarding QA

This chapter will be updated at the beginning of the I2 iteration.

5. Resources, tools and environments

This chapter describes the resources that are needed for quality assurance, the roles and responsibilities regarding quality assurance, and the different tools that are used for quality assurance.

5.1 Quality assurance roles and responsibilities

Table 5.1. List of roles and responsibilities regarding QA

Role                 Responsibilities
QA manager           Responsible for all quality assurance documents and updating them.
                     Assigning quality assurance related tasks to other group members.
                     Presenting the quality report at the iteration demo.
                     Overseeing that the quality assurance practices are performed.
Project manager      Steering the project according to the quality metrics.
Software architect   Taking the most important quality goals into consideration in the architecture.
                     Seeing that the most important parts of the system are tested sufficiently.
Testers              Writing test cases.
                     Performing exploratory testing and other test cases.
Developers           Following the quality assurance practices written in the QA plan.
                     Writing unit tests.
                     Reporting found defects.
                     Fixing defects.
Peer group           Performing peer testing on the system.

5.2 Tools

Table 5.2. List of tools needed for QA

Name               Description
unittest (PyUnit)  A unit testing framework for Python.
Pylint             A source code analyzer that looks for defects and signs of poor quality.
Clone Digger       A tool that can be used to find similar or duplicate code in Python programs.
Trac               A project management and defect tracking system.
Bitten             A continuous integration plugin for Trac.
RATS               A tool that can be used to scan Python source code in order to find security issues.
Snakefood          A tool that can be used to generate dependency graphs from Python code.
PyMetrics          A tool that calculates the cyclomatic complexity and lines of code in source code.
pyunitperf         A tool that is used for measuring the performance and scalability of functionality contained within existing PyUnit tests.
Selenium           A software testing framework for web applications.
JMeter             A load testing tool that can be used to measure the performance of web applications.
Google docs        A tool that enables collaborative editing of documents and spreadsheets.
Freemind           A mind mapping tool.
Allpairs           A tool that can be used to generate pairwise test cases given some predefined inputs.

5.3 Testing environments and test data

This chapter describes the test environments that will be used and how the test data is produced.
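To illustrate the pairwise coverage idea behind the Allpairs tool listed in Table 5.2, the following sketch checks which parameter-value pairs a set of test cases leaves unexercised. This is a hypothetical illustration of the concept, not the tool's actual algorithm; all function and parameter names are invented.

```python
from itertools import combinations, product

def uncovered_pairs(parameters, cases):
    """Return the parameter-value pairs not exercised by any test case.

    parameters: dict mapping parameter name -> list of possible values
    cases: list of dicts mapping parameter name -> chosen value
    """
    names = sorted(parameters)
    # Every pair of values from two different parameters should appear
    # together in at least one test case.
    needed = set()
    for a, b in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            needed.add(((a, va), (b, vb)))
    # Remove the pairs that some test case already covers.
    for case in cases:
        for a, b in combinations(names, 2):
            needed.discard(((a, case[a]), (b, case[b])))
    return needed

# Example: two browsers x two operating systems, one test case.
params = {"browser": ["firefox", "opera"], "os": ["linux", "windows"]}
cases = [{"browser": "firefox", "os": "linux"}]
# Three of the four browser/os pairs remain uncovered.
print(len(uncovered_pairs(params, cases)))  # → 3
```

A pairwise tool would use a report like this to add test cases until the set of uncovered pairs is empty, which typically needs far fewer cases than the full cartesian product of all parameter values.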

5.3.1 Testing environments

The continuous integration and automated tests will be executed on the server provided by the customer. The server runs Fedora Core 11, and the continuous integration software is complemented with different software metrics tools. The developers will also test the functionality on their own computers before committing changes to the repository.

The Noppa portal provides a test system that can be used during the project to test integration with that service. Developers are able to log in with the weblogin password that they have received from HUT IT services, so there is no need for separate Noppa test accounts. There are also ongoing discussions about getting access to a test version of WebOodi or getting test accounts for the live version.

5.3.2 Test data

Test data for the Noppa portal is available in its test version, so it is not necessary to write test data for that service. The Django web framework that is used also contains a fixture component that can be used to generate test data.

6. Deliverables

This chapter describes the deliverables that are produced as a result of quality assurance.

6.1 Quality assurance plan

The quality assurance plan describes the quality goals and quality assurance practices used in the project. It is updated throughout the project in order to reflect the current state of the system's quality.

6.2 Quality assurance report

The quality assurance report is used to communicate the status of the project's quality goals, as well as the quality of different parts of the system, to all stakeholders. Different types of quality metrics will be used to assess the status of quality. A quality assurance report will be produced at the end of both the I1 and I2 iterations. The report will describe what quality assurance practices have been used and how they correspond to the planned quality assurance practices.
The QA reports will also describe the quality status of different parts of the system with the help of a quality dashboard. In addition, the quality goals and their status are reported, as well as the relevant quality metrics described above.

6.3 Test cases

Test cases will be documented in a spreadsheet in Google docs, as will the test logs related to the test cases.

6.4 Test session charters

Test session charters will be produced as a result of exploratory testing sessions. The test session charters describe what is tested, the goals of the testing, the approach, and the exploration logs. Some test session charters will also be produced as a result of peer testing for the peer group.

6.5 Defect reports

Trac is used for tracking defect reports and change requests. A more detailed description of defect tracking can be found in section 3.2.4 Defect tracking.

6.6 Summary of peer testing results

During the project, peer testing will be performed for the peer group. The results of the peer testing must be written in a peer testing document that describes the activities and findings of the testing. The peer group will also produce a peer testing summary that describes the results of their testing of the CloudSizzle system.

6.7 Quality checklist

A general checklist that can be used in code reviews to review the code against.

7. Evaluation and feedback

The information gained from using the QA practices is used to steer the project in the right direction according to the most important quality goals. The quality status of the system will be evaluated after each sprint and also in the iteration demo at the end of each iteration. The quality status will then be presented in the quality report that is delivered, as well as in the iteration demo.

Interaction with the customer is important in order to achieve the quality goals. Thus, the quality goals, along with the metrics, will be presented to the customer during the iterations in order to get feedback. The quality practices will be evaluated according to how well they accomplish the quality goals. If the defined QA practices are not sufficient or do not accomplish the quality goals, they may have to be changed.

References