Quality Assurance version 1
Introduction

Quality assurance (QA) is a standardised method that ensures that everything works as it was intended to work and looks as it was intended to look. It should force all stakeholders (agency, us, technical partners etc.) to focus on the user of the site, what we'd like them to achieve and how.

Thorough QA and testing should be done throughout the production life cycle. No website should go live to the public without having gone through the mandatory testing processes as specified in this document. It is worth noting that the objective of QA and testing is not to eliminate all errors at all times; this is an impossible task. Rather, it is to ensure that the major risks are mitigated and that the website performs as expected for the intended target audience.

We recommend the process in this document as a baseline for testing a website. Depending on your project management methodology (waterfall, agile etc.) it can be approached either as a linear or an iterative process.
1. Planning, budgeting & roles

It is essential that QA and testing are embedded in the project plan for the website from the outset, in terms of timing, budget allocation and roles. Failure to plan correctly may result in deadlines being missed or websites going live with errors that could undermine the investment made.

1.1 Planning and budgeting

As testing is the last phase before a website goes live, this area always comes under pressure when other elements of the project overrun, especially when there is a fixed deadline. We must maintain this testing period at all times to ensure the investment made is not compromised by errors or a poor user experience.

There are no hard and fast rules about the amount of testing that should be done on a website. This varies according to the value of the site to the business: an ecommerce site would go through a very rigorous process, as security and user experience are of paramount importance, whereas a microsite to support a tactical event would require less testing. Nevertheless, we would envisage that at least 10% of the total time spent on the project should be allocated to QA and testing. It follows that this should also equate to around 10% of the budget for the website.

1.2 Roles

All stakeholders should understand the value of testing and their role in the process. The agency should lead this process using this document as a guide. They should ensure that sufficient time is allocated and booked into the diaries of the correct stakeholders within SABMiller to review and feed back, so that the website is delivered on time and meets expectations.

Checklist

1.1 At least 10% of the time of a project is dedicated to QA & testing
1.2 At least 10% of the budget of a project is dedicated to QA & testing
1.3 Allocate specific testing roles to all stakeholders and schedule them in diaries
Further reading

1. The Open Web Application Security Project
2. Test plans

Test plans are documents that systematically test defined variables and situations with the aim of delivering a website that is fit for purpose and meets our and the audience's expectations. They aim to replicate real-life situations with the website and test performance based on objective measures, removing subjectivity from the testing process.

2.1 Creating test plans

Test plans are created from three main inputs:

1. Functional specification - what the site should do according to the brief (e.g. collect data, display video etc.)
2. Audience - who will interact with the site (e.g. what are their capabilities, what technology will they be using etc.)
3. Objectives of the website - what the site needs to achieve for the business (e.g. reposition brand, increase awareness etc.)

2.2 Test cases

From these inputs a series of test cases should be created that interrogate the basics of the website (e.g. do all the links work?) and its critical functions (e.g. does the sign-up form work?). A test case should set some criteria for the environment, then state the inputs and the expected outputs. Below is an example of a test case for link checking[1]:

1. In Windows 8, load Internet Explorer 8. Clear browser caches, and clear history.
2. In the URL field type in testurl.com and hit the ENTER key.
3. On the home page, for each link:
   a. examine the link text
   b. click on the link, and verify that the link works
   c. verify that the new page is the correct page, as indicated by the link text
   d. verify that the little black arrow correctly indicates the current page
   e. click on the browser's BACK button
   f. verify that the link's colour changed to the vlink colour
4. Once all links on the home page have been tested, click on the first link in the left navigation column.
5. For this page, repeat steps 3a - 3f.
6. Repeat steps 4 and 5 until every page has been tested.
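The basic link-walking steps above can also be partly automated. The following Python sketch is illustrative only: the URL, page contents and fetch wrapper are assumptions, not part of the cited test case, and the fetch function is injected so the checker can be exercised against canned HTML as well as a live site.

```python
# Minimal automated link check: fetch a page, extract its links,
# verify each resolves. Illustrative sketch, not a full crawler.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(start_url, fetch):
    """Fetch start_url, extract its links and verify each one resolves.

    fetch(url) should return (status_code, body); in real use it would
    wrap urllib.request.urlopen or a similar HTTP client.
    Returns (url, status) pairs for links that did not return 200.
    """
    status, body = fetch(start_url)
    parser = LinkExtractor()
    parser.feed(body)
    broken = []
    for href in parser.links:
        url = urljoin(start_url, href)
        link_status, _ = fetch(url)
        if link_status != 200:
            broken.append((url, link_status))
    return broken
```

A script like this only covers step 3b of the manual test case; visual checks such as the current-page indicator and visited-link colour still need a human or a browser-automation tool such as Selenium.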
When creating test cases it is crucial that you fully understand the user, especially in regards to how the website will be accessed (device, operating system, browser etc.) and the user's digital competency and expectations. Refer back to the research done when planning the site and overlay this onto the test cases to ensure that the intended user's exact persona is catered for during testing.

Checklist

2.1 Create a test plan for the website
2.2 Test cases should replicate basic and critical functions of the website
2.3 All test cases should incorporate the user's means of access (device, browser, o/s etc.) and their digital competency / expectations

References

1. http://www.philosophe.com/testing/testcases/

Further reading

1. Applied Software Project Management - Test plans and test cases
2. IEEE 829
3. Types of testing

There are many different types of testing, and each type should be used in conjunction with others to fit exactly the requirements of the particular website. Below we have explained what we expect as a base level of mandatory testing for any website to guarantee it meets the minimum requirements with regards to accessibility, usability and security.

3.1 Capture environmental factors

When completing any type of testing you should also record browser types and versions, operating system, machine platform, connection speed etc. In short, record any parameter that would affect the ability to reproduce the results or could aid in troubleshooting any defects found by testing.

3.2 Device

The website should be checked on all significant devices that the audience might use to access the site. This is to ensure the site displays and functions across desktops, tablets and mobiles (and, to a lesser degree, large screens such as televisions, which are sometimes used for displaying sites).

3.3 Browser compatibility

The website must be checked for compatibility (function and display) across all supported browsers. For mobile devices it is recommended to test on the actual device, as opposed to a simulator, to ensure proper performance. You can also make use of online services to help with cross-browser testing:

Browser Stack - http://www.browserstack.com/start
Browsershots - http://browsershots.org/

3.4 Performance

3.4.1 Unit

Automated testing using unit tests for both front- and back-end code will greatly reduce your testing time and also limit the chance of adding regressions during maintenance or updates to the site. The unit tests can be run manually or as part of the build/deploy process. The tool you use for your back-end code will differ depending on the language you use, but for the front-end code you could use tools like Selenium [1] or QUnit [2].
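As an illustration, a back-end unit test might look like the following Python sketch. The sign-up validator and its rules are hypothetical; the point is that each critical function gets small, repeatable checks that can run on every build.

```python
# A hypothetical back-end function under test: validation rules for a
# sign-up form. The rules here are illustrative, not a specification.
import unittest

def validate_signup(email, age):
    """Return a list of error strings for a sign-up submission."""
    errors = []
    if "@" not in email:
        errors.append("invalid email")
    if age < 18:  # assumed minimum age; the real threshold varies by market
        errors.append("under age")
    return errors

class SignupValidationTest(unittest.TestCase):
    def test_valid_submission_passes(self):
        self.assertEqual(validate_signup("user@example.com", 25), [])

    def test_invalid_submission_is_rejected(self):
        self.assertEqual(
            validate_signup("not-an-email", 16),
            ["invalid email", "under age"],
        )
```

Tests like these can be run manually (for example via `python -m unittest`) or wired into the build/deploy process so regressions are caught before release.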
3.4.2 Load

In load testing it is recommended to gradually increase the number of virtual users, both at the beginning of and throughout the test. Increasing the load gradually means that the site's performance can be measured at different load levels. It is also vital in identifying performance bottlenecks and the breaking points of an application. During load testing you should emulate typical user behaviour to see how well your website handles large numbers of users. There are many load testing tools available, like Load Impact [3] or JMeter [4].

3.5 Accessibility

Accessibility is the practice of making sites available and usable to people of all abilities / disabilities across all devices and platforms. In many countries there is now legislation which lays out the minimum legal requirements for websites. Your local or regional legal counsel will be able to advise if you are unsure. How to code for good accessibility is covered in the Web Development section, and this area is also covered in more depth in the User Experience section.

3.6 Code validation

As the project progresses code should be validated as it is written. However, it should also have one final run through the W3C Mark-up Validation Service (http://validator.w3.org/about.html) to remove as many errors as possible.

3.7 Security

It is essential that websites are designed, coded and hosted to operate at a level of security that is consistent with the potential harm that could result from the loss, inaccuracy, alteration, unavailability or misuse of the data and resources that the site uses, controls and protects. As a minimum requirement, all websites which collect personal information need to be security tested as part of their introduction, or when there are significant changes to the design of the site.

The Open Web Application Security Project (OWASP) has compiled the top 10 website and application security risks. As a minimum we need to ensure the website is protected against these risks:

1. Injection
Injection flaws, such as SQL, OS and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing unauthorised data.

2. Cross-site scripting (XSS)
XSS flaws occur when an application takes untrusted data and sends it to a web browser without proper validation and escaping. XSS allows attackers to execute scripts in the victim's browser, which can hijack user sessions, deface websites or redirect the user to malicious sites.
3. Broken authentication and session management
Application functions relating to authentication and session management are often not implemented correctly. This can allow attackers to compromise passwords, keys or session tokens, or to exploit other implementation flaws to assume other users' identities.

4. Insecure direct object references
A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorised data.

5. Cross-site request forgery (CSRF)
A CSRF attack forces a logged-on victim's browser to send a forged HTTP request, including the victim's session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim's browser to generate requests the vulnerable application thinks are legitimate requests from the victim.

6. Security misconfiguration
Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server and platform. All these settings should be defined, implemented and maintained, because many are not shipped with secure defaults. This includes keeping all software up to date, including all code libraries used by the application. See the Hosting section for more details on how the environment should be set up.

7. Insecure cryptographic storage
Many web applications do not properly protect sensitive data (e.g. credit cards, SSNs and authentication credentials) with appropriate encryption or hashing. Attackers may steal or modify such weakly protected data to conduct identity theft, credit card fraud and other crimes.

8. Failure to restrict URL access
Many web applications check URL access rights before rendering protected links and buttons.
However, applications need to perform similar access control checks each time these pages are accessed, or attackers will be able to forge URLs to access these hidden pages anyway.

9. Insufficient transport layer protection
Applications frequently fail to authenticate, encrypt and protect the confidentiality and integrity of sensitive network traffic. When they do, they sometimes support weak algorithms, use expired or invalid certificates, or simply use them incorrectly.
10. Unvalidated redirects and forwards
Web applications frequently redirect and forward users to other pages and websites, using untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorised pages.

There is a set of paid and free tools that you can use to test your site against the above potential weaknesses: https://www.owasp.org/index.php/appendix_a:_testing_tools

If your website is to have an ecommerce function or hold any sensitive data (national identity numbers, passport numbers, credit card details etc.) then the potential risks associated with the website are much higher. It is essential that you inform your local or regional security officer at the very start of the project so that the additional checks and security that will be needed can be put in place. They need to be involved in the project from the outset to ensure the website conforms to the standards required to minimise the increased risks.

3.8 Regression

Regression testing is used when significant changes (new code, patches, configurations etc.) have been added to a website. It aims to test the previously existing code and functions to ensure the new changes have not caused any new errors or issues. Typically you would run tests similar to the original ones completed on the website to see if any new faults occur after the changes.

3.9 User acceptance

This should be the final stage of testing before the website goes live, and the objective is to obtain confirmation from all stakeholders that the website meets mutually agreed-upon requirements. Essentially, it is to answer the question: is it fit for the business? A link to the website (hosted in a staging environment) is usually sent to the relevant stakeholders, and they are asked to assess it against the functional specification, the objectives and their expectations.
It is common for the agency to set the stakeholders real-life tasks to replicate the audience's use of the website. The purpose of User Acceptance Testing (UAT) is not usually to locate minor issues (e.g. spelling errors) but rather to test the major functional aspects of the site from a technical and usability perspective. When all stakeholders have approved the website then UAT is complete. UAT is usually the final set of tests before a website is moved from the staging environment to live, and so released to the public.
Checklist

3.1 Device testing carried out on all significant devices for the audience
3.2 Cross-browser compatibility testing carried out on all browsers the audience might use
3.3 Performance unit and load testing completed
3.4 Accessibility must meet minimum legal requirements for your country
3.5 Code should be run through a final W3C validation
3.6 Security - if your website is collecting personal information then inform your local or regional security officer, who will arrange for the appropriate testing before the website goes live
3.7 Security - the website must pass all 10 of the common security issues
3.8 Security - if your website has ecommerce functions or collects sensitive data, inform your local or regional security officer at the start of the project
3.9 Regression tests should be carried out if there has been a major code or functional upgrade

Best Practice

3.10 UAT should be carried out by all relevant stakeholders to ensure the website is fit for purpose

References

1. http://seleniumhq.org/
2. http://qunitjs.com
3. http://loadimpact.com/
4. http://jmeter.apache.org/
Further reading

General
1. Testing tools
2. Links about web usability
3. Usability and web design

Browser compatibility
1. A dozen cross-browser testing tools

Performance
1. JUnit
2. PHPUnit
3. NUnit
4. Performance testing traps
5. Performance testing tools
6. Performance test tools

Accessibility
1. Web content accessibility guidelines
2. Legally required web accessibility
3. Web accessibility testing

Code validation
1. Markup Validation Service
2. 10 reasons why your code won't validate and how to fix it

Security
1. Penetration testing guide
2. Security test tools
3. Software security testing

Regression
1. Regression testing tools and methods
2. Regression testing

UAT
1. User acceptance testing - a business analyst's perspective
2. User acceptance testing
3. What is user acceptance testing?
4. Issues and feedback

During the testing process it is crucial that all issues and feedback are captured, categorised and acted upon, where appropriate. The approach to capture and assessment needs to be thorough and systematic to ensure that the website is tested suitably.

4.1 Logging issues and feedback

All issues and feedback should be logged in a database of some kind. This could be something as simple as an Excel spreadsheet, or you might be using project management software such as Basecamp, which has these functions built in. Each item should be assigned properties such as the priority and scope (in or out), as well as recording attributes such as description, error message, affected functionality etc. In addition, you should assign and track ownership of the problem and the progress made towards resolution. All entries need to be reviewed, commented on and, if appropriate, acted upon so they are resolved to all stakeholders' satisfaction.

4.2 Prioritising issues and feedback

Sometimes the number of issues and amount of feedback can, initially at least, be overwhelming. Therefore, a framework is usually required in order to categorise and prioritise them and, especially, to remove subjectivity from the process. For example, a brand manager may think that a slight colour deviation in the logo is of paramount importance. However, if the website's database remains open and consumer data exposed, then this has much more of an impact for the brand. Below is a basic framework of how to categorise issues and feedback, with some examples[1]:

Critical
- infrastructure has failed (a server has crashed, the network is down, etc.)
- functionality critical to the purpose of the website is broken, such as the search or commerce engine on a commerce site
- security of data is compromised
- the site does not function on a desired device

High
- a major functionality is broken or misbehaving
- one or more pages is missing
- a link on a major page is broken
- a graphic on a major page is missing

Medium
- data transfer problems (like an include file error)
- browser inconsistencies, such as table rendering or protocol handling
- page formatting problems, including slow pages and graphics
- broken links on minor pages
- user interface problems (users don't understand which button to click to accomplish an action, or don't understand the navigation in a subsection, etc.)

Low
- display issues, like font inconsistencies or colour choice
- text issues, like typos, word choice or grammar mistakes
- page layout issues, like alignment or text spacing

There is a temptation when dealing with issues and feedback to action the easy and simple ones first, as this makes the list shorter. This should be resisted, and each problem should be dealt with according to priority and logic.

Checklist

4.1 Create a log for all issues and feedback
4.2 Categorise and prioritise issues and feedback
4.3 Action issues and feedback in priority order
4.4 Work through the log until all issues and feedback are resolved and approved by all stakeholders

References

1. http://philosophe.com/qa/priority/

Further reading

1. How to prioritise usability problems
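The issue log and priority framework described in section 4 can be sketched in code. The fields and priority labels below are illustrative, not a prescribed schema; the key point is that open issues are always returned in strict priority order, never in order of how easy they are to fix.

```python
# A minimal issue log implementing the Critical/High/Medium/Low framework.
# Field names and labels are illustrative assumptions.
from dataclasses import dataclass

PRIORITY = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Issue:
    title: str
    priority: str            # "critical" | "high" | "medium" | "low"
    owner: str = "unassigned"
    resolved: bool = False

def next_actions(log):
    """Open issues in strict priority order, regardless of effort."""
    open_issues = [issue for issue in log if not issue.resolved]
    return sorted(open_issues, key=lambda issue: PRIORITY[issue.priority])
```

Because the sort is stable, issues of equal priority keep their logged order, which gives a simple, objective queue for checklist item 4.3.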
5. Live

When all issues and feedback have been resolved to all stakeholders' satisfaction, the website can be set live for consumers to interact with.

5.1 Deployment

If the site is transferred from a staging environment then, in real terms, little should change. However, at the point a new website is set live it is prudent to re-check the main pages and functions of the site. At this point any housekeeping URLs, for example those used to clear test submissions from a database, must be disabled.

5.2 Ongoing monitoring and maintenance

As the website is tested in the live environment, it is crucial that all user-initiated issues and feedback are captured, prioritised and resolved. There is no substitute for the real-life environment and the voice of the actual user, so feedback should be encouraged, acted upon and reflected back to the user where possible. Use a system and log similar to those used in the testing phase to guarantee that this is worked through systematically.

Additionally, the website's performance should be monitored and periodic tests should be carried out. The nature and frequency of these tests will depend on the website. Repeating the pre-launch tests with new and more stringent parameters will improve performance and user experience by making marginal improvements over time.

All security patches and any other relevant software updates to the website must be made as soon as possible to ensure the highest level of security and performance.

Checklist

5.1 Re-check the website's main pages and functions once live
5.2 Disable all housekeeping URLs and other test-related functions
5.3 Continue to gather issues and feedback in a log, prioritise and amend as appropriate
5.4 Put performance monitoring in place to enable ongoing analysis
5.5 Carry out periodic tests of website functionality and performance to make marginal gains

Best Practice

5.6 Ensure all security patches and other updates are made in a timely fashion
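The periodic checks recommended in 5.4 and 5.5 can be started with something as simple as a scheduled probe of the site's key pages. This Python sketch uses placeholder URLs and an injectable opener (so it can be exercised without a network); a production set-up would more likely use a dedicated monitoring service, with this as a minimal fallback.

```python
# Minimal availability/response-time probe for key pages of a live site.
# KEY_PAGES URLs are placeholders, not real endpoints.
import time
from urllib.request import urlopen

def probe(url, timeout=10, opener=urlopen):
    """Return (ok, seconds) for one URL.

    opener is injectable for testing; in real use it is urlopen.
    ok is True only for an HTTP 200 response within the timeout.
    """
    start = time.monotonic()
    try:
        with opener(url, timeout=timeout) as response:
            ok = response.status == 200
    except OSError:          # covers URLError, timeouts, DNS failures
        ok = False
    return ok, time.monotonic() - start

KEY_PAGES = [
    "https://www.example.com/",
    "https://www.example.com/signup",
]

def run_checks(pages=KEY_PAGES, opener=urlopen):
    """Probe every key page; returns {url: (ok, seconds)}."""
    return {url: probe(url, opener=opener) for url in pages}
```

Run on a schedule (e.g. via cron), the recorded response times give the trend data needed to spot degradation and make the marginal gains described in 5.5.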