Test Management Overview
Fabrizio Morando, Application Development Manager
Tuesday, 20 November 2012
Challenges to Software Quality
Evolution of tools and processes: from C/C++, VB, COM and Waterfall, through Java/C#/managed code, Web Services and Agile/TDD, to RIA, Cloud, Lean, UI Automation, SketchFlow and Hyper-V.
Sources for Test Cases by Test Type (from business focus down to technical focus, where code is produced):
- Acceptance Testing validates the Business Requirements and User Requirements
- System Testing validates the System Requirements and Technical Architecture
- Integration Testing validates the Hardware Specs and Software Specs
- Unit Testing and Regression Testing validate the Detailed Design and Dev Standards
Reference: Learning Tree International Course 316, Software Testing and Inspection Methods
Test Definitions
Owned by the DEVELOPMENT team (white-box testing):
- Unit Tests (unit tests created by the individual developer)
- Integration Tests (functional tests to verify the complete integration of the system)
Owned by the TEST team (black-box testing):
- Functional Tests
  - Smoke Test (or System Test, to verify the testability of the system)
  - Regression Test (functional tests not part of the current release)
  - Functional Test (functional tests that are part of the current release)
- Performance Tests (non-functional)
  - Stability Test (Load Test, Stress Test)
  - Scalability Test (TCA, Transaction Cost Analysis techniques)
Black-box Test Management
Prerequisites:
- Understand the system: functional and non-functional requirements.
- Phased solution: understand the multiple release phases in order to plan the right test strategy.
Strongly recommended:
- Start test design during analysis/development.
- Plan the test schedule in detail, for better reproducibility of the test sessions.
Functional Test Management
Functional test management activities aim to answer the following questions:
- What (which areas to test)
- How (how to test)
- When (when to schedule the sessions)
- Who (who should test)
Functional Test Management
These are the main phases of functional testing activities:
- Test Planning (when)
- Test Design (what)
- Test Development (how)
- Test Execution (who)
Test Planning
- Determine the design method
- Determine the test tools (possibly an in-house test harness)
- Determine the test engineers
- Determine the quality gates: level of test completeness, adequate quality levels
In SCRUM, quality gates are called the DoD (Definition of Done).
Test Design
Test case prioritization:
- Based on criticality (risk)
- Based on frequency of use (usage)
- Based on complexity
Defect priority: give more weight to
- Defects that emerge from the BVTs
- Defects addressed by patching or hotfixing
A simple scoring sketch is shown below.
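To make the prioritization concrete, here is a minimal C# sketch (not from the deck) that ranks test cases by a weighted score of risk, usage and complexity; the 1-5 scale and the weights are illustrative assumptions only.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical prioritization sketch: the weights below are assumptions,
// not values prescribed by the deck.
public class TestCaseInfo
{
    public string Name;
    public int Risk;        // 1 (low) .. 5 (high criticality)
    public int Usage;       // 1 (rarely used) .. 5 (used constantly)
    public int Complexity;  // 1 (trivial) .. 5 (very complex)
}

public static class TestPrioritizer
{
    // Higher score = schedule the test earlier and run it more often.
    public static IEnumerable<TestCaseInfo> Rank(IEnumerable<TestCaseInfo> cases)
    {
        return cases.OrderByDescending(c => 0.5 * c.Risk + 0.3 * c.Usage + 0.2 * c.Complexity);
    }
}
```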
Test Development
- The execution sequences defined in the test sets should drive the organization of the test data.
- Data preparation also belongs to the test development phase: organizing the data for the various test cases.
- The test development phase may include developing the test cases themselves (when not all of them are executed manually).
Test Execution
- Use test procedure templates
- Implement automated builds and smoke tests
- Implement a stable regression test set
The correct order of a test execution session is:
- Automated build + packaging
- System setup from the build
- Smoke tests (subset of the regression/system tests)
- Regression tests (subset of the functional tests, critical areas)
- Functional tests
Manage the defect tracking system throughout.
Smoke Tests (Basic Verification Tests)
On every environment, verify:
- Database connectivity
- Web application response
- Service activation
- Presence of required files
Tools:
- Visual Studio Unit Tests (Microsoft)
- Ant / Maven (open source)
- Custom scripts
A minimal sketch of such checks follows this list.
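A minimal sketch of such basic verification tests, written as Visual Studio unit tests (MSTest). The connection string, URL and file path are hypothetical placeholders for the environment under test.

```csharp
using System.Data.SqlClient;
using System.IO;
using System.Net;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SmokeTests
{
    // Hypothetical endpoints: replace with the values for the environment under test.
    private const string ConnectionString = "Server=dbserver;Database=AppDb;Integrated Security=true";
    private const string HomePageUrl = "http://webserver/app/default.aspx";
    private const string RequiredFile = @"C:\App\web.config";

    [TestMethod]
    public void Database_Is_Reachable()
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();   // throws (and fails the test) if the database is unreachable
        }
    }

    [TestMethod]
    public void WebApp_Responds()
    {
        var request = (HttpWebRequest)WebRequest.Create(HomePageUrl);
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
        }
    }

    [TestMethod]
    public void Required_Files_Are_Present()
    {
        Assert.IsTrue(File.Exists(RequiredFile), RequiredFile + " is missing");
    }
}
```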
Non-Functional Testing
Performance improvement cycle:
- Define the requirements
- (Develop) Modify the application
- Run the performance tests
- Analyze the collected data
- Identify the bottlenecks
- Does it meet the requirements? If no, repeat the cycle; if yes, done.
Load tests
They are divided into:
- Stability tests: aim to highlight any critical issues in the system
- Stress tests: aim to establish the limits of the system by taking resources away from the system itself
- Capacity tests: verify the scalability of the systems (TCA, Transaction Cost Analysis)
- Performance tests: measure relative and/or absolute performance
Tools (besides Visual Studio): ACT, Application Center Test (web apps); LoadRunner; WinRunner
Load test
The primary goal of a load test is to simulate many users accessing a server at the same time.
By adding web tests to a load test you can:
- Simulate multiple users opening simultaneous connections to a server
- Make multiple HTTP requests
- Set other properties that apply to individual web tests
A load test can also run unit tests.
It collects performance metrics on client and server:
- The Remote Registry service must be running
- The account must be in the Performance Monitor Users group
Load testing really requires a course of its own.
Generating load
- Rig: a group of agents coordinated by a controller
- Agent: runs the tests
- Controller: administers the agents and collects the test results
- Client: used to develop the tests, select the tests to run, and view the test results
It's Time For A Positive Change
Test Automation Pyramid (also labelled Business Layer / Business Logic). Source: Mike Cohn, 2010
Visual Studio Test Capabilities, spanning generalist to specialist roles:
- Coded UI test
- Test runner
- Web performance test
- Unit testing
- Load test
- Test case management
- Virtual lab management
- Data diagnostic adapters (video, action log, event log, etc.)
- Team Foundation Server with reporting (bugs, requirements, user stories, source control, build)
Visual Studio Test Types:
- Unit Test
- Database Unit Test
- Web Performance Test
- Load Test
- Generic Test
- Ordered Test
- Manual Test (Test Case)
- Coded UI Test
Some definitions
- A unit test is a piece of code (usually a method) that invokes another piece of code and checks the correctness of some assumptions afterward. If the assumptions turn out to be wrong, the unit test has failed.
- A unit is a method or function.
- When we test something, we refer to the thing we're testing as the system under test (SUT).
- An external dependency is an object in your system that your code under test interacts with, and over which you have no control.
- A stub is a controllable replacement for an existing dependency in the system.
- A mock object is a fake object in the system that decides whether the unit test has passed or failed. It does so by verifying whether the object under test interacted as expected with the fake object. There's usually no more than one mock per test.
Source: Roy Osherove, 2009
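As an illustration of these terms (not part of the original deck), here is a minimal C#/MSTest sketch: OrderService is the SUT, IMailSender is the external dependency, and the hand-rolled fake acts as a mock because the test's assertion is made against it. All class names are hypothetical.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// External dependency the SUT talks to.
public interface IMailSender { void Send(string to, string body); }

// System under test: notifies a customer when an order is placed.
public class OrderService
{
    private readonly IMailSender _mail;
    public OrderService(IMailSender mail) { _mail = mail; }

    public void PlaceOrder(string customerEmail)
    {
        // ... business logic would go here ...
        _mail.Send(customerEmail, "Order confirmed");
    }
}

// Hand-rolled fake: it records the interaction so the test can verify it.
public class FakeMailSender : IMailSender
{
    public string LastRecipient;
    public void Send(string to, string body) { LastRecipient = to; }
}

[TestClass]
public class OrderServiceTests
{
    [TestMethod]
    public void PlaceOrder_Sends_Confirmation_To_Customer()
    {
        var mailFake = new FakeMailSender();      // replaces the external dependency
        var sut = new OrderService(mailFake);     // system under test

        sut.PlaceOrder("customer@example.com");

        // The assertion is against the fake, so here it plays the role of a mock, not a stub.
        Assert.AreEqual("customer@example.com", mailFake.LastRecipient);
    }
}
```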
Unit testing: definitions & suggestions, creating tests, data-driven tests, test impact analysis, mocking, database tests
What is / isn't a unit test?
A unit test is:
- Simple, and quick to run.
- Independent and self-contained: it should not rely on any previous test results or any particular execution order, and it doesn't cross any unit-test boundaries.
- Easy to maintain over time, documented, and not archaic.
- Lacking variability: a valid unit test should always consistently pass.
- Positive or negative: unit tests can validate exceptions and error conditions as well as expected results and states.
A unit test is not:
- A test that validates integration (your database and your class library).
- A test that interacts with the UI, network resources, or the file system.
- Something that takes a long while to run.
- An end-to-end test of a user scenario.
- A performance, stress, load, security, or other advanced test.
- Complex.
Web test
Uses a recorder for test creation:
- Records web requests using Internet Explorer
- Analyzes the HTML traffic: each response can be validated for expected behavior
- Produces an ordered series of HTTP requests against a target web application
Support for:
- HTTPS
- AJAX
- Correlation helper
- Data seeding
Beware:
- Does not run through a browser
- Not a UI automation tool
Web Test Extensibility
- Web test plug-ins: hook in before and after a test or a request
- Custom extraction rules: grab custom data from the response and manipulate that data
- Custom validation rules: set additional requirements for a web test to pass
- Extract web test: build reusable pieces
A sketch of a custom validation rule follows this list.
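A hedged sketch of a custom validation rule, assuming the Microsoft.VisualStudio.TestTools.WebTesting API; the rule name and the marker property are our own illustrative choices.

```csharp
using Microsoft.VisualStudio.TestTools.WebTesting;

// Illustrative custom validation rule: the web test request passes only when the
// response body contains a given marker string.
public class BodyContainsMarkerRule : ValidationRule
{
    // Value configured on the rule when it is attached to a request.
    public string Marker { get; set; }

    public override void Validate(object sender, ValidationEventArgs e)
    {
        e.IsValid = e.Response.BodyString != null
                    && e.Response.BodyString.Contains(Marker);

        if (!e.IsValid)
        {
            e.Message = "Expected marker '" + Marker + "' was not found in the response body.";
        }
    }
}
```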
Test Impact Analysis
Drives quality upstream by catching bugs earlier in the life cycle, before they get into the system.
- Developers know the right tests impacted by their code change
- Testers know the right tests to verify for a given build
- Enhances the development process by requiring verification of impacted automated tests before developers check their changes into the source control system
Analyze Impacted Tests During Build
- Enable Test Impact Analysis
- The build report lists impacted tests and the associated code changes
Code Coverage
- Lets you verify which code paths are exercised by the test cases
- Used to measure the effectiveness of testing
- Allows drill-down into the data: assemblies, classes, methods, individual lines of code
- Especially useful in combination with unit testing
- The assemblies must be instrumented:
  - Visual Studio does it automatically when the Code Coverage options are selected
  - From the command line you must use: vsinstr /coverage MyAssembly.dll
See the command-line sketch below.
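For reference, a sketch of the equivalent command-line sequence with the VS 2010-era profiling tools; the file names are illustrative, and the exact switches may vary by Visual Studio version.

```
REM Instrument the assembly for coverage (as on the slide)
vsinstr /coverage MyAssembly.dll

REM Start the coverage monitor, run the tests, then stop the monitor
vsperfcmd /start:coverage /output:TestRun.coverage
mstest /testcontainer:MyTests.dll
vsperfcmd /shutdown
```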
Code Coverage is a quantitative measure of how much of the application has been tested: the percentage of the application covered during testing, reported as covered blocks (% blocks) versus not-covered blocks (% blocks).
What happens behind the scenes
What Visual Studio does when it runs tests with Code Coverage:
If code coverage requires in-place instrumentation:
- Instrument the assemblies
- Re-sign them if necessary
- Create the deployment folder
- Copy the files
- Run any setup scripts
- Run the tests
- Run the cleanup scripts
If code coverage requires instrumentation only after deployment:
- Create the deployment folder
- Copy the files
- Instrument the assemblies
- Re-sign them if necessary
- Run any setup scripts
- Run the tests
- Run the cleanup scripts
Data-driven testing
Notes on data-driven tests
- The data is loaded into memory so as not to slow down the tests: avoid too much data
- The test framework loads only the required columns
- In any case, the best approach is to use dedicated views, and to decide the number of rows to use in a data-driven test based on the individual test and its performance
A minimal MSTest sketch follows this list.
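A minimal data-driven MSTest sketch, assuming a CSV data source; the file name, column names and the method under test are hypothetical.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DiscountTests
{
    // Populated by the MSTest runner; gives access to the current data row.
    public TestContext TestContext { get; set; }

    // Hypothetical data source: a CSV file with columns "Amount" and "ExpectedDiscount".
    [TestMethod]
    [DeploymentItem("DiscountCases.csv")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                @"|DataDirectory|\DiscountCases.csv",
                "DiscountCases#csv",
                DataAccessMethod.Sequential)]
    public void Discount_Is_Computed_For_Each_Row()
    {
        decimal amount = decimal.Parse(TestContext.DataRow["Amount"].ToString());
        decimal expected = decimal.Parse(TestContext.DataRow["ExpectedDiscount"].ToString());

        decimal actual = ComputeDiscount(amount);   // hypothetical method under test

        Assert.AreEqual(expected, actual);
    }

    private static decimal ComputeDiscount(decimal amount)
    {
        return amount >= 100m ? amount * 0.10m : 0m;
    }
}
```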
Load testing: web tests, load tests
Load Test
- Simulates many users accessing a server at the same time
- Is a series of web tests or unit tests which run under multiple simulated users over a period of time
- A scenario is the container within a load test where you specify the load pattern, test mix, browser mix, and network mix
- Allows the simulation of complex, realistic workloads
Load tests can be used in different testing scenarios: stress testing, smoke testing, performance testing, capacity planning.
A small load test plug-in sketch follows this list.
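As a small illustration of load test extensibility (not covered verbatim in the deck), a load test plug-in using the Microsoft.VisualStudio.TestTools.LoadTesting API; the class name and what it logs are our own choices. The plug-in is attached to a .loadtest file through its plug-ins list.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.LoadTesting;

// Illustrative plug-in: logs when the load test run starts and finishes.
public class RunBoundaryLoggerPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        loadTest.LoadTestStarting += (sender, e) =>
            Console.WriteLine("Load test starting at {0:T}", DateTime.Now);

        loadTest.LoadTestFinished += (sender, e) =>
            Console.WriteLine("Load test finished at {0:T}", DateTime.Now);
    }
}
```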
Load Testing: Results and Reports
Manual Tests
Manage your tests in MTM (Microsoft Test Manager)
Feedback loop
Select what to collect
Test Planning & Strategies: test strategies, master plan, planning iterations
Release planning flow (Inception, Construction, Release, with multiple iterations):
- Release Planning (test strategy, inception): set the test strategy, define "done, done", establish environments, review configurations
- Iteration Planning: add stories to the plan, define acceptance, select regression tests, define test settings, create plans, generate data
- Iteration Execution (construction): author tests, run tests, file bugs, verify fixes, automate tests
- Iteration Retrospective: update the master plan, identify product debt, identify test debt, select tests for automation
- Release Iteration (release): regression testing, release doneness testing, release sign-off
Test strategy: a matrix mapping unit testing, manual testing, regression testing and performance testing across Iteration 1, Iteration 2 and the release iteration, for Features A through D.
Master plan: holds the doneness tests (Done Test 1, Done Test 2), the regression tests (Area 1, Area 2) and the critical tests (Area 1, Area 2).
Iteration 1 planning game: doneness, regression and critical tests are pulled from the master plan into the Iteration 1 test plan, which contains User Story 1 (Acceptance Tests 1 and 2, Done Tests 1 and 2), User Story 2 (Acceptance Tests 3 and 4, Done Tests 1 and 2), the Area 1 regression tests and the critical tests.
Iteration 1 retrospective: the Iteration 1 test plan (the user stories with their acceptance and done tests, the Area 1 regression tests and the critical tests) is reviewed, and the master plan's doneness, regression and critical tests are updated accordingly.
Iteration 2: an Iteration 2 test plan is created alongside the Iteration 1 plan, containing User Story 3 (Acceptance Tests 5 and 6, Done Tests 1 and 2), User Story 4 (Acceptance Tests 7 and 8, Done Tests 1 and 2), the Area 2 regression tests and the critical tests; the master plan continues to hold the doneness, regression and critical tests.
Iterations 3..n and release iteration planning: the master plan and the earlier iteration test plans feed a release test plan, which groups the critical tests (Areas 1 and 2 plus selected acceptance tests) and the regression tests (Area 1: Acceptance Tests 1 and 25; Area 2: Acceptance Tests 43 and 87).
Anatomy of an iteration (Builds 1 through 7):
- DEV: sprint plan; implement User Story 1 (US1); implement US2; fix bugs
- TEST: sprint plan; write tests for US1; write tests for US2; test US1 and file bugs; verify fixes; test US2 and file bugs; regress impacted tests; verify fixes
Automated Testing: automation, supported technologies, test configurations, test settings
Automated testing
- Can this be automated?
- What is the application type? Was it built with testability in mind?
- If not suitable for full automation, what about partial automation?
- What is the cost of automation?
- What is the cost of maintaining this test automation?
Automated testing
- What is the benefit of automating this test?
- Should we design big end-to-end tests, or more modular tests?
- What is the best way to automate this test case?
- How do I make this test case robust?
- What else can / should we automate?
Test Configurations
- What configurations do we need to support?
- What configurations are likely to surface problems?
- What configurations are likely to yield the same results?
Test Settings
- What information do we need, for each type of test and for each machine in a test environment?
- What is the overhead associated with collecting the various pieces of information?
- Are there custom data diagnostic adapters we should invest in authoring?
Coded UI Tests (CUIT): architecture, search & filter, databinding, platform support
Architecture / data flow: the Coded UI Test and the test runner sit on top of the playback & API layer and the recorder (which produces code/XML); both rely on the Technology Abstraction Layer (TAL) and low-level hooks targeting MSAA/UIA, web, and third-party technologies.
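A hand-coded Coded UI Test sketch against this stack, assuming the Microsoft.VisualStudio.TestTools.UITesting API; the URL and control ids are hypothetical, and a recorded test would normally go through a generated UIMap instead of hand-written searches.

```csharp
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class LoginPageCuit
{
    [TestMethod]
    public void ClickingLogin_ShowsWelcomeMessage()
    {
        // Hypothetical application URL.
        BrowserWindow browser = BrowserWindow.Launch(new System.Uri("http://webserver/app/login"));

        // Locate the login button by its (hypothetical) HTML id and click it.
        HtmlButton loginButton = new HtmlButton(browser);
        loginButton.SearchProperties[HtmlButton.PropertyNames.Id] = "loginButton";
        Mouse.Click(loginButton);

        // Assert that the welcome message element now exists on the page.
        HtmlDiv welcome = new HtmlDiv(browser);
        welcome.SearchProperties[HtmlDiv.PropertyNames.Id] = "welcomeMessage";
        Assert.IsTrue(welcome.Exists, "The welcome message was not displayed.");
    }
}
```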
Lab Management: architecture, lab environments, network isolated environments, snapshots, lab workflow
Lab Management Architecture
Lab Environment:
- Create an environment from virtual machines in the library
- Refresh the list of environments
- Create an active environment by deploying an environment from the library
- Operations on the selected environment
- List of environments in the team project
- Virtual machines in the selected environment
Thank You