This presentation was made for FiSTB Testing Assembly 2014 by Ismo Paukamainen.

About Ismo: Ismo started at Ericsson R&D in Jorvas in June 1990. He has worked in development projects for different mobile networks (NMT900, GSM, PDC, DCS1800, 3G, 4G) and node products (Mobile Service Center and Media Gateway for Mobile Networks). Ismo has participated in different test activities (e.g. function test, system test, process methods and tools, test strategy work) in various roles (e.g. as systems tester, test project manager, test coordinator, test analyst, section manager). Today he works 50% in a test team doing robustness and stability tests and the other 50% at corporate level in the role of Knowledge Area Driver for the Knowledge Area Test, a corporate-wide community for testing.

1. About the Context
   1.1. Ericsson
   1.2. Media Gateway
   1.3. Agile Transformation
   1.4. Cycles in Development
2. Test Strategy, Test Analysis and Test Pyramid
   2.1. Test Strategy
   2.2. Test Analysis
   2.3. Test Pyramid
3. Continuous Integration
   3.1. Continuous Integration
4. Continuous Assurance of the System Quality
   4.1. Continuous Assurance of the System Quality
   4.2. System Quality Visualization
5. Test Automation
   5.1. Test Automation Setup
   5.2. Issues in Test Automation
6. Problems in Transformation and Achieved Benefits
   6.1. Problems in Transformation
   6.2. Benefits of Transformation
7. What Next
8. Five Takeaways


Company: Ericsson is the driving force behind the Networked Society, a world leader in communications technology and services. Our long-term relationships with every major telecom operator in the world allow people, businesses and societies to fulfill their potential and create a more sustainable future. Our services, software and infrastructure, especially in mobility, broadband and the cloud, are enabling the telecom industry and other sectors to do better business, increase efficiency, improve the user experience and capture new opportunities.

With more than 110,000 professionals and customers in 180 countries, we combine global scale with technology and services leadership. We support networks that connect more than 2.5 billion subscribers. Forty percent of the world's mobile traffic is carried over Ericsson networks. And our investments in research and development ensure that our solutions and our customers stay in front.

Founded in 1876, Ericsson has its headquarters in Stockholm, Sweden. Net sales in 2013 were SEK 227.4 billion (USD 34.9 billion). Ericsson is listed on the NASDAQ OMX stock exchange in Stockholm and on NASDAQ in New York.

Ericsson is a large SW company. It was ranked 5th on the list of the 100 largest SW companies globally in 2011: http://www.softwaretop100.org/software-industry-trends-2011

1,000 employees (over 600 in R&D) in Finland on three sites: Kirkkonummi, Turku and Oulu.

Quality of the Media Gateway is benchmarked constantly. Wide deployment, many operators, and many nodes with several configurations => different kinds of node upgrades and many deliveries of software packages. SW is loaded onto the Software Gateway, where it is downloadable by all customers => possible quality problems will be experienced very quickly in production. ISP (In Service Performance) requirements are high due to the high economic and brand losses for the operators. Telecommunication network disturbances usually end up in the daily papers/news.

A media gateway is a translation device or service that converts digital media streams between disparate telecommunications networks such as PSTN, SS7, Next Generation Networks (2G, 2.5G and 3G radio access networks) or PBX. Media gateways enable multimedia communications across Next Generation Networks over multiple transport protocols such as Asynchronous Transfer Mode (ATM) and Internet Protocol (IP). [Wikipedia]

The Media Resource Function (MRF) provides media-related functions such as media manipulation (e.g. voice stream mixing) and playing of tones and announcements. [Wikipedia]

Border Gateway Functionality (BGF) is a media function located at the edges of the service provider's IP network. The controlled functionality includes e.g. IPv4/IPv6 conversion, Network Address Translation (NAT) traversal, traffic screening, and topology hiding at the IP packet level in the access, core, or inter-core network.

Transformation: Media plane development moved from incremental development with the Rational Unified Process (RUP) to Agile. From an organizational point of view, two organizations, the design department and the Integration & Verification department, were merged into one development unit. There are 36 teams in total in Finland, Hungary and the US for media plane development, with approximately 5-8 persons per team:
- Cross functional feature teams
- Independent test teams supporting in system level tests
- Test tools and tool framework development teams

- Official testing is always done on the Latest System Version, LSV
- Benefits of one-track development:
  - Makes it possible to release a SW package once per week if needed
  - Correction mapping (of defects) between releases is minimized in a one-track delivery strategy
  - The number of supported upgrade paths is smaller

Definitions:
- A red dot means that not all Quality Areas are releasable (see chapter 4)
- A blue dot is a snapshot of the main track SW. It means that the final compile has been made and no changes in the SW are possible anymore in the release branch.
- The date/week when the release SW branch is made is dynamic. It's more effective to fix the code in the main branch and take a new snapshot than to fix the defect in both the main and the release branch. (Otherwise in practice it would mean two tracks open for development all the time. Now the extra track is limited to test activities only.)
- At the black dot the developed SW is put on the SW gateway and is accessible to all customers


Test Strategy: The testing quadrants were used as a means of communication when creating the new test strategy and agreeing on the scope of testing. The idea of the quadrants was easily accepted by everyone involved because it was written outside the organization, by neither the development nor the integration & verification organization.

The aim is that cross functional teams would cover all the quadrants, but in practice:
- They mainly cover Q1 and Q2, including functional tests and short load tests in the production environment
- The continuous integration machinery covers mainly functional tests (Q2) and a small amount of system tests in a production-like environment (Q4)
- The rest is done by independent test teams, that is, non-functional tests in a production-like environment (Q3 & Q4)

Test Analysis in the Early Phase Program (EPP): Investigations are triggered by various sources, predominantly by Product Management, and carried out by the EPP. The result of this investigation process is a one-pager of a feature plus possible additional material. Verification Analysis (VA) = verification impact, tool and test environment analysis: cost estimate and HW forecast, effort & lead time estimate as part of the one-pager. Resp: participants from cross functional teams and independent test teams in the Product Owner function.

Test Analysis in the Feature Concept Study (FCS): Investigation results are collected into the FCS. It contains the user stories (requirements), the overall verification analysis of the feature, and the cost in terms of story points. Enhanced VA: features are analyzed in more detail; possible change requests and external impacts, e.g. platform impacts, are triggered and processed forward together with product management. Tool, environment and HW impacts can also be recorded as new user stories => test user stories to the product backlog, HW orders, work orders for test tools & Network Plan tasks. This is the basis for overall test planning:
- Timing of features (e.g. feature dependencies taking into account verification and platform dependencies, tool & HW lead times)
- Requirement coverage, who covers what (coordination between cross functional teams and independent test teams)
Resp: Product Owner function (participation from cross functional teams and independent test teams).

Test Analysis in cross functional teams: The contents of the sprints are analyzed by the cross functional teams; updates in the wiki and on white boards.

Test Analysis in independent test teams: The contents of LSVs (sometimes also LLVs, e.g. in case of platform changes) are analyzed by the independent test teams; updates on white boards. This is because the scope is mainly in system legacy functionality, to see that nothing is broken.

Floors of the pyramid from the bottom:
1st Floor: Development work is done on various local software versions. Developers test their own code.
2nd Floor: Cross functional teams may have their own team versions, but they are supposed to commit to the main branch continuously after team tests and static analyses.
3rd Floor: Automated tests (= "washing machines") are run 4 times a day on the main branch for all committed code. The latest run of the day becomes a new common LLV (Latest Local Version).
Top Floor: When there is a new common LLV, automated system level tests are run on it (= once per day). Cross functional teams also perform testing of new functionality on the common LLV. Once per week one of the LLVs is selected to be an LSV. The selection is done by the Delivery Manager together with the Product Owners, based on input from the teams. Independent test teams may start testing already on the LLV. They continue system level testing on the LSV.


Development work is done on various local software versions. Developers test their own code. Teams may have their own team versions, but they are supposed to commit to the main branch continuously. The most crucial factor in the success of Continuous Integration is human, not machine! If the developers do not commit frequently, there is nothing to build, nothing to integrate, nothing to test. That is, there is nothing continuous. Frequent commits must be part of the working culture.

How the CI automation works: the Continuous Integration work cycle consists of the following basic steps:
1. Checking in, i.e. committing new source file content into the version control system
2. Compiling the new source code into target binaries
3. Building an installable upgrade package from the new version of the target binaries
4. Upgrading the test node with the newly created package
5. Executing a test campaign on the new software to verify that its legacy and newly added functionality work

As said, the first step is a human interaction; the subsequent steps are automatically initiated by a trigger from the previous one. The CI automation continuously polls for commits. A commit triggers a target build, which is followed by creation of an upgrade package. The upgrade package for the Latest Local Version (LLV) is then installed into the production-like test environment where the automated tests (= "washing machines") are run. This is done 4 times per day on the main branch, i.e. for all committed code. The tests cover functional tests, short stability tests and also some selected non-functional system test cases. The latest run of the day becomes a new common LLV. Teams perform functional tests on the common LLV. Automated system level tests are run once per day on this common LLV. Once per week one of the LLVs is selected to be a Latest System Version (LSV). Independent test teams perform their tests on the LSV, but they are also asked/allowed to take a daily LLV, e.g. in case of platform changes, just to get fast feedback about the changes.
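As a minimal sketch of this poll-build-upgrade-test loop (the build, packaging, node-upgrade and test-campaign commands are hypothetical placeholders for internal tooling, not the actual Ericsson setup):

```python
import subprocess
import time

POLL_INTERVAL_S = 300  # the CI automation continuously polls for commits

def latest_commit() -> str:
    """Return the newest commit on the main branch from version control."""
    subprocess.run(["git", "fetch", "origin"], check=True)
    result = subprocess.run(
        ["git", "rev-parse", "origin/main"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def run_stage(name: str, cmd: list[str]) -> None:
    """Run one pipeline stage; a failure stops the cycle for this commit."""
    print(f"--- {name} ---")
    subprocess.run(cmd, check=True)

def ci_cycle(commit: str) -> None:
    """Steps 2-5 of the work cycle; step 1 (the commit) is the human part."""
    run_stage("compile target binaries", ["make", "build"])              # placeholder
    run_stage("build upgrade package", ["make", "upgrade-package"])      # placeholder
    run_stage("upgrade test node", ["./upgrade_node.sh", commit])        # hypothetical script
    run_stage("run test campaign", ["./run_tests.sh", "--suite", "ci"])  # hypothetical script

if __name__ == "__main__":
    seen = None
    while True:
        head = latest_commit()
        if head != seen:  # a new commit triggers the whole chain
            ci_cycle(head)
            seen = head
        time.sleep(POLL_INTERVAL_S)
```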


Background: Before agile, system test was a very late activity with a long lead time. It was often hard to convince the project of the need for system tests, which require human and machine resources for many weeks. This was because the requirements for the product are most often about the new functionality, not about the non-functional system functions which are in the scope of system tests. The fact is that only ~5% of the faults found after a release are in the new features; the rest are in the customer-perceived quality area.

From a system test phase to continuous assurance of the system quality: When moving to agile we skipped system test as a test phase. This was solved by setting up independent test teams covering the following areas (we call them Release Areas):
- Upgrade
- Operation & Maintenance
- Signaling
- Single Traffic & Features
- Media Quality & Security
- Stability
- Robustness
- Characteristics & In Service Performance
DEATH TO THE SYSTEM TEST PHASE!!! These areas belong to the third and fourth testing quadrants, i.e. the quadrants that critique the product. Testing is done on weekly LSVs (sometimes also on LLVs).

Why independent test teams: It's not efficient that all or many cross functional teams do the same tests. It would require more test environments for the cross functional teams and also special test competence. Now the test environments and the competence are concentrated in independent test teams specialized in their own areas.

What information about the progress/quality is really needed? What does it mean if you have run 500 test cases and found 8 defects? What about the severity of the defects? What can you tell about the system readiness or quality based on the information you have? Do you know if the product is releasable?

Visualization: An easy-to-understand radiator view instead of test progress follow-up showing graphs of the number of executed test cases and the number of written defect reports. Trust people and their judgment about the quality! Everyone in the organization should be aware of the status of the product in real time.
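As a hedged sketch of such a radiator (the area names mirror the Release Areas in chapter 4; the per-area verdicts would in reality come from the test teams' judgment and are hard-coded placeholders here):

```python
# One red/green verdict per quality area instead of raw counts of executed
# test cases and defect reports; the verdicts below are placeholders.
RELEASE_AREAS = {
    "Upgrade": True,
    "Operation & Maintenance": True,
    "Signaling": False,  # the responsible team judges this area not releasable
    "Stability": True,
    "Robustness": True,
}

GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

def show_radiator(areas: dict) -> bool:
    """Print a one-glance status board and return overall releasability."""
    releasable = all(areas.values())
    for name, ok in areas.items():
        verdict = f"{GREEN}RELEASABLE{RESET}" if ok else f"{RED}NOT RELEASABLE{RESET}"
        print(f"{name:<26} {verdict}")
    print("-" * 42)
    print("Product releasable." if releasable
          else "Red dot: not all Quality Areas are releasable.")
    return releasable

show_radiator(RELEASE_AREAS)
```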


The Agile transformation was a good motivator and a business case to get more focus on (read: to get money for) test automation. That is because agile ways of working are not possible without proper test automation, given the need for frequent re-testing in the teams and as part of continuous integration. So, test automation has taken a huge leap since the transformation started.

Common Test Automation Framework: The media plane development organization has a common test automation framework which is developed and maintained by a tool support team (~4 persons). This team also develops and maintains the system robustness test cases used by the independent test teams. That is because these cases cover non-functional legacy requirements of the system and do not tend to change often. The same test framework is used by the cross functional teams, CI and the independent test teams.

Same test cases for cross functional teams and CI: Test cases for the functional tests are developed and maintained by the cross functional teams. Teams maintain test suites for all the features. These cases are earmarked (in metadata) for the continuous integration automation into 2-hour, 4-hour or 6-hour test run groups. Teams may also use other tags to group test cases, e.g. to make a test run covering some specific functionality or signalling standard. When an automated test run for CI is started, it uses the cases directly from the teams' test suites. So there is no need to maintain a separate test set for CI, which of course saves resources. A hedged sketch of this tagging idea follows below.
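The framework itself is internal to Ericsson; as an illustrative stand-in, pytest markers can express the same earmarking idea, with marker names mirroring the run groups above (the helper functions are hypothetical):

```python
import pytest

# Hypothetical helpers standing in for the internal test framework API.
def setup_call(a_side: str, b_side: str) -> bool:
    return True

def run_traffic_mix(duration_minutes: int) -> None:
    pass

@pytest.mark.ci_2h      # earmarked for the 2-hour CI run group
@pytest.mark.signaling  # extra tag: grouping by functionality/standard
def test_basic_call_setup():
    assert setup_call("A", "B")

@pytest.mark.ci_6h      # longer case: included only in the 6-hour run group
def test_short_stability():
    run_traffic_mix(duration_minutes=30)
```

The CI machinery would then pick cases straight from the teams' suites with e.g. `pytest -m ci_2h`, which is what removes the need for a separately maintained CI test set.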

Issues in test automation:

1) Not everything is possible or reasonable to automate. Select carefully what to automate:
- Consider how often the case is executed and how much time manual execution takes compared to developing and maintaining the automated case.
- Try to avoid areas that change often. Frequent changes may cause a heavy maintenance workload.
- Consider the oracle problem, especially in system tests. For example, in the robustness area it is not easy to define the expected result. That is, assigning verdicts requires manual analysis of logs after the run, and this can be a time-consuming task.

2) Test code quality should equal product code quality. When automating more and more tests, the amount of test code grows. Comparing figures for product code and test code may look like the following figures from an Ericsson organization:

Metric                   Source Code   Test Code
Size (kloc)              853,98        1023,7
#files                   3378          1379
#functions               8802          11744
#constants/kloc          82,53         290,98
Maintainability index*   51,1          39,1

*) J. Novak & G. Rakic: Comparison of Software Metrics Tools for .NET, Proc. of the 13th Int. Multiconference Information Society (IS), Vol. A, pp. 231-234 (2010)

This comparison makes it painfully visible that the quality of the test code is maybe not as good as it should be. The heavy usage of constants indicates hard-coded parameter values, meaning that the test code is not very reusable, or that it requires some work whenever the values are changed.
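A hypothetical illustration of what the #constants/kloc figure hints at, comparing a hard-coded case with a parameterized one (the helper, the address values and the config keys are invented for the example):

```python
import json

def ping(host: str, port: int) -> bool:
    """Hypothetical stand-in for the framework's reachability check."""
    return True

# Hard-coded: the address and port are baked into the case, so every lab or
# configuration change means editing the test code itself.
def test_node_reachable_hardcoded():
    assert ping("10.0.17.42", port=5060)

# Parameterized: values come from one environment file, so the same case is
# reusable across test environments without touching the code.
def test_node_reachable_parameterized():
    with open("test_env.json") as f:
        env = json.load(f)
    assert ping(env["node_ip"], port=env["sip_port"])
```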


Unequally spread test competence: The biggest problem was related to test competence and how it was spread in the beginning. When the testers from the original test organization were spread over the new cross functional teams, the competence was not spread evenly. Some teams got less experienced testers. This led to a situation where some test activities requiring more experience were not done well. The consequence was that in some teams testing was seen more as a bottleneck: too many test-related tasks to do in a two-week sprint.

Before agile, the teams were product teams having lots of lower level automated test cases. When moving to feature teams, all the existing lower level test cases (2000-3000 cases) were still usable, but the knowledge about the cases stayed in the original product teams. Not all of the feature teams were aware of what the cases were supposed to test. So, when selecting test cases, some cross functional teams did not dare to leave cases out but tended to maintain and run all of the existing cases, and even created some more. The reason the teams did not dare to leave cases out was twofold: first, the teams did not have enough information about the existing cases, and second, they could not analyze what needed to be executed. The teams also had the option to run functional tests in the target test environment (i.e. a test environment using production-like hardware) within the same time frame as these lower level tests. With more test competence they could have covered a set of old automated lower level test cases with a couple of functional test cases in the target environment (node level).

Competence to do test analysis is needed when analyzing testing needs in sprint planning. In many cases the need for testing was estimated too low. One factor in this equation was also the pressure to be more productive within a sprint: testing time was minimized. The limited ability to estimate testing needs was visible especially when the impacts were on the system level. The view was not always holistic enough from a testing point of view; the big picture was missing. Even experienced testers claimed that it is difficult to see the big picture when everything is done in two-week sprints.

In short, if the testing responsibility is shared among the team members and the team does not have a dedicated professional tester, many test activities will most probably suffer, for example repeatability and reuse of test cases, test automation, and test analysis (including risk analysis), just to mention a couple of the activities.

The biggest benefits of the Agile transformation were in the product quality and in the increased focus on testing, especially on system test.

Collaboration between designers and testers has become daily business in the cross functional teams. This makes it easier to solve problems in the SW: support is always near you. In the teams, new functionality is implemented with an early start of testing the basic functionality in mind. This makes it possible to get the basic functionality working before starting tests in the production-like test environment.

Test automation has evolved a lot thanks to a good reason/business case; without the Agile transformation this would not have happened. Test automation has also affected the need for a test management system. Now that tests are in automated test suites, there is no need for a heavy test management system. Traceability to requirements runs from test case to user story and from user story to main requirement. When a user story is done, the requirement is considered tested. In the cross functional teams there is no need for follow-up of executed test cases. In the independent test teams the tests do not change much between releases in the sense of new cases, so the follow-up is built into the wiki. Traceability to requirements in the test teams is applicable only in some cases, e.g. for characteristics; often there is no requirement at all, e.g. in the robustness area.

Better view of the product quality: Product quality is visible all the time, not only after everything is implemented, as it was earlier in the system test phase. The whole development focuses on fast feedback on quality. Static code analysis is done in the teams as part of the commit process: 1) Gerrit* is used to show the changes, 2) 2-3 team members must review the changes before the commit is allowed. This is done before triggering the CI automation.
*) Gerrit is a free, web-based team software code review tool.

Experienced quality: The quality is also experienced to be better than earlier. Before moving to agile it might take the whole increment (= many weeks) before the system was in a state where starting system testing was possible. Now there is a new working package even daily. There is no quantitative evidence of the better quality, only statements from those who have years of experience in testing. It is quite hard to find good measures to compare, but the number of defect reports in the first six months in service for the few latest releases now shows an 80% decrease. There are differences between releases, though: some might contain new HW or big new features, whereas some might contain only small system improvements or features that are not used by all the customers. The maintenance hours have halved in the same period of time while the number of delivered nodes has doubled. This is, of course, also because of the one-track development, since there are no separate maintenance tracks. Would it have been possible to start one-track development without agile?

Some thoughts on what could be done next:

There is a need to add more non-functional test cases (e.g. robustness cases) to the CI test automation to get daily feedback also on the system features.

Test result analysis is one bottleneck for test automation. It often requires manual analysis to assign verdicts to test cases. This is time consuming and limits the amount of tests to automate. If this manual work cannot be automated, it needs to be organized so that it won't take too much time. In practice this could mean e.g. using colors to highlight words such as error, warning, etc., as sketched below.

There is a good practice of using static code analysis and reviews for production code in the teams. This needs to be expanded to test code as well: 1) use Gerrit* to visualize the changes in the test code (new/modified), 2) the team reviews the changes.
*) Gerrit is a free, web-based team software code review tool.
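A minimal sketch of that highlighting idea (the keyword list and the colors are assumptions to be tuned to the actual log format):

```python
import re
import sys

# Words worth a human's attention during manual verdict analysis.
HIGHLIGHTS = {
    "error": "\033[31m",    # red
    "fail": "\033[31m",     # red
    "warning": "\033[33m",  # yellow
}
RESET = "\033[0m"

def highlight(line: str) -> str:
    """Wrap each keyword occurrence in its ANSI color code."""
    for word, color in HIGHLIGHTS.items():
        line = re.sub(
            f"(?i)({re.escape(word)})",
            lambda m, c=color: f"{c}{m.group(1)}{RESET}",
            line,
        )
    return line

# Usage: python highlight_log.py < testrun.log
for raw_line in sys.stdin:
    print(highlight(raw_line), end="")
```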

Test competence: If not spread equally, think about other ways to support the teams, e.g. in test analysis and sprint planning. Product owners should take responsibility for checking that there are enough tests in a sprint. A dedicated testing professional position in a cross functional team is recommended.

Fast feedback: In waterfall ways of working the aim was to do as much testing as possible at the lower integration levels, because then the testing happened earlier in time and it was easier to find (and fix) faults close to the designer. In agile the aim is to get feedback as fast as possible. This means that the strategy is no longer to run a mass of tests at the lower level, but to run tests at the level that gives the fastest feedback. So it might be that running tests in the target environment (= production-like) serves feedback better, and the lower level is needed only to verify some special error cases that are perhaps not possible to execute on target.

Test automation is a must in agile: Use common frameworks (and test cases) as much as possible. Try to avoid extra maintenance work around automation, for example in Continuous Integration.

Independent test teams are a good way to support cross functional teams, especially to cover agile testing quadrants 3 & 4. Doing non-functional system tests in cross functional teams would mean: 1) possible overlapping testing, 2) a need for more test tools and test environments, 3) a competence issue, and 4) maybe too much to do within sprints. Independent test teams need to be co-located with the cross functional teams and have good communication with them. A sense of community!

Raise your organizational awareness of the product quality: Monitor the system quality (robustness, characteristics, upgrade...) and make it visible throughout the whole organization.
