Automated Module Testing of Embedded Software Systems


Automated Module Testing of Embedded Software Systems

Master's Thesis

Fredrik Olsson
Henrik Lundberg

Supervisors:
Thomas Thelin, LTH
Michael Rosenberg, EMP
Nicklas Olofsson, EMP


Abstract

When designing and implementing large-scale software, one of the most important means of reducing cost and improving quality is testing. This master's thesis opens with a thorough background to testing, with a focus on low-level testing. Ideas and theory for different kinds of testing, as well as available tools, are described. The thesis focuses on low-level testing because it saves both time and cost in the later phases of software development. Today's development teams are becoming more and more aware of the importance of low-level testing. Unfortunately, low-level testing is often deployed inefficiently, as it is poorly structured and very few tools exist. To facilitate low-level testing and raise its quality, this thesis presents a process for module testing and describes the development of an automated testing tool. The process should ensure that all module tests follow the same structure and that all modules receive the same type of low-level testing. The automated testing tool is a prototype that automates the procedure and saves time. The thesis also shows that the developed process and tool greatly improved software quality in the projects that used them.

Acknowledgements

First of all we would like to thank our supervisors Nicklas Olofsson and Michael Rosenberg at Ericsson Mobile Platforms AB for helping us through this master's thesis. We would also like to thank our supervisor Thomas Thelin at the Department of Communication Systems at Lund Institute of Technology for helping to give our report a professional and correct appearance. Finally, we would like to thank everybody at Ericsson Mobile Platforms AB who helped us by answering our questions and being supportive and encouraging throughout this thesis.

Henrik Lundberg & Fredrik Olsson
Lund, November 2003


Table of Contents

ABSTRACT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
1. INTRODUCTION
2. SOFTWARE ENGINEERING
   2.1 Introduction
   2.2 Software Process
      2.2.1 Software Specification
      2.2.2 Software Development
      2.2.3 Software Validation and Software Verification
      2.2.4 Software Evolution
   2.3 Software Process Model
      2.3.1 Waterfall Model
      2.3.2 Evolutionary Model
   2.4 Documents
      2.4.1 SRS - Software/System Requirements Specification
      2.4.2 SDD - Software Design Description
      2.4.3 SVVS - Software Verification and Validation Specification
      2.4.4 SVVI - Software Verification and Validation Instruction
      2.4.5 Other Documents
3. TESTING - A GENERAL BACKGROUND
   3.1 Introduction
   3.2 Planning and Execution
      Planning
      The Plan
      Test Report
   3.3 Validation and Verification
      Validation
      Verification
      IV&V - Independent Verification and Validation
   3.4 Testing Methods
      Static and Dynamic Testing
      Black-Box Testing
      White-Box Testing
      Regression Testing
   3.5 Low-Level and High-Level Testing
      Low-Level Testing
      Unit Testing
      High-Level Testing
      Integration Testing
      Incremental Testing
      Top-Down Incremental Testing
      Bottom-Up
      Big Bang
      System Testing
      Load Test
      Acceptance Testing
   3.6 Cleanroom Software Development
   3.7 Testing Tools
      When to Use Testing Tools
      Tools for Inspections and Reviews
      Test Execution and Evaluation Tools
      Capture/Playback
      Simulators
      Coverage Analysis
      Memory Testing
      Load Tests
4. PRESENT AND FUTURE MODULE TESTING AT EMP
   Terminology
   Background
   Control Module (ARM-Module)
   Audio Processing Module (DSP-Module)
   AMR Codec (DSP-Module)
   Baseband DSP (DSP-Module)
5. DESIGN
   Discussion
   Software Top Level Design Document
   Introduction
      Terminology
   High Level Requirements - Phone Perspective
      System Architecture
      Detailed High Level Design PC
      Detailed High Level Design Phone
   High Level Requirements - PC Perspective
      System Architecture
      Detailed High Level Design PC
      Detailed High Level Design Module Tester
      Detailed High Level Design Phone
      Detailed High Level Design Host
      Detailed High Level Design DSP
6. VALIDATION AND VERIFICATION
   The ARM and DSP Processes
   The Graphical User Interface
   The Developed C-Code
   Requirements Verification
   Product Evaluation
   Left To Do in the Product
7. DISCUSSION AND CONCLUSIONS
REFERENCES
APPENDIX A - SOFTWARE REQUIREMENTS SPECIFICATION (SRS)
APPENDIX B - MODULE TESTER CLASSES
APPENDIX C - CLASS DIAGRAM FOR THE USER-INTERFACE
APPENDIX D - DETAILED HIGH LEVEL DESIGN
APPENDIX E - TEST PROCESS HOST
APPENDIX F - TEST PROCESS DSP
APPENDIX G - MODULE TESTER USER'S MANUAL

1. Introduction

When testing software today, most of the resources available to companies are spent on system testing, in which the entire product is tested to see if it meets the requirements. Companies have developed large-scale strategies for this type of testing, as well as clearly designed processes that facilitate the work with the tests. As the demands on reliability and fault-free software increase, more and more people are assigned to testing. However, as the demands on the product increase, so do its costs: more testing requires more resources, and resources cost money. When a product is released there are three main concerns: the cost of the product, the quality of the product, and the time in which the product was produced, i.e. is the product still desired on the market? Unfortunately, this often creates a paradox, as the three demands cannot all be satisfied at once. If the product is to have high quality it has to be tested properly to remove all faults, and such extensive testing costs money and takes time, which leads to a higher cost for the customer as well as a longer development time. Of course, the developing companies want to keep the costs as low as possible and at the same time release the product as quickly as possible, but to be able to sell the product, it has to have quality, and that requires testing. Most companies have found some compromise in which they balance the quality, the costs, and the time at which the product is released. The first step in improving quality is often to form a special testing group that designs and performs the tests. This means that the tests are done more professionally and more independently, which increases the quality of the product.

However, as the software grows more extensive, the workload on the testing group and the time spent on testing increase, and it soon becomes both too expensive and too time-consuming to find and correct all of the faults. This is where the next step comes in. To reduce the time and resources the testing group spends on system testing, low-level testing is given more focus. Low-level testing confirms the quality of small parts (modules) of the software before they are put together into a complete software product. This type of testing is often not as well structured as it could be, and there are only a few commercial tools to help. However, if this type of testing is performed on all of the modules in a product, the time and resources spent on system testing are greatly reduced, and in large software systems this means that the quality of the product increases at the same time as the time and money spent on the product decrease. This might seem like a paradox, but studies in this area have clearly shown that for large-scale software development, low-level testing performed in a structured and automated fashion greatly enhances the quality of the product. At the same time it saves time in system testing, as the product has fewer faults.

Ericsson Mobile Platforms has realised this, and in its large-scale software development it wishes to enhance its already extensive low-level testing for mainly three reasons. First, it wants to unify the testing to make sure that all of the modules in a product receive the same type of low-level testing and thereby have the same quality, which makes it easier to plan the work with the product. Second, it wants to refine the quality of its already high-quality products, which is done by more extensive and thorough testing. Finally, it wants to reduce the time spent on resource-demanding rework in late phases. This work is greatly reduced if low-level testing is performed properly, as fewer failures are then due to faults in the logic of the module code.

To be able to introduce low-level testing into the development phases at Ericsson Mobile Platforms, two main things have to be done. First, a new process for module testing has to be developed. This process should contain careful instructions for how to design and perform module tests, and it has to be based on the processes currently in use to facilitate its incorporation. The process should also guarantee that all module tests are performed according to the same standards, which gives all the modules roughly the same quality. The process also helps when module tests are monitored: it becomes much easier to verify that a module has been tested and that enough testing has been performed, and this verification can be done relatively easily and be understood by all engineers, not only the developers. Second, a tool to facilitate and automate module testing has to be developed. The tool must provide the assistance necessary for the testers to perform unified module testing on as many of the available modules as possible. The tool also has to be very easy to use and understand, to be accepted by the personnel and to facilitate its incorporation at Ericsson Mobile Platforms. The process and the tool should be developed in unison to provide a complete testing environment for low-level testing. This master's thesis describes the work on the process and on the prototype of the automated testing tool.

Chapter two gives a brief description of software development, to provide a better understanding of how the prototype was developed and how software in general is designed. To explain why and how the tool was constructed, and on which principles the process is built, chapter three gives a background on testing in general, in which low-level testing in particular is discussed, before the development of the process and the tool is described. As the process and the tool introduce a new way of performing and looking at low-level testing among many engineers, it is of the utmost importance that their background is given ample space in this thesis. Therefore not only a theoretical background is given but also a more practical one, as much of the process and the tool is built on the requirements at Ericsson Mobile Platforms and the processes available today. Chapter four describes some of the testing methods currently available at Ericsson Mobile Platforms, and also contains a discussion of how to improve module testing by keeping the best parts of the currently available techniques. Chapter five describes the implementation of the automated testing tool; the description continues in the appendices, where a more thorough design is presented. Chapter six describes different evaluations of the developed process and tool, in which engineers at Ericsson Mobile Platforms tested the product in its real environment, and faults in, and improvements of, the process and tool are discussed. In many cases the improvements were also incorporated into the prototype or the process, but in other cases only solutions are discussed, as the time allotted for this master's thesis did not cover those parts of the product. In the last chapter the work is summarized and the results and experiences from the master's thesis are discussed.

2 Software Engineering

2.1 Introduction

In the early days of software development, programmers got a specification from the customer and directly started to produce the code. This approach was good enough in the beginning, but when software systems became larger and more complex it was no longer feasible. [19] By the middle of the 1970s the maintenance cost of software was higher than the development cost, and it was rising. Another trend was that hardware costs decreased while software costs continued to increase. The available software development techniques did not meet the rising demands. This is what was called the software crisis. [3]

The solution to the problem was software engineering. Applying an engineering approach during all phases of software development made it possible to create larger and more complex software systems. Every phase, from the requirements specification to maintenance of the system, should be subject to thorough and systematic engineering work. That includes not only using existing theories and methods but also inventing new ones wherever they can support the development process. With the help of software engineering it is possible to produce high-quality software and accompanying documentation. The definition of high-quality software differs from program to program and reflects the behaviour of the software during execution rather than what the software does; for example, it could be the response time or the reliability of the software. [7]

2.2 Software Process

A software process is needed during development. A process consists of a step-by-step description of how to manage development activities in every phase. A high-quality software process is a prerequisite for high-quality software. [7] The software process activities differ from project to project but mainly consist of the following four activities:

2.2.1 Software Specification

During this phase the definition and all the constraints of the system are written down. It is important that everything is thought of, since correction in a later phase is much more expensive in both time and money. Not only the developers need a good software specification, but also the salesmen and lawyers of the company, since the contracts with customers often rely upon it. The developers usually have a more detailed specification than the customers. The activities in this phase should be described in an SRS, Software Requirements Specification (see 2.4.1). This document defines what the

customer and supplier have agreed on and, because of that, it is the most important document of all.

2.2.2 Software Development

The software development phase consists of a design part and an implementation part. During the design part all the major issues are decided, e.g. system structure, data structures and interfaces between parts of the system. Sometimes the choice of algorithms is made during this part, and sometimes during the implementation part. The design is developed in an iterative manner: it starts at a low level of detail and is gradually transformed into a higher level of detail. If an evolutionary process is used, the software development phase may also include returning to the previous phase to change the specification. The documents needed in the software development phase may be created directly after the specification phase but before this phase. They consist of the planning and design of the development. The development should be planned and described in an SDD, Software Design Description (see 2.4.2). Sometimes the SDD is replaced with two documents: the first is the STLDD, Software Top Level Design Document (see 2.4.2), which describes the design at a higher level, e.g. what data structures to use, and the second is the SDDD, Software Detailed Design Document (see 2.4.2), which describes the design at a level closer to the implementation.

2.2.3 Software Validation and Software Verification

Software validation is when the software is compared with the customer's requirements and demands. This is a very important activity, since if the customer is not satisfied with the result the product cannot be considered correct (see Validation). Software verification is when the software is compared with the requirements on the software. This is also an important activity, since a product that does not comply with the requirements cannot be considered correct either. The verification of the code is usually known as testing (see Verification). The validation and verification activities should be performed during and after the implementation in every phase. When testing extensive software systems it is important to test the different parts in isolation, to make clear that the tested part is correct and works as its interface to the environment and the rest of the system specifies. This isolated testing is performed at different levels (see Figure 2.1: The testing process, and 3.5 Low-Level and High-Level Testing).

Figure 2.1: The testing process

The documents in this phase are created even before the writing of the code has started. The first activity is to create an SVVS (see 2.4.3), which, for example, specifies what types of tests to perform when the code is written and the actual testing starts. The details of the testing are described in the SVVI (see 2.4.4). That document shows, among other things, what to test and what the outcome of the tests is supposed to be.

2.2.4 Software Evolution

Software evolution is the correcting of the system after the product is considered done. This includes both updates and expansion of the functionality of the developed software. It is usually many times more expensive than changes made during the development phase. Software evolution costs are steadily increasing and are often a large part of the total work. The systems delivered today are hence not as static as they once were, and changes can be made due to changing requirements and customer needs.

2.3 Software Process Model

A software process model is an abstraction of a software process. This means that a process model is a condensed software process viewed from a specific point of view. [7] There are a number of different software process models. Two of the most common are the waterfall model and the evolutionary model; these are described in more detail below.

2.3.1 Waterfall Model

In the waterfall model, every phase shown in the figure below is active only once during each iteration. When the activities of the iteration are finished, the work continues to the next phase and does not return to the previous phase during that iteration. In reality, the work on the next phase starts before the previous one is finished. After a small number of iterations a part of the process is

considered finished for the moment. The next iteration then starts at the next phase of the process. Any problems that arise are left for a later solution. [7] The result of every phase is one or more documents, each of which should be approved.

Figure 2.2: The waterfall model

Using the waterfall method (see Figure 2.2: The waterfall model), the design and functionality analysis are made at the beginning of the development. At that time the engineers have little knowledge about the project and its constraints; they do not know which parts are hard to implement and test. This can affect the development in a later phase. [6]

2.3.2 Evolutionary Model

Using the evolutionary model, a first version of the program is developed and then given to the customer for comments. Depending on the comments, the program is changed and new features are added. This continues until the program is considered finished. With this model the development phases are carried out concurrently, not one after another as in the waterfall model (see above). This makes it feasible to produce the specification during development. [7] One of the risks with this model arises when not enough resources are assigned to testing at an early stage. In the waterfall model testing is done in the late part of development, but in the evolutionary model testing starts at the beginning and continues through the entire project. The testing costs may seem large in the beginning, but that is evened out later, as not so much testing has to be done at the end. [6]

2.4 Documents

It is important to produce documentation during development that describes the software system. The different parts of the development have their own documentation; for example, the specification is thoroughly described in a Software Requirements Specification document.

2.4.1 SRS - Software/System Requirements Specification

This is the most important document, and the specification should be designed very carefully. The document defines the criteria that the customer and supplier have agreed on. It is therefore important to involve the customer in this document, or at least to get his approval of the finished document. The SRS is the base for both the design and the testing, and therefore faults in the document are very costly, especially if they are found after the approval of the document. [2, 3]

The SRS should contain a product overview that puts the product into perspective with regard to other products. If the product has any dependencies on other products, it is also important to clearly state them in this document. The product should also be briefly described, and the functions should be categorised to give a better understanding of the product to be developed. Specific features should also be included, together with a plan for when each feature can be released. [3]

All limitations of the product must be described. This includes, for instance, hardware limitations, interface requirements and communication protocols that must be supported. All assumptions made must be clearly stated, so that the customer can see how and where the assumptions affect the product. [3]

The user interface must be described in detail. This includes pictures of all expected screens and careful descriptions of all user interactions. This is often a very important part for the customer, and to get satisfied customers it is important to leave as little room as possible for interpretation, so that the customer really gets what he wants.

The last and largest part contains the specific requirements. The demands should be divided into sections for an easy overview, often by dividing them into functional and non-functional demands. The functional demands are related to input and output. Non-functional demands deal with limitations of the system, for instance memory, reliability and efficiency. The functional demands are often split further into general demands that are valid for all parts of the product, specific demands that deal with one part of the program, and interaction demands that deal with combinations of the different parts of the system. [3]

When the demands are written it is important to check that they have a few important properties:

- clear - no room for interpretation
- consistent - prioritised between conflicting requirements
- reasonable - all demands must be possible to fulfil
- measurable - all demands must be testable
- modifiable - all demands should be revisable

Make sure that all of the demands from the customer have been covered in the SRS. If something is missed, it costs a lot to correct in the later stages. [2, 3] When the SRS is developed it is important to include the testers. The testers make sure that all the demands included in the SRS are testable. It also

enables the testers to start designing the tests earlier, as they do not have to wait until the SRS is completely finished. [2]

2.4.2 SDD - Software Design Description

This document is a description of the design of the software. Sometimes it is divided into two smaller documents, the STLDD (Software Top Level Design Document) and the SDDD (Software Detailed Design Document). The STLDD covers the high-level design and the global data that is shared between different components of the system. The SDDD is a more detailed description, in which the implementation itself is often included. [2, 3]

2.4.3 SVVS - Software Verification and Validation Specification

This is the first test-related document that is developed. The SVVS is developed in connection with the SRS, either at the same time or directly after. The document is developed by the testers and includes three main parts. The first part is inspections: all inspections planned for the project are listed and described. The second part is the tests: the team decides which types of tests to perform and designs specifications for functional tests, system tests and regression tests. The third part is a quality evaluation, in which goals are set up to meet the quality aspects of the project. [2]

2.4.4 SVVI - Software Verification and Validation Instruction

As soon as the SVVS is finished, the test team should start designing the tests in more detail. Each test should contain a short introduction that describes what is tested, how the test is performed and what the expected outcome of the test is. Every instruction should contain only one test case, so that faults can be tracked more easily. All these instructions are collected in the SVVI. The SVVI is also known as the Test Procedure. [2]

2.4.5 Other Documents

Of course there are many other documents that can be used to structure the work. These include the Software Configuration Management Plan (SCMP), which specifies methods to identify software products and to control and implement changes. This document is normally only written in really large projects that involve many engineers. Another important document is the Software Release Plan (SRP), which defines when and how a product should be released. This plan answers questions such as how baselines are created, who is responsible for releases, and what the requirements are for creating baselines and releases. However, the documents described in the sections above are the most important and the ones that should get the most attention from a testing point of view. [3]

3 Testing - A General Background

3.1 Introduction

All programs today are written by human developers, and humans, even though they do their very best, inevitably make mistakes. These mistakes can be both costly and the cause of large-scale problems, and programs are tested to eliminate them. To find all of the faults, an enormous amount of time would have to be spent, and the cost would be far more than any customer could pay. Instead, good test planning is developed in which the most critical code is covered as well as possible. Twenty percent of the code contains eighty percent of the faults, so a big part of testing is to find out which code to test, the so-called critical sections. [5]

To get the best result from testing, it is always better to test the code while it is being developed than to wait until the product is finished. If a fault is found and corrected at an early stage, its cost and impact can be minimised. If the fault were left for later testing, the developers would have to go back in the development process and rewrite the code, or perhaps even rewrite the design, when the fault is finally found. Often the mistake has influenced other parts of the code that also have to be rewritten, which takes time and costs money. Moreover, if you look at the defect distribution you see that most of the faults are found in the early stages of development; up to sixty percent of the faults are found here. So a good rule of thumb is to test as soon as possible, to prevent a long and costly rewriting process. [5]

When deciding who should perform the tests, it is important to keep development and testing somewhat apart, so that independent testing of the product is achieved. If a developer tests his own code it is not very likely that he really finds all the faults, because he is reluctant to find faults in his own work. It is often better that an independent tester performs the testing. This person can then attack the product with the goal of finding all faults and all bugs. To keep a good atmosphere between developer and tester, it is, however, important that the developer understands and appreciates the work of the tester. To achieve this, testers have to attack the product, not the developer, in the testing process. [5]

The goals of the product decide when it is time to stop the testing process. Almost none of the commercial products on the market today are flawless. Instead, the goal is to make the product as good as possible; the faults left should not cause failures of the entire product and should not occur frequently. It is very hard to say exactly when to stop testing: on one hand you want a good product, and on the other hand you want it on the market as soon as possible. You have to weigh these two against each other and reach a conclusion on when to stop the testing. [3]

From this we reach the conclusion that for effective testing there are a few important principles to follow:

- Complete testing is not possible - try to find the critical sections
- Testing is difficult - try to assign the best people to the testing

- Testing must be planned - try to cover as much as possible
- Testing requires independence - developers should not test their own code

There are great economic benefits in testing. If the testing is done properly, the product has higher quality and the customers are more likely to be satisfied. Customers are not satisfied with a product that contains a lot of faults and will not pay for such a product; this means there will be no revenue and no profit. So we need testing to help build a product that we can sell. If the product is tested and contains very few faults, the customer is satisfied and pays the price we put on the product. [4, 5]

Another factor is the time spent on developing the product. It has been argued that testing can actually save time. The testing process takes time itself, but this is often recovered in later stages, which means time is actually saved. For instance, if good testing is performed at the beginning of the development, there will be far fewer faults later in the development process. Faults in the later stages can be hard to trace and take a lot of time, so intense testing during development often actually saves time. [3]

Finally, we have to take into account the resources and money spent on fixing the product after it has been released to the customers. If the product contains a lot of undetected faults, much time and money have to be spent on writing patches for the product, and this cost has to be included when the total cost of the development is calculated. With intense testing you can eliminate most of the faults and spend far fewer resources on patching the product. [5]

Testing can be seen as a tool for achieving quality in the product, but quality is a difficult term to define, and it is not certain that more testing gives the product higher quality. When you measure quality it is important to realise that the customer is the only one who can determine whether the product has the required quality or not. Testing can remove faults from the program and make the program run smoothly. Since all tests are built on the specification of the product, it is not certain, for instance if the specification is very weak, that testing gives the product higher quality. It is equally important that the specification actually reflects what the customer wants. If the specification is a good reflection of what the customer wants, then testing does give the product higher quality, as it turns the product towards the required behaviour. [6]

When quality is measured it is important to know that it rests on two bases. Of course the functionality of the program must be correct: the program must be able to do what the customers want it to do. But there is also the question of reliability. There is no use for a product that meets all of the requirements of the customer if it crashes all the time. So testing also has to make sure that the program is reliable and does not keep the user wondering if and when it actually works. [6]

An example of this, in our view, is Windows 95, which had all the functionality, but unfortunately the customers were not very satisfied with the product because it kept crashing all the time. Microsoft had to try to

fix this and came up with Windows 98, which was slightly better. But it was not until recently, when Microsoft developed Windows XP, that they really focused on making the product reliable, and this is also reflected in the customers, who have become more and more positive towards the product. Making the product reliable is part of the testing process and has to be tested just as thoroughly as the functionality.

3.2 Planning and Execution

Planning

In small projects with only limited software development, planning of the tests is often not that important, but as soon as the project becomes medium-sized or, even worse, large, planning becomes one of the most important parts of the testing procedure. Without planning, the testing of large projects is very difficult. Planning greatly improves the effect of testing and at the same time minimises the time and money spent on it. Another aspect is that when the testing is done in a team, every team member can express his views on the testing, so that finally all can agree on and work after a common plan. The workload can also be distributed more easily between the members of the team. [6]

The Plan

When you plan the testing there are several important issues which have to be taken into account. First, however, you have to have the specification for the project; all tests are built on the specification. The planning should start as soon as the specification is ready, and preferably before the development starts. The reason is that designing tests often helps to find faults in the specification, and the specification can easily be revised if the development has not yet started.

The first aspect to decide is what to actually test, and what not to test. The test team has to evaluate every aspect and risk associated with not testing certain areas.
That some areas are not tested is often because the test team has to prioritise: they do not have the resources to test everything. This aspect is often the most important and should be handled as thoroughly as possible.

The second aspect is the time spent on testing. Important things to decide are: the number of tests that should be run, the amount of time spent on each test, and the amount of time spent on writing and correcting the tests. It is important to evaluate at an early stage how much time should be spent on each of these; otherwise it is easy to get stuck on something while time goes by. Especially important is the writing of the tests and the testing of these. If the tests are not ready, the development is halted, and this costs a lot. The test cases have to be ready when the developer is ready with a part to be tested. [3]

The third aspect is the test development and the distribution of the work tasks. This is especially important if the team is large and there is a structure in the group. A project is often divided into smaller groups, and each of these

groups implements some functionality; these parts of the program are called modules. It is important that the testers work closely with the developers to achieve testing at an early stage in each module, so that the integration of the project runs as smoothly as possible. When the test team is scattered it is harder to communicate, and then the plan becomes even more important to avoid testing the same thing twice or missing something. [3]

The last aspect is when to stop testing. There are several issues to take into account when deciding that the testing of the product should come to an end. Often the decision depends not only on the product but also on time and cost. The testers have to ask themselves: what do we want to achieve, and can we achieve it within the given time and at a reasonable cost? Knowing this, criteria for when the tests are complete have to be decided.

The planning should result in a written test plan that all the testers agree on. How extensive this document should be depends on whom it is for. Most of the time it is an internal document used solely by the testers, and for this purpose it does not have to be very extensive. Sometimes the document, as well as the tests, is shipped with the product to the customer, and then it must be very extensive and contain details of all the aspects mentioned above. [6] A good approach is to let someone else look at the document before it is approved, to see that all needs are covered. There is no right or wrong regarding how much should be included in the test plan. Some testers decide to include just the planning mentioned above, while others extend the test plan to cover design, specifications, test bugs and more.
The important issue, however, is to work out a common view of how the process should be developed, so that all testers agree and work toward the same goal.

Test Report

To get a structured and effective response to tests it is important to write test reports that can be understood, and maybe even appreciated, by the developers. To do this there are certain standards that one should follow. Normally every company has made up its own standards, but most of them include the simple rules described in the next sections. [8]

A brief description of the system and the particular software that is tested should be included, as well as the purpose of the document and the contents of the report. This part of the document is often very short and is only meant to give the reader a general view of the system, the software and the tests. [8]

An overview of the testing itself also needs to be included. A short evaluation of how the test went should open this part. Then the discrepancies from the expected result and the limitations of the test should be shown. The overview also includes a part that describes what could not be tested and why. For every discrepancy three main things should be evaluated: the effect on the system, how much work there is to correct the

discrepancy, and a recommendation on whether the problem needs to be solved, or more correctly the severity of the problem. [8]

A more detailed test result also needs to be included, so that the developers can follow the tests when they correct the faults. This part also gives managers a better chance to follow the results of the testing. The test log should be included in this material. A test log shows the tests in chronological order and also where each test was performed. A description of what hardware and software were used, and of who performed the test, should also be included, to make it easier to deal with the problem and to recreate the faults at a later time. [8]

3.3 Validation and Verification

When a component or a product is ready it has to be checked to make sure that it works the way it is supposed to; the product has to be tested. There are two important terms to remember: validation and verification.

- Validation: are we building the right product?
- Verification: are we building the product right?
Boehm

Validation

Validation shows that the software does what the customer expects, as distinct from what has been specified. This means that the purpose of validation, besides satisfying the customer, is to bring the SRS as close as possible to the customer's expectations, so that the developed product needs less revising. [7] A good way to do this is to include the customer as early as possible. The customer could, for instance, revise and approve the SRS to get it as close as possible to what the customer wants. But it is equally important to keep the customer updated during the development, to see that the implementation of the demands in the SRS fulfils the customer's expectations.
Sometimes it is not possible to include the customer, either because there is no specific customer, since the product is not developed under contract, or because the customer does not want to, or does not have time to, be included in the actual development. In the first case it can be a good idea to select a fictional customer, someone who can act as the customer; this is ideally one of the persons who decided on the making of the product, since he knows what the product should do. In the second case it is important to make clear to the customer that if the product is to be as good as possible, he has to participate to some extent during the development. Otherwise the product might not be what he expects, and a lot of time has to be spent on revising, which delays the delivery of the product. [5]

The last part of the validation is often done when the product is ready for shipping, in the so-called acceptance test. This test is covered in more detail later, but it is essentially a test of the product, preferably done by the customer before the product is released or shipped, to see if the developed product

complies with the expectations of the customer. To get the best possible evaluation of the product, it is vital that this test is done by the customer. Always remember that it is the customer who will use the product. If the customer is not satisfied, then the product is not ready for shipping, even if it, according to the developers, complies with all of the demands in the SRS. This type of conflict is often due to different interpretations of the specification and can often be avoided by involving the customer at an early stage. [5]

Verification

The developers build the product based on the demands stated in the SRS. To really know that the product complies with all of the demands in the SRS, one has to verify this throughout the development process. Often the project is divided into phases, and after each phase it is important to verify that the product meets the demands set up in the SRS. The SRS should state what is to be achieved in each phase, and the verification after each phase should confirm that this has been met and constitute the base for the next phase. It is very important that the verification work is continuous; otherwise it is easy to continue the development on false grounds, and correcting this at a later stage is very expensive. [4]

The verification can be done in many different ways. Two of the more common are reviews and inspections. Methods for reviews are covered later; here we focus a little more on inspections. The developing company itself often performs the verification. Sometimes the company has its own test department to help with the task, and sometimes not. To improve the product it is important to include both developers and independent people in the inspections; these people can be either test personnel or developers who developed another function. Many different strategies for inspections are available. Below, three of them are described.
The first is the formal inspection, which is led by someone other than the producer of the document. This approach requires all of the participants to prepare by reading and reviewing the document. At the meeting, all of the defects found during the preparation are discussed, and the faults that are important are put into a report that forms the basis for the developers' rework. No solutions to the problems are discussed at the meeting.

The second is the walkthrough, which is led by the producer. This form requires no preparation from the participants. At the meeting the entire document is walked through by the producer, and during this the participants discuss and search for faults in the document. This often proves to be a very good approach, and surprisingly many faults are found during walkthroughs.

The third is the buddy check, which is the least formal of the methods. In this method the document is handed to someone other than the producer, and this person reads the document thoroughly. Buddy checks are the easiest approach, but also the one that finds the fewest faults. Still, it is better than no checking at all. [5]

IV & V - Independent Verification and Validation

Independent verification and validation includes a third party besides the customer and the producer: a testing company. This testing company performs all of the validation and verification, and reports only to the customer, which gives the customer a great advantage in finding both faults and specification faults. There are several benefits in using an independent testing company. The testing is often more thoroughly performed, as the testing company is completely independent of the developing company. This gives increased protection from faults, identifies risks associated with the software, and leaves fewer latent faults. Unfortunately, independent testing companies cannot be involved in all projects, as they often are very expensive. The companies that perform this type of testing are few and their workload is often heavy, so they charge a lot for their services. Companies therefore often use this only for very critical programs, such as flight control software for space shuttles used by NASA. [3, 10]

3.4 Testing Methods

Static and Dynamic Testing

In testing there are two different approaches to examining the code. Static testing is done without executing the code; instead you go through the code manually to find faults. Dynamic testing is done by actually executing the code and looking for faults. [6]

In static testing more of the code can be covered, as you check the code without executing it. This means that, for instance, syntax faults are eliminated. Compilers can help a great deal with this part of the testing, as they analyse the code without executing it. Static tests can also be performed by human testers in the form of inspections, walkthroughs and buddy checks, as described in the previous section. When this is done it is important that there is a discussion between the participants, to find as many faults as possible.
A problem with this type of testing is that it can be really hard to check the actual demands in the SRS; instead you mostly find logical faults. Static testing must therefore be followed by dynamic testing to actually test the demands. Another problem is testing complex programs with this approach: you can often find the basic faults, but it can be very hard to find code and logical faults in the more complex parts of the solution. [1, 5]

In dynamic testing you actually execute the code as you test the program. This means that you can design the tests to cover all of the demands stated in the SRS. This is a formal review of the product and one of the most powerful tools for finding discrepancies between the product and the customer's demands. There are different approaches to dynamic testing. The tests can be written either without looking at the code, called black-box testing, or by analysing the code, called white-box testing. Dynamic testing can find logical faults much better than static testing, but it also has clear disadvantages. One problem is that it might not cover all of the code: how do we know that all branches, statements and cases have been executed when we only test the

demands (black-box)? This approach might generate faults at a later stage, because some of the code was not covered by the tests. In a complex program it can be very difficult to write tests that actually cover all of the code. Some dynamic approaches do test all the code, but these methods often do not cover the demands, only the code (white-box). Much of this testing can be automated with programs that execute tests and generate test reports, so this approach might actually require less work than static testing. [1]

To get good and thorough testing it is important to use both static and dynamic testing, even if this takes extra time. As described above, both static and dynamic testing have drawbacks, but at different points. If you combine them you often get the best result: a combination of the two can find both logical and syntactical faults and eliminate far more than either can on its own. A recommended approach is therefore to try to find time for both methods in the test phases, to be able to find as many faults as possible and get the best possible product.

Black-Box Testing

One method for testing is black-box testing. Black-box testing tests the specification without any knowledge of the implementation. This means that the only criterion for success is whether the result is what it should be according to the SRS. The input is chosen very carefully to get the desired result. For each demand a test is designed, and the output is compared with the expected one. If there are no discrepancies, the product is considered to be correct. [1]

Various flaws can arise when using this method. There is no way to be sure that all of the code is executed and that all of the cases in the code really are tested. This means that faults can arise at a later stage, when the same demand is exercised but under different conditions.
Another problem is that it can be very difficult to choose the correct input for the tests. There are several methods to facilitate this, for instance equivalence partitioning and boundary value testing. But even with their help it is not easy to write the best possible input without any knowledge of the code. A good set of inputs has to test as much as possible without actually writing tests for all inputs. For instance, in a program that is designed to accept only letters as input, one does not want to test all letters and all non-letters as inputs. [1]

Equivalence partitioning is one way to find good inputs. The method is based on dividing the inputs into groups, so-called equivalence classes, where all the inputs make the program behave in a comparable way. The classes can be derived from the specification. When the classes are ready, the tester chooses a few inputs from each class and tests these. If these inputs give the correct output, then all of the values in the class are considered to give the correct output, and they are not actually tested. The chosen inputs should preferably lie in the middle and on the boundaries of the classes. This method dramatically reduces the number of tests that need to be performed. [7]
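The idea of equivalence partitioning can be sketched in a few lines of code. The function under test, its valid range and the chosen sample values below are hypothetical illustrations, not taken from the thesis:

```python
# Illustrative sketch: equivalence partitioning for a hypothetical function
# that accepts ages in the inclusive range 18..66.

def accepts_age(age):
    """Hypothetical system under test: returns True for valid ages 18..66."""
    return 18 <= age <= 66

# Three equivalence classes derived from the (assumed) specification:
# below the valid range, inside it, and above it. Only a few representative
# values per class are tested; the rest of the class is trusted by analogy.
equivalence_classes = {
    "below_range": [-5, 0, 17],    # expected: rejected
    "in_range":    [18, 40, 66],   # expected: accepted
    "above_range": [67, 90, 150],  # expected: rejected
}

def run_partition_tests():
    """Return, per class, whether every sampled input behaved as expected."""
    results = {}
    for name, samples in equivalence_classes.items():
        expected = (name == "in_range")
        # One failing sample invalidates the whole class.
        results[name] = all(accepts_age(x) == expected for x in samples)
    return results
```

Nine test values stand in for the entire integer input space, which is the reduction in test count that the method promises.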

Boundary value testing is another method used in black-box testing. In this method approximately five values are chosen for each allowed range: one valid value in the middle of the range, and two values at each boundary, one just inside and one just outside the allowed range. If these inputs give correct outputs, then all values in the range are considered to give the correct output. The reason that more values are tried at the boundaries than inside is that the chance of finding faults is statistically higher at the boundaries than inside the allowed range. This is because the developer often concentrates on making sure that the program works with correct inputs rather than with incorrect ones. When choosing the allowed ranges, equivalence partitioning can be an excellent starting point. [6]

Of course there are advantages with black-box testing as well. Black-box testing is a very simple approach from the tester's point of view. All the testers have to do is study the specification and write tests to check that every demand is fulfilled. They do not have to worry about the implementation at all; they can concentrate completely on the functional demands, and therefore this approach is also sometimes called functional testing. When discrepancies are found, this method is often much more comfortable for the tester than for the developer. In fault reports written with this method, it is often not known what caused the fault, only that there was a fault. This makes revising more difficult, as the developer to a greater extent has to search for the fault in a much wider part of the product, especially if the fault occurs late in the development process. Black-box testing is perfect for checking a thorough specification to ensure that the customer's demands are fulfilled.
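The boundary value selection described above can also be expressed as a small sketch; the range limits and the function under test are hypothetical, chosen only to illustrate the five-value pattern:

```python
# Illustrative sketch: generating the classic ~5 boundary test values for a
# hypothetical inclusive input range lo..hi, and running them.

def boundary_values(lo, hi):
    """Just outside and just inside each boundary, plus one middle value."""
    return [lo - 1, lo, (lo + hi) // 2, hi, hi + 1]

def accepts(value, lo=18, hi=66):
    """Hypothetical system under test: accepts values in lo..hi."""
    return lo <= value <= hi

def run_boundary_tests(lo=18, hi=66):
    """Run the boundary picks; return the values that behaved unexpectedly."""
    failures = []
    for v in boundary_values(lo, hi):
        expected = lo <= v <= hi
        if accepts(v, lo, hi) != expected:
            failures.append(v)
    return failures  # an empty list means every boundary case passed
```

Note that two of the five values deliberately lie outside the allowed range, reflecting the observation that faults cluster at the boundaries.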
But the method is much better at confirming that the demands in the specification are fulfilled than at finding all faults, due to the difficulty in deciding on input values. The method is fairly easy for the testers, as they do not have to read the developer's code, but on the other hand the revising can take longer, as it can be difficult to decide where the fault occurred.

White-Box Testing

When using black-box testing there are many faults one cannot detect, as the testers have no knowledge of the code. For instance, there could be deliberate mischief on the part of a programmer. An example of this is the following piece of pseudocode, which could have been written by a payroll programmer to ensure some compensation in case of firing. [1]

    if my employee ID exists
        deposit regular pay check into my bank account
    else
        deposit an enormous amount of money into my bank account
        erase any possible financial audit trails
        erase this code

William E. Lewis, Software Testing and Continuous Quality Improvement, CRC Press LLC

This type of fault requires another test technique to disclose, as the else-case will probably never be executed with black-box testing and a normal assortment of inputs. One method to detect faults like the one above is white-box testing.

White-box testing is also known as glass-box, structural or non-functional testing. The last name reflects the nature of the testing: the tests are built not on the functionality but on the implementation. When designing the tests, one looks at the implementation and tries to design tests that cover as much of the code as possible. The functionality that is tested is not the functionality in the specification but the functionality in the implementation. With this approach a lot of the code can be tested, and important parts of the code can often be tested in isolation. This means there is a greater chance of detecting faults in the code. It is also easier to be sure that the entire program really is tested, as the tests use the code rather than the specification as their base.

Not using the specification naturally also has drawbacks. Even if the code works fine as it is implemented, one cannot be sure that the program really works, as you do not know whether the implementation covers all of the specification. There can be demands that have not been implemented, or that have been implemented incorrectly. Of course it is much easier to design tests in white-box than in black-box testing, as you have the code, and the tests can then cover much smaller parts than in demand-driven testing. One also escapes the task of writing tests for abstract demands such as "the program has to be reliable".

There are several good techniques for writing white-box tests. When looking at the code there are a few simple rules to follow. One can try to cover as many of the actual statements as possible, as done in statement coverage.
Another approach is to try to cover all of the branches or conditions. Finally, a path analysis can be made to verify that all possible paths in the program have been tested. [6] When one makes a path analysis there is often an enormous number of ways to fulfil the conditions involved, which gives a practically infinite number of tests. Of course one cannot run an infinite number of tests, as there is not enough time. So instead the program is analysed from different points of view. [7]

- Statement coverage is the least expensive approach, but it also covers the smallest amount of complexity, as it only makes sure that all of the lines in a program have been executed without fault. Statement coverage is also known as line coverage. [6]
- Branch coverage is more complex, as it tests all condition statements for both the true and the false case. But it only makes sure that each condition evaluates as both true and false; it does not test all possible cases that make a condition true or false. This approach often gives a very good general view of whether the program works logically or not. [6]
- Condition coverage is used if a really thorough examination of the product is needed. In this approach all of the cases in a condition statement are

tested. This means that you test all of the individual cases that can make a statement either true or false. This is often a very time-consuming and expensive approach, and it is not used as often as branch coverage. [6]

This is an example of the three approaches described above.

    IF ( A < B and C = 5 ) THEN
        do SOMETHING
    SET D = 5

    (a) A < B and C = 5  (SOMETHING is done, then D is set to 5)
    (b) A < B and C ≠ 5  (SOMETHING is not done, D is set to 5)
    (c) A ≥ B and C = 5  (SOMETHING is not done, D is set to 5)
    (d) A ≥ B and C ≠ 5  (SOMETHING is not done, D is set to 5)

Cem Kaner, Jack Falk and Hung Quoc Nguyen, Testing Computer Software, John Wiley & Sons Inc

In statement coverage only (a) is tested, as this approach only makes sure that all of the lines are executed; to achieve this, the condition has to be made true by the tester. In branch coverage (a) is also run, to ensure that the true path has been tested, and one of the other three is run to make sure that the false path is tested; note, however, that only one of the remaining three is tested. In condition coverage all four are executed, to ensure that every combination that yields true or false has been tested.

As can be seen above, even for a single condition the number of tests is substantially larger for condition coverage, and though more faults are detected, they are often not numerous enough to justify the extra time and money spent on this testing. This all comes down to the nature of the program and how important it is to cover all possible angles in relation to the time and cost factors. Many companies actually require condition coverage, as they prioritise quality and reliability in the product, even though a lot of extra time is spent on testing. [6]

Regression Testing

The testing process leads to faults that have to be corrected. After the testers have documented the faults, the rewriting is up to the developers.
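The textbook fragment above can be mirrored in runnable form; the concrete input values chosen for the four cases are our own illustrative picks:

```python
# Runnable sketch of the coverage example above (names follow the text).

def fragment(a, b, c):
    """IF (A < B and C = 5) THEN do SOMETHING; SET D = 5."""
    something_done = False
    if a < b and c == 5:
        something_done = True   # stands in for "do SOMETHING"
    d = 5                       # SET D = 5 always executes
    return something_done, d

# The four cases (a)-(d), written as (inputs, expected something_done):
cases = {
    "a": ((1, 2, 5), True),    # A < B,  C = 5
    "b": ((1, 2, 9), False),   # A < B,  C != 5
    "c": ((3, 2, 5), False),   # A >= B, C = 5
    "d": ((3, 2, 9), False),   # A >= B, C != 5
}

def run(case_names):
    """Run a chosen subset of the cases; raise on any unexpected behaviour."""
    for name in case_names:
        (a, b, c), expected = cases[name]
        done, d = fragment(a, b, c)
        assert done == expected and d == 5
    return True

# Statement coverage needs only run(["a"]); branch coverage needs (a) plus any
# one of (b)-(d); condition coverage runs all four cases.
```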
The developers rewrite the code, and the program has to be tested again to make sure that no faults remain, but also that no new faults have been introduced during the rewriting. [4] Even if the product passes the tests, the code may have to be rewritten, or new functionality may have to be inserted. The developers then insert new lines into code that has already been tested and hopefully approved. When this is done, new tests have to be performed, not only to cover the new code but also to retest the old code, as the recently inserted parts may have affected it. [3]

Both of these cases require new testing. At the very least the old tests have to be run again, and preferably new tests should be added as well. It is important to make sure that old faults have disappeared and that no new faults have occurred. This type of testing is normally called regression testing.
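The core of regression testing, rerunning every previously passing test after each change, can be sketched minimally; the module under test and its recorded suite are hypothetical:

```python
# Illustrative regression-test sketch: every previously passing test case is
# recorded and rerun in full after each change, so old faults cannot silently
# return when code is rewritten or extended.

def add(a, b):
    """Hypothetical module under test."""
    return a + b

# Recorded suite: (inputs, expected output). New cases are appended over
# time; old ones are never removed.
regression_suite = [
    ((1, 2), 3),
    ((0, 0), 0),
    ((-5, 5), 0),
]

def run_regression(func, suite):
    """Rerun the whole suite; return the list of failing cases."""
    return [(args, expected) for args, expected in suite
            if func(*args) != expected]
```

An empty result means the change introduced no visible regressions; a non-empty one points directly at the cases that broke.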

To simplify matters when rework is done, the work is often divided into phases. When one phase is finished, it is tested and approved. If rework is done to something in a specific phase, then all of the tests in that phase have to be redone, but tests from previous phases should not have to be repeated, as nothing in those phases has been altered. Sometimes, especially if the specification changes at a late stage in the development, extensive rework is needed and the whole architecture of the system has to be redone. If this is the case, the phase structure often collapses, and all of the tests from all the phases have to be performed again to test the new structure. This is called a full regression test, and it is often both expensive and time-consuming.

3.5 Low-Level and High-Level Testing

When testing software, the test process is often divided into two main parts: low-level testing and high-level testing. Low-level testing is performed on the individual parts before they are integrated into a software program, to test the logic in these parts individually. High-level testing is performed when the different modules are ready; it begins with integration testing, continues with system testing, and ends with acceptance testing before release. There are several reasons for dividing the testing into these two parts. One of the more significant is that the testing is very different in the two parts: low-level testing often only tests the logic and that the code works, while high-level testing tests functionality and checks that the demands in the specification are fulfilled. [5]

Low-Level Testing

In low-level testing the modules are tested individually, to see whether they can logically perform the tasks they have been assigned and whether the code works properly. [5]

Unit Testing

Unit testing, or module testing, is the lowest form of testing performed on a regular basis and documented in a report.
In a project the software development is often divided into small parts that are implemented separately. These small parts are called units or modules and constitute the building blocks of the product. The parts are often developed by different developers, and in many companies the only testing done on these modules is simple, unstructured fault checking performed by the developers themselves. No doubt the developers should do the testing at this level themselves, but it is very important that the testing is done in an organised and orderly fashion. It is therefore important for every company or development team to create guidelines and tools to improve and extend the unit testing. This also ensures that all modules receive the same testing and have roughly the same quality once they have been tested. If the modules are thoroughly tested, integration and system testing become much easier and require much less time.

Unit testing concentrates on the logic in the modules and on code faults, as a module often covers only a small part of the program and there is no comprehensive functionality to test at this level. Much of this testing can therefore be automated, for instance by writing a communication program that offers the opportunity to write simple tests for the individual modules. [1]

Another important feature to test, for modules that require input and output, is the communication with other modules. This is done with the help of stubs and driver modules. The driver modules send information to the functions in the tested module, while stubs act as modules that receive information from the tested module, or more precisely from a function in the tested module. Of course this does not test any functionality in the calls, but the calls themselves are tested, and it is also established that the module can receive and use a correct input and send a correct output. [5]

It is not necessary that the programmer tests his own code. But at this level it can often save time, and it may even be that more faults are found, as the code is very limited and knowledge of the code helps when searching for logical and syntax faults. Another approach that is often used is to let another programmer look at the code and search for faults, a so-called buddy check. This has often proven to be equally effective, or even better, at this level than leaving the testing to the programmer himself, although it often takes more time. Whoever performs the test, the most important thing to remember is that this type of testing is best performed in a structured and automated way, as opposed to "happy testing" where each programmer tests whatever he wants.
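The driver-and-stub arrangement described above can be sketched as follows; the module, its logger dependency and the message format are hypothetical examples, not part of the thesis's tool:

```python
# Illustrative sketch of driver and stub testing for a hypothetical module.
# The module normally sends output to another module (a logger); here a stub
# records the outgoing calls so the driver can verify both the returned
# result and the communication itself.

class LoggerStub:
    """Stub: stands in for the module that normally receives the output."""
    def __init__(self):
        self.messages = []

    def log(self, message):
        self.messages.append(message)

def process(values, logger):
    """Hypothetical module under test: sums its input and reports the result."""
    total = sum(values)
    logger.log(f"processed {len(values)} values")
    return total

def driver():
    """Driver: feeds known input to the module, returns result and messages."""
    stub = LoggerStub()
    result = process([1, 2, 3], stub)
    return result, stub.messages
```

The driver exercises the module's inward interface while the stub captures its outward interface, which is exactly the pairing the text describes.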
[3, 5]

When writing tests at the module level there are two main questions to consider: does the written code work as intended, and is everything that is supposed to be in the module actually there? With these two questions as a starting point, the module tests should be designed so that both can be answered positively with as much confidence as possible. [5] Extensive and thorough testing at this level can often be one of the best investments made during the development of a product. Sadly, in most cases this testing is done routinely and under a lot of time pressure, because the module deadline includes the testing. It is usually better to separate the testing from the deadline and assign specific time to it. If the testing is done well at this level, a lot of test time and rework time is saved in later stages of the process. [6]

Most companies today have realised the importance of testing. Many have independent testing departments, and the products they develop are checked much more thoroughly than before. But much of the time spent by the testing department could be saved if companies also recognised module testing and assigned resources to it. [6] Reading about testing departments and their work today, one can conclude that they concentrate almost solely on integration, system and acceptance testing. None of these will go smoothly if the unit testing is poorly performed. So even though many companies assign resources to testing and get a better product, the time and money spent on testing could be dramatically reduced if they gave unit testing more time and developed better tools to support and improve it.

High-Level Testing

When the modules are ready it is time to integrate the program. Once this is done, extensive testing has to be performed to verify that the program does what the specification requires. This part of the testing process is called high-level testing, and it is usually divided into three parts. Integration testing verifies that the modules can work together: as the program is built by combining the modules, a lot of integration testing is needed to confirm that all modules cooperate and that none of them disturbs the others. [1] After integration testing, when we know that the modules interact flawlessly, system testing begins. This part of the test process should establish that all of the demands in the specification have been fulfilled. [1] When it is established that the program performs according to the specification, it is time for acceptance testing. Here the customer is involved in approving the product, which means that the product is evaluated against what the customer wants. If the customer is satisfied, the product is approved. What the customer wants should not differ much from the specification, but unfortunately it sometimes does, and then some or even a lot of rework may be needed. [1] When the product has passed through all of these tests it is ready for release to the customer.

Integration Testing

When the modules are developed and tested, it is time to integrate them into a complete software program. At this point several problems can arise: it is not certain that all of the modules work together.
There may be input-output problems, problems with shared memory, or other issues; these are resolved during integration testing. [7] When the modules are integrated there are three main strategies to choose from: top-down, bottom-up and big bang. The first two represent structured ways of building the program by adding modules sequentially, while in the last all of the modules are connected at once. Depending on the extent of the unit testing, the most appropriate method can be chosen. [3] The goal of integration testing is to make all of the modules function together as a program. The testing is therefore focused on the interaction between modules, not on the functionality itself. When designing tests at this level, the specification should be used to get an idea of how the modules interact to fulfil the demands, but most of the tests come from the code and from the documentation created by the developers, who have defined how the modules should interact and what the interaction should accomplish. [1, 3] When all of the modules have been added and work together as well as individually, this type of testing is finished, and all testing efforts are concentrated on the functionality defined in the specification.

Incremental Testing: Top-Down

Top-down integration is the first of the two incremental ways of doing integration testing. It starts at the top of the structure and integrates the top level first, by testing the top-level module against stubs that generate input and receive output in relation to the tested module. Stubs are simulated modules used solely for testing. They are most commonly written by the testers and simulate the modules that interact with the tested module; their implementation is kept to a minimum, including only the absolutely necessary code. If the first module works, another module, either at a lower level or at the same level depending on whether depth-first or breadth-first integration is chosen, is tested together with the already tested modules and stubs. Finally all of the modules have been added and the whole program works. The method is illustrated below. [1, 3, 7]

Figure 3.1: Top-down testing

When adding a new module in incremental testing, two types of testing are needed. First, it has to be verified that the newly added module works with its neighbouring modules. Second, all of the previously integrated modules have to be retested in regression testing, to make sure that the new module has not introduced new faults into them. This is common, for instance, when modules use shared memory. [3] The advantages of top-down integration are that large design faults are found at a very early stage.
The program can also be demonstrated as a whole at a very early phase, even though the functionality might not yet work. This is very useful for graphical programs, where the graphics can be shown almost from the start using this approach. Another advantage, especially for the testing crew, is that integration testing can start very early, long before all of the modules are ready: the higher-level modules can be tested as soon as they are ready, provided that the developers design and create the modules in the correct order. [1, 3, 6, 7]

The disadvantage is that a lot of time has to be spent writing stubs that are not actually part of the program. At first glance, writing the stubs might not look too hard, but there can be several difficult problems to solve, and it can be very tricky to get all of the inputs and outputs to work as intended. This time might not be available in the development schedule. Another important issue is that many essential calculations are done in the lowest modules, and these are not tested until the end. Faults caused by these modules can force a redesign of the project, and then all of the work spent on earlier modules has to be redone. [1, 3, 7]

Incremental Testing: Bottom-Up

Bottom-up integration is the second of the two incremental ways of doing integration testing. It starts at the bottom of the structure and integrates the bottom level first. To test the functionality in the modules, there have to be drivers at the level above the tested module. These drivers provide input to, and receive output from, the tested module. Drivers are simulated modules that interact with the tested module from a higher level; they are most commonly implemented by the testers and contain only the code necessary to perform the interaction. When the first module is ready, another is added and new drivers are designed for it. Finally all of the modules have been added and the whole program works. The method is illustrated below. [7, 3]

Figure 3.2: Bottom-up testing

For each new module that is added, regression tests on the previously added modules may be needed, to check that the newly added module does not create any problems, for instance with shared memory.
When adding modules at a higher level this is not as important as in top-down integration, but for some modules it might be necessary. [3]

A few advantages can be seen with this approach. One is that modules at the lower levels often contain the more specific, computational code, and if this is tested early on, it can be established at an early stage that it works. In many programs the higher-level modules are mostly used to provide communication and add data, so if the calculations work, the other parts can be solved relatively easily; of course, this is not the case in all programs. Just as in top-down integration, the integration testing can begin long before all of the modules are ready. This may actually be even easier than in top-down, since the lower-level modules are often the ones developed first in most projects, as they tend to contain the most basic calculations. [1, 3]

On the negative side, there is no program to demonstrate until the last module is added. This can make it very difficult to reassure customers and management as the testing proceeds; it is always better if they can see that the work is progressing and that more and more of the program works. Architectural faults are not found until the end, and this can be devastating, as all or most of the program has to be redone if the architecture changes. It is therefore much better to find these fundamental faults at the beginning rather than at the end. Another problem with this approach is that it is often more difficult to write drivers than stubs. Sometimes the drivers become very large and almost resemble the actual modules, and then too much work is spent writing them. [1, 3, 6, 7]

Big Bang

Instead of using one of the incremental methods when combining the modules, the big bang method can be used. This is a very common approach, and for a small project it can work, but for larger projects it is not recommended.
In big bang integration all of the modules are developed individually, and when all of them are ready they are connected to each other at the same time and the program is tested. [5] Often extensive work is needed to get the system running and the modules interacting properly. With this approach you try to solve problem after problem until the whole program works. This can be very extensive and difficult work, since after one problem is fixed another can arise in already corrected code. This can lead to a loop in which all that is done is correcting faults and creating new ones. It is also very difficult to locate the origin of a problem when something does not work; much time is often spent tracing faults backwards through the code to find the root cause. [5, 6] Another thing to be aware of with this type of integration testing is that all of the code might not be exercised. If a fault in one part of the code causes some part of the program to be cut off, no calls are made to that part and its code never executes. Additional faults can then remain hidden in that part of the program without ever being tested. [5, 6] It should also be kept in mind that with this approach all of the modules have to be ready before the testing starts. This means that no integration testing can be done during development, which in turn means a longer total time for the development and testing processes.

One advantage is that no drivers or stubs need to be written. The time spent writing stubs and drivers is often estimated at twenty percent of the total coding time, and with this approach much of that time can be saved. Unfortunately, the time saved is often lost again, as it takes much longer to find and correct all the faults that arise when the modules are combined with big bang. For smaller programs the method can nevertheless be a good approach, since no time is spent on drivers and stubs, and smaller programs, i.e. fewer modules, often mean that fewer faults are found in integration testing. [4]

System Testing

When all of the modules are implemented, tested and integrated so that the communication between them works, it is time to perform system testing. This testing should establish that the developed product corresponds to the demands in the specification. If the process is designed correctly, a lot of effort has gone into writing the specification, which means that it contains many well-formulated demands. Every test is built on a demand, and when all of the demands are fulfilled, testing at this level is over. [1] Dedicated test personnel should perform this type of testing. They should not be connected to the developers, as that can make the testers less willing to run tests that are likely to reveal faults. Another important reason for external test personnel is that designing the tests takes a lot of time, and this time is often not available to developers. This type of testing is a very important part of development: faults missed in system testing are often not found until the product has been released, and then the cost of correction can be enormous.
In very small projects dedicated test personnel may not be required, since the developers have good insight into the whole product, and the increased cost of independent testers can be avoided. [5] Testing at the previous levels is of course important, but it is system testing that tests the product as a whole and not just individual modules or groups of modules. If the testing at the previous levels has been extensive and satisfactory, system testing is relatively easy and, above all, it can concentrate on what it really should test: the demands in the SRS. But if the previous testing has not been done properly, many faults unrelated to the SRS are found, and both this testing and the rework done by the developers take a lot of extra time. [1] When developing the test cases it is very important to cover all of the demands in the specification, so the design of the test cases should be based on the SRS. It is straightforward to take each demand and design test cases that check whether that part of the specification is fulfilled. Once it is known what to test, based on the SRS, the code can of course be examined in order to write good, effective tests; this depends on the type of testing chosen, black-box or white-box. The important thing, however, is to test all of the demands extensively, not how each specific test case is developed. [1]

Load Test

When the demands in the specification have been tested, it is important to check that they are fulfilled under all relevant circumstances. The program is of course not supposed to run under arbitrary conditions, but as long as the requirements on the program are met, it should be able to carry out all of its demands equally well. To verify this, the program is tested at its boundaries using several different types of tests, called load tests. There are three main types of load test that need to be performed: volume tests, stress tests and storage tests. [6] Volume tests are designed to check that the program can handle large amounts of input. For instance, if there is a point where the program reads a file, it is tested with a very large file. Another example is feeding an interactive program a reasonably large and steady input stream to see whether the temporary memory overflows; since many such programs buffer input in temporary memory and process it only when the input stops, problems can arise. The opposite also has to be tested: a very small or empty file should be used as input to see whether the program can handle that. [6] Stress tests, on the other hand, are mainly concerned with whether the program can meet the demands placed on its performance. For instance, can a word processor handle one hundred and twenty keystrokes per minute? How many printers can be connected before the program can no longer handle them? These tests should stay inside or at the stated limits, but a program should be able to handle everything it promises. If the program performs well in these tests, there may be room to extend what it is promised to handle in the performance specification delivered to the customers.
This type of testing is extremely important for distributed systems, as these often tend to perform worse under extreme conditions. [6, 7] Storage tests focus the testing on the demands specified for the computer running the software rather than on the demands on the software itself. How does the program perform with different configurations of internal memory, hard disk drives and processors? Can it run just as well with an extremely large hard disk drive and minimal internal memory? Again, all of the tests should be within or at the limits that the program's specification states as demands on the computer running the software. [6, 1] When the system has been accepted after thorough testing, both against the demands stated in the specification and against the different types of load tests, this type of testing has come to an end. The program should now be fully functional and bug free. It can be released from this testing phase, and only one important test phase remains.

Acceptance Testing

After system testing the program complies with all of the demands specified in the SRS. This may, however, not be enough to release the product. If the product is developed for a specific customer, or a group of customers, it is important to make sure that the developed product really is what the customer wants. The customer did of course agree to the SRS, but customers often have great difficulty expressing what they really want as concrete demands. This means that even if the product fulfils the demands in the specification, it is not certain that it is what the customer actually wants. Acceptance testing is used to check this. If there is a specific customer, he should perform the tests. If the product is developed for no specific customer, somebody should be chosen who can represent a typical customer; this can be hard, but it is essential for the success of the product. The tests should not be written by the customer alone; rather, professional test groups should write them under the supervision, and with the participation, of the customer. But the tests should be performed and evaluated by the customer, and if the customer is satisfied with the result of the acceptance test, the product is ready and can be released. [3] From the customer's point of view it is often preferable to design the acceptance test after the product has been developed, especially if the customer has difficulty describing what he wants. From a developer's point of view it is better to design the acceptance test when the SRS is written: if the final product passes the acceptance test, the product is ready and can be released, and if the customer then wants extra functions, or is not satisfied with a product that has passed the acceptance test, he has to pay extra for the changes.
The latter arrangement is becoming more and more common today, and customers are therefore often much more thorough when stipulating the demands on the product. This also helps the developers, since they can develop the product more directly towards the acceptance test, which makes the product smaller and more focused. [6] There are two types of acceptance test that are common today: alpha tests and beta tests. Alpha testing is used in the initial phase of acceptance testing and is most commonly performed at the developer's site, but by the customer. This is done to get feedback on the product from the end user, but without faults relating to the product's true environment. The faults found are often user-interface related, or things the customer had not thought about when approving the SRS. [5] Beta testing is performed outside the development company, by customers or potential customers, and puts the product in its proper environment. Here new problems arise, either to do with how the product works in the true environment, or problems detected by the personnel who work with the product; there can sometimes be differences between what the managers and the workers want from a product. Beta testing is often extensive and can therefore continue for some time, but this only improves the quality of the product and should not be viewed as wasted resources. Even though it is important to have some kind of deadline for the beta testing, the response from it should preferably be continuous and not consist solely of a final report when the testing finishes. This helps the developers, as they can start fixing bugs and faults as soon as they are reported. [5]

If there is no specific customer, alpha and beta testing can be difficult. To get the most out of these methods, the customer has to be interested in the product and give valuable feedback. If there is no specific customer, time has to be spent finding one or more potential customers with an interest in testing the product, so that as many faults as possible are detected.

3.6 Cleanroom Software Development

To reduce the time and money spent on testing, various alternative approaches have been created. These approaches often deal with how to minimise testing through planning, but there are also approaches that deal solely with the development process itself. Cleanroom software development is one of these. In cleanroom software development a specific method is used during development that reduces the number of faults, so that the testing process can be kept to a minimum. To achieve this, the method uses mathematical proof instead of debugging to show that a product is ready for shipping. [7] Cleanroom software development consists of five steps, all of which have to be followed to get a good product; when they are, most of the testing should be eliminated. The five steps are:

- Formal specification
- Incremental development
- Structured programming
- Static verification
- Statistical testing of the system

(Ian Sommerville, Software Engineering, 6th edition, Pearson Education Limited, 2001)

In the formal specification step, a state-transition model is designed to give the work a structured approach. The specification is produced using a very thorough and formal process.
The work on the specification is continuous and changes as the customer changes his demands, but this rework should not affect incremental steps that have already finished; finished increments are locked. [7] As discussed above, the development process is divided into clearly separated increments. Every increment should be as independent as possible from the others, so that a change in one does not affect the rest. The increments are decided with the help of the customer at a very early stage in the process. Thanks to this, the critical functionality can be implemented in the early increments and is therefore available for customer approval equally early; moreover, this gives the important increments more validation. Each increment is individually validated when finished. [7, 12]

In the cleanroom approach, structured programming is used. This means that only a limited number of programming constructs are allowed: sequence, if-then, if-then-else, while, do-while, for loops and case/switch. These constructs are used over and over again in a hierarchical structure. This gives programs a more correct and uniform structure and eliminates hard-to-read code sections, which often contain many faults. [7] Static verification is an extremely thorough process that statically verifies that the program works. This is achieved using extensive inspections, often complemented by substantial mathematical arguments proving that the output is consistent with the input. These arguments, or proofs, are of course much weaker than formal mathematical proofs, but if developed correctly they still give a good indication of the quality of the product. [7, 12] When all of the increments have been integrated, the product has to be tested statistically to see whether it meets the reliability demands. These tests are developed in parallel with the specification of the system. [7] Studies have shown that cleanroom development can be very effective and often produces better software than conventional approaches. But the approach should probably only be used by teams of experienced and highly skilled engineers, as it is very difficult both to specify and to develop in this way: the somewhat different programming style cannot be mastered as well by inexperienced or less skilled engineers, and the quality of the product then diminishes. The studies have also shown that the cost of projects developed with this method is not higher than that of projects developed with conventional methods. This is because most faults are found in the various inspections prior to execution, which helps reduce the costs.
[7]

3.7 Testing Tools

As products grow more complex and the development process gets more complicated, the need for testing increases. But testing these complex systems with manual methods is both time-consuming and expensive. To reduce the cost and time spent on testing, several test tasks can be automated. Almost the whole testing procedure can in fact be automated, but automation is usually concentrated on tests that are uniform across projects and performed on a regular basis. For these tasks a variety of more or less commercial testing tools have been developed, dealing with, for instance, coverage analysis, capture/playback and simulation. In every step of the development process there are good testing tools that reduce the time and money spent on testing and increase the effectiveness of the testing process, since more faults can be detected with these tools than with manual testing procedures. [5]

When to Use Testing Tools

When testing, there are several helpful tools that simplify and unify the testing procedure, and tools can often perform a large part of the tests. Automated tests in particular can be performed more efficiently and effectively with the help of a tool, but less automated tests can benefit as well: for instance, a tool can handle all of the communication with the program, so that all the tester has to do is write the tests into the tool and receive the output. [5] Another advantage of testing tools is that tests can easily be recreated and rerun, for instance in regression testing. However, testing tools today are mostly used for specific tasks found in every project, such as coverage analysis and complexity analysis. These tasks can be very hard to perform without tools, but are fairly easy for a computer. Because coverage analysis and complexity analysis are general tasks, tools to assist with them are developed commercially. If the tasks are not general, tools often have to be developed internally by each company, so that the tool meets that company's specific needs. The market for commercial testing tools is growing today, probably because more and more companies are becoming aware of the benefits of testing their software. Even though much of the work done by the commercial tools consists of basic tasks, there is still both time and money to be saved by the developing companies. [7] Switching to testing tools from other testing methods is not always as simple as it may seem. There is the question of what to do with the old tests: do we throw them away, or do we try to incorporate them into the new tools? A good approach is often to try it out: buy one or a few licences and use the tools in a pilot project. If the pilot project turns out well, more licences can be purchased; if it fails, the cost of just a few licences is not that large.
For a transition to testing tools to be successful, the personnel involved need a positive attitude and must realise that there will be some, or a lot of, extra work in the beginning to make the new tools work as intended in the company's development environment. [5]

Tools for Inspections and Reviews

The testing work done in the initial phases is often very static and can easily be supported by testing tools, so there is a wide range of tools available for this phase of the development process. The tools can help with, for instance, reviews, walkthroughs, inspections, functional design, internal design and code. All of the tools at this level work with the code: the code is analysed in different respects to identify possible faults. [5] Complexity analysis is one of the more important tool categories at this level. This type of tool identifies complex areas in the code, or more precisely parts of the code with a high likelihood of containing faults, i.e. critical sections. Most of the tools have built-in criteria for what kind of code generally contains faults, and based on this knowledge the code is analysed and certain parts are flagged as critical sections. The core of this type of tool is the way it decides which areas are critical: the more sophisticated this algorithm is, the better the tool can find the critical sections. There are several recognised methods for measuring complexity, for instance the McCabe cyclomatic complexity metric, where complexity is measured using the control flow of the program as a relative measure, and Halstead's Software Science complexity metrics, where complexity is calculated from a program's size expressed in terms of the number of unique operators and operands used. Both of these methods are implemented in several commercial tools for measuring the complexity of a program. [3] Another important feature is code comprehension. Tools of this type analyse dependencies in the code and locate, for instance, dead code, i.e. code that cannot be reached. They are helpful before inspections, or when it is important to really penetrate the code and follow all of the logic in the program. These tools often have elaborate graphical displays for following the logic, and they often execute all possible combinations of branches to see where each choice leads and then display the result. This kind of analysis is ideal for tools, as it is far too complex and time-consuming to carry out by hand. Other areas for tools in this phase are syntactic and semantic analysis; these tools locate syntactic or semantic faults that normally cannot be detected by, for instance, compilers. [5] An example of a commercial tool for this phase is Panorama by International Software Automation, Inc. It can analyse the complexity of a program and point out specific areas that have to be tested extra carefully, and it can also help with code comprehension by showing diagrams of logic and data flows at a low level. [13]

Test Execution and Evaluation Tools

When testing software there are both static and dynamic tests to perform.
In the previous chapter only static methods were discussed, as these are the only ones available at that stage. But there are also dynamic tests to be performed. These tests are carried out during execution to check that the program works as intended. Of course there are several helpful tools for this type of testing as well, for instance capture/playback and coverage analysis tools. [5, 1]

Capture/Playback

Capture/playback tools are a form of regression testing tool. When the testing starts, a recording starts as well. This recording captures the user events during the test, and when the test finishes so does the recording. With the help of the recording the exact same test can be performed again automatically, either to rerun the test or to check whether a change influenced the product and its performance. Capture/playback is an important feature and is included in many different testing tools, as it is often very important to be able to trace what happened when a fault occurred, or to recreate an exact chain of events when retesting software after corrections or changes. Tools that perform capture/playback are based on one of the following three principles: native/software intrusive, native/hardware intrusive or non-intrusive. [5, 6]

Tools that are native/software intrusive instrument the code by inserting pieces of code into the software to be able to perform the capture/playback. The problem with this approach is that it can be very difficult to evaluate how much the inserted code actually affects the performance of the program itself. On the other hand this is a very cheap and easy approach, as it does not call for any expensive or time-consuming work in creating the tool. This is by far the most common implementation of capture/playback. [5]

To reduce the influence on the code, a native/hardware intrusive approach can be used. In this approach the capture/playback code is separated from the actual software, but it still uses the same hardware. This approach has less chance of creating a faulty output due to problems with the intrusion from the capture/playback. Of course there can still be some effect, since the same hardware is used, and in a sensitive system this could have some small influence on the tested software. Native/hardware intrusive is more expensive than native/software intrusive, as the testing code has to be independent and be able to communicate with the tested software. [5]

The last approach is non-intrusive and, as the name suggests, this approach does not intrude on the tested software at all. In the non-intrusive approach the testing software is completely separated from the tested software, and it is also run on hardware separate from the hardware running the tested software. The major problem with this approach is the communication between the tested software and the testing software. As they are not run on the same hardware, extensive work has to be done in order to get the communication to work properly. On the other hand this approach does not affect the tested software at all, so the testing software itself can introduce no faults.
This type of approach is often needed if the tested software has extensive real-time demands, but for normal software it is far too expensive to develop to justify the extra security introduced. [5]

An example of a tool that performs capture/playback is WebSphere Studio from IBM. WebSphere Studio is a program for developing and testing web pages with interesting testing functions, including capture/playback. In this program a test suite is recorded and the recording is transformed into a script. This script can be run over and over again. The program also offers the possibility to change certain parameters in the script so that, for instance, different logins can be tested or different merchandise ordered. Only the parameters are changed; otherwise the entire test is performed in the same way every time using playback from the script. [15]

Of course there are also other types of regression testing tools that are not as comprehensive. These tools often only provide the ability to compare output between test runs. First the test script is run and the program saves the output; then a change is introduced, the test script is run again, and the new output is compared with the saved one. This simplifies the work of comparing extensive test suites and can save a lot of time. [1]

Simulators

Another important tool is the simulator. Simulators compensate for parts of a program that have not yet been developed, or for software that interfaces with uncontrollable or unavailable hardware devices. Simulators are often much harder to design to function with a variety of products. Instead they often have to be implemented for a more selective group of products, which makes them more expensive as well as commercially unavailable. Simulators can also be used to check the general performance and capabilities of certain software. They are frequently used in, for instance, telecommunication applications and networks. In these areas it is often important to be able to simulate, as the entire network or telecommunication application cannot be connected just to see whether the various parts work. [5]

Coverage Analysis

Coverage analysis is a very important feature when testing a product. This type of testing can detect problems related to the coverage of the code, i.e. how much of the code has been executed. The task is greatly facilitated by using tools, and coverage analysis tools are probably the most common tools commercially available. Most of these tools add probes to the code. Then the program is executed. After the execution the tool evaluates how many of the probes have been activated and how many times this has happened. The tools can also often show which parts, if any, have not been executed, and consequently which parts of the code have not been tested. As discussed earlier there are several different approaches when designing coverage analysis. Some of the tools have combined the different approaches while others have chosen to implement just one of them. It is therefore very important when using a tool to be sure which approach the specific tool has chosen. [6]

An example of a test coverage tool is BullseyeCoverage from Bullseye.
In this tool Bullseye has implemented two different methods for measuring coverage. Function coverage tells whether a function has been called or not. Condition/decision coverage is an extended form of condition coverage for really thorough measurements. The first method is quicker but not as thorough, while the second gives a complete picture of which code has been executed. The recommended approach is to start testing with function coverage as soon as possible and, once the program works, apply tests using condition/decision coverage. [17]

Memory Testing

Memory testing is a complex task, but a fairly easy one for a tool to perform. It is of course especially important in languages that do not offer memory management. Memory testing tools address memory problems such as reading and using uninitialised memory, problems with vector limits, and memory that is never deallocated. These problems can normally be very hard to check for using manual testing methods. Memory testing tools are often very complex programs. Mostly they insert some kind of statements into the tested code and then evaluate the results after execution. The structure for the evaluation can indeed be very complex, and normally multiple parameters are used when

the tool decides whether the software has memory problems or not. Many of the available tools give a really good overview of problems that most certainly would not be discovered without them. [5, 18]

Insure++ is a memory testing tool developed by Parasoft. The tool is very extensive but relatively easy to use. The code that is to be tested is compiled with a special compiler that adds statements to the tested code. After this the compiled software is executed. The tool presents a lot of valuable information on the memory used by the software, based on the information relayed by the statements inserted into the code. Insure++ can detect, for instance, memory leaks, memory corruption and memory allocation faults. It is also able to pinpoint exactly where a fault arises in the code, which is very useful in large software projects. [18]

Load Tests

As discussed earlier, load tests are needed to make sure that the program performs according to its specification even when the load is very high. Performance cannot be allowed to drop just because a lot of users are logged on, and this becomes more and more important as systems become increasingly distributed. It is not practical to test this with an enormous number of human testers; instead this type of testing is automated using a load testing tool. These tools can simulate a large load on the system, for instance many users or a lot of downloading. Many of these tools can also simulate different types of networks, for instance a WAN. [1, 11]

LoadRunner from Mercury Interactive is a load test tool. This tool can simulate thousands of users, each of which functions just like a real user. This means that every user puts a different load on the system and collects different amounts of information. The users put a load on both the server and the client to get as good a simulation as possible.
LoadRunner supervises the system during the tests and collects information on the performance in relation to the current load. This data is saved in a database and can later be extracted and evaluated to see whether the program met the demands. [11]

4 Present and Future Module Testing at EMP

4.1 Terminology

Definition/Abbreviation: Explanation
AMR: Advanced Multi Rate
ARM: Advanced RISC Machine, the main processor
DSP: Digital Signal Processor
Host: See ARM
IPC: Protocol for transfer between the ARM and the DSP
MsLog: Terminal program
LabView: Graphical Programming Language
PCM: Pulse Code Modulation
RS232: Protocol for data transfer

4.2 Background

The module testing at EMP today is often designed by each of the development groups themselves. Most of the groups have designed extensive test cases in which they really try to cover as much as possible in their tests. The platform used at EMP has two different types of modules that require and receive different types of testing. The modules located in the ARM processor are called host modules; they are easier to access and test, as there is an interface between the host processor and the outside, for instance through serial communication using RS232. The modules located in the Digital Signal Processor are called DSP modules, and these modules are harder to test as they cannot be accessed directly without introducing foreign hardware or software.

The modules located in the host are today mostly tested with internal test code that is compiled into the software and accessed through a terminal interface. The problem with this is of course that new code is introduced, but this is done in a controlled fashion and the code is clearly separated from the modules actually tested, so it should not influence the target code. This is a structured and easy way to test the target code, and a lot of complex test cases can be introduced into this type of testing. However, it can also easily become more of a system test than a module test, as a lot of modules are included. This is often required, as some of the modules in the host have some or a lot of dependencies on other modules located both in the ARM and the DSP.
The modules located in the DSP are today often tested using external hardware or software. Often the ARM and the modules located in that processor are removed and only the DSP code is compiled. Instead of the host, the DSP code is either linked to a test platform that handles signals to and from the DSP, or run separately on an isolated processor. If the code is run on an isolated processor the data is often probed and analysed outside of the DSP. When a platform is used, the data returned from the DSP can be analysed directly. Scripts are used to control the tests, and their clear advantage is that they are easy to modify, as no change to the code is made. This type of testing is very efficient, but it is hard for uninitiated personnel to understand and follow. Thereby the tests become available only to the developer, and this diminishes the control over the tests. The platform

is also a problem, as it is a very complex program that can, if handled incorrectly, introduce faults that are hard to trace. If data is probed it has to be analysed in a program outside of the test case, for instance MATLAB or some other analysis program. If the results have to be analysed externally the tests cannot be run automatically, which is preferred.

To get a good overview of the testing process currently available at Ericsson Mobile Platforms for testing the modules, a thorough investigation of the tests at the different development groups was made. Some of the groups were responsible for modules located in the host and some for modules located in the DSP. Below, a few of the module testing systems currently used for different types of modules at Ericsson Mobile Platforms are described.

4.3 Control Module (ARM-Module)

The tests are written as a standalone test module that implements Interactive Debug. The test module communicates both with Interactive Debug and with the module that is to be tested. All functionality is placed in the test module, which performs advanced tests of the module. On the computer, MsLog controls the flow of the tests. MsLog is a standard terminal program that can read from and write to the serial port. The tester executes a test by writing the command and any parameters in MsLog. The command is then sent through the serial port and received by Interactive Debug, which parses the string into a command and relays it to the test module. The test module then executes the command. In the tests the test module calls the module that is to be tested and, depending on the answer, the test is considered to have either passed or failed. The only answer given to the tester, however, is the printouts from the platform. A clear disadvantage is that the printouts come not only from the test module but also from lots of other modules.
This makes it very hard to trace faults unless it is known exactly what to look for. The testing of control modules could be considered functional testing rather than module testing, since it usually involves a lot of underlying modules, i.e. the modules that are to be controlled. A fault in one of these modules can be very hard to trace and can cause faults that cannot be traced using this type of testing.

4.4 Audio Processing Module (DSP-Module)

Audio processing modules use a complete test platform that is integrated with the host. The test platform makes sure that when a command is to be executed in the DSP, all necessary initialisations are done and the command is sent to the correct module. An answer is then sent back to the platform, decoded, and sent back to the tester. The tester uses scripts to design the tests. Salt script is used, and when the script is executed the commands are sent to the test platform via RS232. The answers are also sent back to the tester via RS232. This approach has the clear advantage that it is easy to modify the tests without recompiling the software. A disadvantage of the test platform is that it is very complex and very hard to monitor; a lot is not known to the tester. Besides the answers

sent from the DSP to the host, the output is also probed to make sure that it is correct. The output (PCM frames) probed from a channel in the DSP is evaluated solely using manual methods such as the human ear.

4.5 AMR Codec (DSP-Module)

These modules are not tested directly but rather through the tested module's interface in the host. This means that a test module is implemented. This test module supports Interactive Debug and contains all of the test cases. The test module communicates with the tested module's interface in the host. The interface then sets up and tests the DSP module. The DSP module performs whatever task is to be tested and sends an answer to the interface in the host. The interface then relays the answer to the test module, which evaluates it. The communication between the test module and a computer is performed using MsLog and RS232. The problem with this approach is clear: there is no direct contact with the tested module, and this might introduce faults. Another problem is that not all DSP modules have interfaces in the host, and in that case this approach might not work.

4.6 Baseband DSP (DSP-Module)

Specially designed software is integrated into the platform to control the communication with the computer and to initiate the tests against the module. This software is not delivered with the platform but is only present when the testing is performed. The software is basically an implemented protocol that codes and decodes the commands sent to the platform. On the computer side an interface designed in LabView is used. The tester has generated test sequences that are sent to the platform together with a command. The software in the platform decodes the command and calls the module, which executes the command using the specially generated test sequence. The module produces an answer that is sent back to the computer.
The interface then compares the answer with a predefined answer, and if the two match the test is considered to have passed. That the test passed can be seen in the interface. There is the possibility to automatically generate some sort of test report using the data sent back from the platform. It is also possible to run a series of tests and to loop tests. The test software was written very specifically for this type of module.

4.7 Design Discussion

In the host there did not really seem to be a problem, as most of the development groups use the same approach when testing the modules. The difficulty at this level is to specify the tests, as these modules use many other modules at lower levels in their functionality. The major problem is to isolate the location of the fault, i.e. in which module to find the fault. In the DSP there are a few different approaches to the tests. Some groups used systems they had created themselves, while others used old software that had not originally been designed for this purpose. This means that we would have to decide which approach would be the best. To do this we looked at the solution for the host and tried to see if it could work for the DSP as well. It seemed to be possible, provided that some additions were made in order

for the communication between the user and the host, and between the host and the DSP, to work. The problem with this solution is the protocol and the message handling between the host and the DSP, but with the current standard it should be possible to solve, even though an extra communication module has to be written for every module that is to be tested in the DSP. Most of the module tests in the DSP involved manual handling to decide if the test passed or failed, e.g. listening to an mp3 melody and deciding if the quality was good enough. This is not to be considered an automated test, and it is also not always possible to detect all faults this way. In most cases this could be solved by introducing the use of reference values, i.e. letting the module process predefined values and evaluating the result. This approach is also appropriate for regression testing, because it is easy to compare results with each other. One of the groups generated test reports automatically, and this was something that the other groups also wanted. Another important feature discussed with the groups was a connection to a database to be able to log the testing activity. One of the groups used an interface designed in LabView for the testing, and this seemed both easy to use and able to provide a good overview of the testing.

The preferred testing solution involved using a program called Interactive Debug. This program is used to send commands to specific modules in the platform using the standard RS232. The modules in the platform have to implement a specific interface to allow communication. Once this is done the modules can communicate with a PC. This means that an interface has to be introduced on the computer side that can handle both incoming and outgoing messages and is able to decode the messages sent by Interactive Debug. The messages being sent are text strings.
The test code is not introduced into the module actually being tested. Instead that code is allocated to its own module, the so-called test module. This module implements the specific Interactive Debug interface, and it is thus possible to call the module from the PC interface (see Figure 4.1). The test code in turn calls the methods in the module to be tested. As no new testing code is inserted into the tested module, the possibility of introducing new faults into the module is greatly decreased. But there are also disadvantages to this approach; the two most obvious ones are the lack of code coverage measurement and of probing in the tested code.

Figure 4.1 Testing of module in host

A problem with Interactive Debug is that it is only available in the host. This can be solved by using an additional module in the host when a DSP module is being tested (see Figure 4.2). All modules implementing Interactive Debug

must be registered as processes. This also concerns the processes using IPC functionality.

Figure 4.2 Testing of module in DSP

The test code has to evaluate whether the individual test passed or failed. All additional information relayed to the tester also has to be sent by the test code. This means that it is highly preferable that the developer writes the test code, as he or she knows best what kind of information is needed from the tests.

5 Software Top Level Design Document

5.1 Introduction

This document shows how the module test system is structured and divided into subsystems. The document itself is divided into two main parts. The first part presents the system from the perspective of the phone. This means that in this model the PC side is only considered as a black box receiving signals from the environment; in this case these signals come from the phone. None of the internal functionality on the PC side is displayed. The second part presents the system from the opposite perspective, i.e. with the phone as a black box without any internal functionality shown.

Terminology

Definition/Abbreviation: Explanation
ARM: Advanced RISC Machine, the main processor
Debug Printout: Module handling the printouts from the phone
DSP: Digital Signal Processor
DSP Comm DSP: DSP communication module located in the DSP
DSP Comm Host: DSP communication module located in the host
DSP Target Module: Module containing the code to be tested in the DSP
DSP Test Module DSP: Module containing the test code for the DSP tests
DSP Test Module Host: Host module containing the administration code for DSP tests
GOOP: Graphical Object Oriented Programming
Host Target Module: Module containing the code to be tested in the host
Host Test Module: Module containing the test code for host tests
IDbg: Interactive Debug (a module in the host)
LabView: Graphical Programming Language
Module Tester: The LabView application at the PC side

5.2 High Level Requirements Phone Perspective

System Architecture

The system is, as shown in Figure 5.1: System architecture from the phone perspective below, divided into levels of subsystems. This is done to facilitate the understanding of the design and the implementation. The highest level consists of only two modules, the PC and the phone. Module Tester is the only subsystem in the PC.

Figure 5.1 System architecture from the phone perspective.

There are two subsystems in the phone, the host and the DSP. These subsystems, in turn, contain the different processes. Communication is performed between each of these subsystems. Only the signals sent between the subsystems are discussed in this chapter. Additional signals sent out from any of the subsystems to the environment are not considered here.

The host (the ARM processor) and the DSP are started and initiated when the phone is turned on. Module Tester on the PC side communicates with the host in the phone via the serial port on the computer and on the phone. When a module is being tested in the host, the actual testing is done by the test module, which contains the test code. The test module invokes the target module, which returns values that are checked for correctness in the test module. There are two additional modules of interest during testing in the host: the Interactive Debug (IDbg) module and the Debug Printout module. Every command sent to the serial port from the PC is received in the host by the Interactive Debug process. That process transforms the string into a function call and invokes the right method in the test module in the host. As a consequence, the subsystems in the phone that receive signals from Module Tester have to know the format of the signal, i.e. have knowledge of the parameters that are being sent to them. The test code in the test module has to notify Module Tester when the test starts and ends. It is also necessary for Module Tester to know whether a test

passed or failed. This information is sent from the test code as special printouts that are made with the Debug Printout module.

When a module is being tested in the DSP, the testing procedure is different compared to when a module is being tested in the host. A test in the DSP consists of processing a frame of samples in an algorithm and comparing the processed frame with a reference frame. The comparison is made in the host. It is not possible to communicate with the DSP directly from the PC; this communication has to be done via the host. For every module that is tested in the DSP, an additional test module has to be included in the host. This test module is invoked from IDbg and sends the frame that is tested to a communication module in the host. The frame is sent on to the communication module in the DSP. When received in the DSP, the frame is routed to the intended test module in the DSP by the communication module in the DSP. The processed frame is sent back, via the communication module in the DSP, to the communication module in the host. This frame is compared with another frame in the host that is considered to be correct. The result of the comparison is sent back to Module Tester via the test module in the host, Debug Printout and IDbg.

Figure 5.2 Interactive Debug invokes methods in the test module

Detailed High Level Design PC

Module Tester is the only subsystem on the PC side. A more detailed high-level design is found in Appendix D1: Detailed high level design Module Tester.

Detailed High Level Design Phone

The phone consists of the two subsystems host and DSP. When host tests are performed there are only modules in the host, in contrast to DSP testing where modules are required both in the host and the DSP.

There are six different modules in the host. When host tests are performed, four of them are used (IDbg, Debug Printout, Host Test Module and Host Target Module). The Host Test Module and the Host Target Module are the central parts where the actual testing is performed. The IDbg and Debug Printout modules take care of administrative issues such as formatting the strings that are sent back to the PC. See Figure 5.1, where all six modules are shown; the modules that are only used when testing the host are shown with a light grey background.

When DSP tests are performed there are also four active modules in the host, but instead of the Host Test Module and the Host Target Module two other modules are used (DSP Test Module Host and DSP Comm Module). They administrate the DSP tests and handle the communication between the host and the DSP. See Figure 5.1, where these additional modules are shown with a dark grey background. A more detailed description of the different modules in the host can be found in Appendix D. The modules that are only used during host tests can be found in Appendix D2-D3, the modules that are only used during DSP tests in Appendix D4-D5, and the modules that are common to both kinds of test in Appendix D6-D7.

The central parts when testing the DSP are DSP Test Module DSP and DSP Target Module. DSP Comm DSP is an additional module that handles the communication between the host and the DSP. See Figure 5.1, where the modules discussed above are shown with a dark grey background. A more detailed description of the modules used in DSP tests can be found in Appendix D8-D10. Sequence diagrams that show different scenarios related to host and DSP tests can be found in Appendix D11: Sequence diagrams phone perspective.

5.3 High Level Requirements PC Perspective

System Architecture

At the PC side there is only one second level subsystem.
It consists of the Module Tester application, which in turn consists of all the LabView GOOP classes. Incoming signals to Module Tester should be strings. Since the type of the string (see Appendix D6: Detailed high level design Interactive Debug) is merged with the string itself, it is up to Module Tester to parse the string and find out what type it is. These are not signals in the usual sense; instead they are answers to signals that were sent to the phone from Module Tester.

Figure 5.3 System architecture from the PC perspective.

Detailed High Level Design PC

Module Tester is the only subsystem on the PC side.

Detailed High Level Design Module Tester

Module Tester contains the LabView GOOP classes. All signals that are sent out from or received by Module Tester should be in the format of strings. The communication class Com_Reader handles the communication to Module Tester and the communication class Com_Writer handles the communication from Module Tester. Each signal is sent to the IDbg process in the phone.

Figure 5.4 Signals are sent as strings.

A more detailed description of the LabView GOOP classes used in Module Tester can be found in Appendix D12-D20. Sequence diagrams that show how Module Tester handles different scenarios can be found in Appendix D21: Sequence diagrams PC perspective.

Detailed High Level Design Phone

The phone consists of the two main parts host and DSP. The PC sends strings to the phone that are received in the host. When modules located in the DSP are tested, signals are sent from the host to the DSP and back.

Detailed High Level Design Host

The communication that the phone performs with the environment goes via the host. It receives signals at the serial port and routes them to the right addressee. When DSP tests are to be performed, signals are routed to the DSP. The relevant signals are shown in the chapter Detailed high level design phone, which describes them from the perspective of the phone.

Detailed High Level Design DSP

The DSP does not have direct communication with the PC. Test cases that are executed in the DSP are administrated in the host. For a more detailed description of how the DSP communicates with the environment, see Detailed high level design phone.

6. Validation and Verification

The developed prototype can be divided into three major parts: the documents describing the processes for the ARM and DSP testing, the graphical user interface developed in LabView, and the platform code written in C. All three have been thoroughly tested using various methods. The graphical user interface is a prototype and therefore some faults still exist. Besides these faults, some functions are also missing, which makes the testing somewhat more difficult. Much of the testing was performed by us as developers, but in the later phases, especially for the graphical user interface, many tests were carried out by actual developers at Ericsson Mobile Platforms AB (EMP) as well as by the person responsible for system testing at EMP. This section describes how the tests were performed and what results were found.

6.1 The ARM and DSP Processes

The ARM and DSP processes are documents written to make it easier for the developers to make their testing modules work with the graphical user interface and thus support automated testing. These processes were developed in phases, each consisting of a small part of the document in which some feature is described. After each phase this text was evaluated and reviewed, both with regard to text quality and to the functionality it described. The written feature was then tested to see if it was possible to produce the required code with the help of the process. The textual review was done in a pair: after reading the document on our own, we discussed the faults and open questions we had found. The documents were then revised according to the review. We as developers did these small reviews on each feature by ourselves. When the full documents were ready, we gave them to test engineers at EMP for evaluation.
They mainly focused on the testing perspective of the documents, and the faults they found dealt not with the features but with the readability of the documents. We restructured the processes so that it became easier to find a specific part rather than having to follow the complete process. This was done because the engineers thought that users would come to use the processes as reference material as they learnt more about the processes and the product. To support this, the processes were given clearer breakpoints and headlines that were easier to understand. When the test engineers at EMP had reviewed the processes, they were given to actual developers at EMP. The developers were given the task of following the process and making the tests they wrote work with the graphical user interface. They did not receive any help from us, and they had to write down all difficulties and problems they found. The developers managed to write the test cases and incorporate them into the code in accordance with our process. Some small faults were found, because we had taken too much knowledge for granted when the processes were written. These points were clarified in the processes, and after this none of the developers had any problems with them.
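A test case written against the process and skeleton files might have roughly the following shape. All names here are hypothetical stand-ins for the commands supplied by the skeleton files; only the start/evaluate/report structure reflects what the thesis describes:

```c
#include <stdio.h>

/* Hedged sketch of a module test case following the test process.
 * The command names below are invented for illustration; the real
 * ones come from the skeleton files provided with the process. */
static void test_start(int n)  { printf("TEST %d START\n", n); }
static void test_passed(int n) { printf("TEST %d PASSED\n", n); }
static void test_failed(int n) { printf("TEST %d FAILED\n", n); }

/* Stand-in for the module under test. */
static int add(int a, int b) { return a + b; }

/* Returns 1 if the test case passed, 0 otherwise. */
int test_case_1(void)
{
    test_start(1);
    if (add(2, 3) == 5) {        /* evaluate the module's result */
        test_passed(1);
        return 1;
    }
    test_failed(1);
    return 0;
}
```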

The developers also wrote down some other problems. They found the processes too extensive and thought it was easy to make mistakes: a user who did not concentrate fully could easily miss a step. It was also pointed out that, for the automated test process to be incorporated into today's module testing, it must not demand too much time or work. To facilitate the developers' work and make the process easier to follow, we wrote more skeleton classes with explanatory comments, together with a short instruction manual describing how to use these classes when developing test cases. With the help of the skeleton classes and their comments, together with the short instructions, the developers found it easier to follow the process, and at the same time the time and workload were reduced. The developers then chose to use the skeleton classes together with the short instructions; the full process was only consulted when problems arose or when they wanted a more thorough description of some specific feature. When all of these changes had been incorporated, all of the testers agreed that the processes contained everything they needed, could not be wrongly interpreted by developers, and at the same time made it so easy to develop the tests that there should be no problem making use of the automated testing procedure.

6.2 The Graphical User Interface

The graphical interface has been developed in LabView with the support of GOOP, an object oriented way to implement LabView code. In GOOP the developers write classes containing a number of methods, each of which performs a specific task. Before implementing the graphical user interface a thorough design phase was performed, in which all of the functionality as well as the graphics were decided.
All of the classes were designed, as well as the methods and functionality they should contain. Then the development phase started. We chose to develop one class at a time, and each method of a class was developed separately. When a method was finished, its functionality was tested with all possible parameters. This testing was very thorough and extensive; LabView provides an excellent testing tool in which the tester can follow the flow of calls directly on the screen. When a method was approved, the next method was implemented. When all of the methods in a class were ready, the class itself was tested. Calls to the class were made simulating both the use the product was designed for and use that could possibly cause problems. Some discrepancies were found, mostly to do with timing faults; LabView uses a data flow in which it is easy to make mistakes when not all of the necessary variables are in the flow. When a class had been tested extensively and no more faults could be detected, the next class was developed. When all of the classes had been developed, a main class connecting the different classes was implemented. The integration testing phase was not without problems, as many of the classes did not function as well together as
they had done on their own. Some problems arose because we had missed in- or out-parameters for some functions. Other problems dealt with delays in LabView that did not work as intended. This phase was very time consuming, and we had to repeat the same tests over and over in long test suites to make sure we covered all possible angles. The tests were still done with dummy parameters instead of connecting to the actual platform. When the program worked as it was supposed to with regard to the communication between the different classes, this phase came to an end. So far, all testing had been performed by us as developers. Finally, the product had to be tested in the real environment. To help in our tests we made a platform build containing a number of different test cases, and then replaced the dummy parameters from the integration testing with the platform. Immediately, new problems appeared with both reading from and writing to the platform. We solved this by modifying our delays and fault checking in the serial communication. As soon as this was solved, we started the actual system testing. We came up with a number of scenarios to follow in our interaction with the program. Most of these scenarios dealt with common use of the product, while others dealt with possible fault cases or load tests. Since we had made an extensive module test and a thorough integration test, there were very few new faults in the system test, and none of them were critical. The faults that were found mostly concerned graphics and functionality: some functionality was considered important to include, and some graphical design was redone as it was considered illogical. When we had finished our own system testing, it was time to let other people test the product. The product was given both to system testing and to the developers at EMP.
No test instructions were given to the testers; they used the system in the real environment with their actual test cases. The system testing group did not object to the functionality or the reliability of the product, but they felt some important features were missing. Among the more important ones, they wanted the opportunity to save parameters for the test cases in a file. Most of the proposed changes were incorporated into the product. The testing was then redone, and the system testing group was content with the result. The developers started by developing test cases for the product and then tried the graphical user interface with their own test cases. As they had test cases that we had not taken into consideration during our design phase, since they did not come up in our survey, problems arose. Again, most of the problems dealt with the delays and error handling of our system: our error handler reported faults in some fault-free cases. To solve this we had to rewrite the whole error handler to deal with the developers' test cases. The testing of our product and the rewriting proceeded side by side for several weeks, and new releases that fixed faults were constantly sent to the testers. Most of the detected faults took only a small amount of time to correct, but some, like the rewriting of the error handler, are ongoing work, as it is not completely finished in our prototype. Almost all of the suggestions and faults found by the developers were addressed. Some of the suggestions for improvement were left without action, as they would have taken too much time. These are described in detail later in this section
to make it easier to rewrite in the future. The developers who have tested the prototype are very satisfied with the result, and the general feeling is that it improves the module testing and makes it both easier and quicker as it becomes automated. The procedures described above ensured that our prototype was thoroughly tested, and most of the faults were corrected. It is, however, important to remember that this is a prototype and that it requires more testing before release to real users. This testing should focus on reliability, which is not as good as it should be and was not something we focused on during testing. Some strange faults that occur with low frequency and at different places suggest that the product has some problems, but these could probably be solved with extensive reliability tests.

6.3 The Developed C-Code

The C-code developed in this product is limited to a few files that solely deal with the communication between the platform and the computer and the communication between the two processors in the platform. The developed C-code is designed to make it easier for the developers as they develop test cases and to make the communication protocol invisible to the developer: all the developer has to do is call a given function described in the test process. A very careful design phase made it clear which components needed to be written. The code was developed and tested first on the computer and then in the platform, and was then revised according to the faults found. Very few faults were found, probably because our solutions reused many files that had already been developed and tested. The files developed in C should be invisible to the user, and therefore we used black box testing where we only tested the calls and the results of the calls, which is how the user will use them.
We designed several test cases with different parameters and no discrepancies could be found. When we were satisfied that the code was thoroughly tested, we gave it to the developers to use while writing test cases. The developers used the C-code with ease and could not find any faults, other than the fact that the platform seemed to crash when very long printouts were used. This fault was investigated, and it is due to a limitation in the transfer protocol we decided to use. To save memory in the platform, printouts longer than 200 bytes cannot be printed as a whole; they have to be divided into smaller printouts. We revised our files accordingly and made sure that long printouts are divided in our files before they are sent to the chosen transfer protocol. This seemed to solve the problem, but a closer investigation shows that this solution could create new problems, as important printouts could be divided at the wrong places and thus be considered as two printouts, since other calls can make printouts in between. This problem is not solved in our prototype, other than that the process tells the testers to divide important printouts themselves so that they are no longer than 200 bytes. No other faults were found, even though the developers used our code very extensively and tested it with many different parameters. We consider the C-code tested, and the result was acceptable to all involved parties.
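The 200-byte limit and the chunking described above can be illustrated with a small sketch. Only the limit itself and the idea of dividing long printouts before they reach the transfer protocol come from the text; the function names and signatures are our own invention:

```c
#include <string.h>
#include <stddef.h>

#define MAX_PRINTOUT 200   /* transfer-protocol limit stated in the text */

/* Number of protocol sends needed for a printout of `len` bytes. */
int printout_pieces(size_t len)
{
    return (int)((len + MAX_PRINTOUT - 1) / MAX_PRINTOUT);
}

/* Copies piece number `i` (0-based) of `text` into `out`, which must
 * hold at least MAX_PRINTOUT+1 bytes. Returns the piece length, or 0
 * when no such piece exists. Each piece fits the protocol limit. */
size_t printout_piece(const char *text, int i, char *out)
{
    size_t len = strlen(text);
    size_t start = (size_t)i * MAX_PRINTOUT;
    if (start >= len)
        return 0;                    /* past the end of the printout */
    size_t n = len - start;
    if (n > MAX_PRINTOUT)
        n = MAX_PRINTOUT;            /* clamp to the protocol limit */
    memcpy(out, text + start, n);
    out[n] = '\0';
    return n;
}
```

Note that, as discussed above, such mechanical splitting can cut an important printout at the wrong place; the process therefore asks testers to keep important printouts under 200 bytes themselves.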

6.4 Requirements Verification

SRS1: The user interface should run on Windows NT or Windows 2000.
Test: The final version of the prototype was run intensively and with equal load on computers with both Windows NT and Windows 2000. There were no differences in the behaviour between the two operating systems; the prototype worked equally well on both.
Comment: The prototype requires Office 2000 to work properly. The prototype works with other Office packages as well, but the automated report generation does not work with Office versions earlier than 2000.
Left to do: Nothing.

SRS2: The interface should be self-instructive for the developers and implemented using LabView.
Test: The final version of the prototype was given to different developers without any instructions. The developers received some help with the installation; after that, they used the tool without any instructions. The time to complete the test procedure was very short: it took only about ten minutes for the developers to understand and familiarize themselves with all of the features in the prototype. After the test we interviewed the developers and made sure they had understood the entire program. No difficulties with the interface were detected during these tests. Most of the developers had ideas for improving the system, but all of them understood and were able to interact with the interface.
Comment: A manual has been developed for the interface and is provided with the final software. After the testing there did not seem to be any real need for a manual, as the learning threshold was very low: all of the developers managed to complete the test procedure within ten minutes.
Left to do: Nothing.

SRS3: The implemented test system should influence the tested module in the target system as little as possible.
Test: We have added code in the ARM processor and we also added one process.
The memory used by our code is minimal. However, the process we added sometimes allocates a lot of memory. This allocation should not
influence the other processes, as the OSE handler should guarantee this. We assigned a very low priority to our process to allow other processes to take precedence if necessary. The requirement is tested by running the platform with and without our process and code; the result is then reviewed with regard to memory usage and process execution using a debugging tool. No significant impact could be found.
Comment: Our process sometimes allocates a lot of memory, and it is important to make sure that all of this memory is available to the process. This is done when the process is initiated. It is important for the developers to remember this when running the test system, as the platform crashes if there is not enough memory.
Left to do: Optimise the code to further reduce its size. Rewrite the allocation method to allow larger allocations when using print methods in our process.

SRS4: Interactive Debug should be used for communication between the computer and the platform.
Test: Our interface uses only the serial port for communication with the platform. On the platform side we did not implement any receiving method but instead used the process Interactive Debug by including it in our code. From the platform we only communicate with the computer using methods available in Interactive Debug. All of our code, on both the computer and the platform, was thoroughly checked to verify that all communication goes through Interactive Debug. As no communication bypassing Interactive Debug was found, this requirement must be considered fulfilled.
Comment: Since no other communication protocol was implemented, we could not communicate without using Interactive Debug. This means that all communication between the computer and the platform in the current version of the program has to use Interactive Debug.
Left to do: Nothing.

SRS5: All testing modules should be implemented using C-code with support for Interactive Debug.
Test: The developers are responsible for this. A test procedure was developed and a skeleton file was written. The process and skeleton file were given to several developers, who tried to write test cases as described in the process. With the help of the skeleton file most of the developers managed to write the test code in the C language. All of the developers managed to write the test
cases, but there were problems with which processes had to be included and in which order. This was fixed by clarifying it in the test process.
Comment: We have developed a skeleton file to make the developers' work easier. With the aid of this file there should be no problems implementing the test cases.
Left to do: Nothing.

SRS6: There should be a standard when writing the test modules that includes number of tests, test number, test specification, parameter specification and test code with test start and test finish.
Test: A test process for writing the tests was developed. Both developers and independent test people at EMP reviewed this test process. It contains careful instructions on how to specify the number of tests, test number, test documentation, parameter specification and the test code with test start, test finish and all printout calls made between the two. In a test, several developers wrote the same test case using the same instructions. All of the developers who followed the test process specified the items mentioned above in the same way. This shows that the process makes it easy to add and remove test cases, as everyone uses the same syntax.
Comment: The developed test process is very extensive and could by some be considered too much to really take in. To mitigate this we made a short instruction manual containing only the most important things; on the other hand, it is much easier to make a mistake when some things are left out. To get the best result the full process should be used, as was done in the tests.
Left to do: Nothing.

SRS7: The testing procedure should be automated except for SRS21.
Test: When the test initialisation has been performed, no more interaction is required to run the tests. The only decisions left are to create a test report, to start a test with manual evaluation and to decide whether a test with manual evaluation passed.
If no manual evaluation is required and no test report needs to be written, the test cases can run indefinitely without any interaction from the user. This was tested using test cases with no manual interaction: when the initialisation was completed, the test cases were run until the platform battery stopped working. The tests continued
without interaction for the whole period. The same test was then run using manual evaluation, and no interaction was required other than starting tests and deciding whether a test case passed or failed. The number of tests was then increased to one hundred and the tests were run again; still no other interaction was required.
Comment: The testing procedure has been automated for module testing, as all of the tests can now be run in a stream without any interaction from the user. This means that the tests can be started and the developer can do other things until a test case with manual evaluation is reached. When this happens, the execution is halted until the user starts it again and decides whether the test case passed or failed. After the manual test case, the automated test procedure continues.
Left to do: Nothing.

SRS8: The user interface should only communicate with a module located in the host, regardless of whether the module being tested is located in the host or in the DSP.
Test: As Interactive Debug is used for communication, it is not possible to communicate directly with any modules located in the DSP processor. To facilitate the understanding of this, we have written a test process that describes in detail how the communication works and how test modules should be implemented.
Comments: With our choice of communication solution this requirement is always met. The test process describes the communication with the DSP in detail.
Left to do: Nothing.

SRS9: All communication between host and DSP should be performed according to the current standard for IPC.
Test: We have written two communication classes, one on the ARM side and one on the DSP side. These two handle all of the communication between the host and the DSP.
The classes have been carefully reviewed by us, an IPC expert and other people at EMP, to verify that all of the communication uses the available IPC standard and that all calls are made in accordance with the standard set in the IPC module. All of the reviewers agree that the standard is followed and that no communication between the ARM and DSP is done outside it.
Comments: The available IPC standard might change in the future, and then it is important that our code can be easily changed to
incorporate the new standard. We have used only basic IPC commands to facilitate such a change.
Left to do: Nothing.

SRS10: All communication with modules being tested in the DSP should be performed via a testing module also located in the DSP.
Test: The test process should guarantee that this is the case. Since our communication module calls the test module located in the DSP directly, all of the test code must be located in the test module on the DSP side; there is no way to call a tested module directly from our communication module. After reviewing the test process, the reviewers agreed that if the test process is followed, all of the test code must be located in the DSP test module.
Comment: An evaluation of the test is done in the ARM test code, but this evaluation is not really a part of the test. The entire test is performed by the test code in the DSP test module, and an answer is then sent to the ARM. The test code in the ARM evaluates this answer to decide whether a test case passed or failed and, in the case of failure, what went wrong.
Left to do: Nothing.

SRS11: The testing system should support direct command communication, according to the standard in Interactive Debug, with the test module in the host using a terminal approach, i.e. it should be possible to override the automated test procedure.
Test: A terminal interface was implemented, accessible from the test screen of the program. It works just like a normal terminal window, where the user can read output and write input. An extra feature is the filter function, which filters the output so that only the lines relevant to the testing are shown. To test the terminal interface it was compared with a working terminal window (mslog). The terminal window was run in a session with different commands, and then the terminal interface of the test program was run with the same commands.
No discrepancies could be found between the two terminal interfaces.
Comment: After the testing period the developers suggested that the terminal interface should be able to run while a test session is in progress and not, as implemented, only between test sessions. This feature is not implemented in
our prototype; an implementation requires a rather large change in the system that would take a few days of work.
Left to do: Make the terminal interface run during a test session.

SRS12: The generated test report should include all of the important points in today's module test reports and system test reports.
Test: Copies of both today's module test reports and system test reports were studied carefully before the automated test report generation was implemented. A small study and a discussion with the responsible people at EMP made sure that all of the important points were included in the automated test report. After the implementation, the generated test reports were evaluated by both developers and the responsible people to make sure that the reports contained everything needed. Everybody was satisfied with the substance of the test reports, and they worked fine in the environment they were designed for.
Comments: After the evaluation made by the developers, we added the printouts from failed test cases to the test report. This was done to make it easier to remember what went wrong, and where, for the future correction of the modules.
Left to do: Nothing.

SRS13: For every test case there should be the possibility to have a variable number of parameters in the form of strings.
Test: The implemented interface supports a variable number of parameters in the form of strings. These are added by writing them in the interface before starting the test procedure. A small test program that could receive different types and different numbers of parameters was written to make sure that all of the parameters could be received. We tested sending strings, integers, doubles, chars and vectors of these. All of these could be received as long as they were sent as strings. The conversion is made in the test code, and this works as intended.
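The conversion of string parameters in the test code might look like the following sketch. The function names and the comma-separated vector format are assumptions made for illustration; only the idea of converting received strings to the expected types comes from the text:

```c
#include <stdlib.h>
#include <stddef.h>

/* Hedged sketch: every parameter arrives as a string, and the test
 * code converts it to the type it expects. Names are ours. */
long param_as_long(const char *s)
{
    return strtol(s, NULL, 10);      /* "42" -> 42 */
}

/* Parses a vector parameter such as "1,2,3" into `out` (capacity
 * `max`) and returns the number of values read. The comma separator
 * is an assumption for this sketch. */
size_t param_as_vector(const char *s, long *out, size_t max)
{
    size_t n = 0;
    char *end;
    while (n < max) {
        long v = strtol(s, &end, 10);
        if (end == s)
            break;                   /* no more numbers in the string */
        out[n++] = v;
        if (*end != ',')
            break;                   /* end of the vector */
        s = end + 1;                 /* skip the separator */
    }
    return n;
}
```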
When we increased the number of parameters a fault occurred: Interactive Debug only supports around twenty-five parameters, so even though the interface supports many more, only about twenty-five are actually sent to the test code. The test code behaved well during stress tests where many parameters were sent in a rapid stream; no faults occurred during this type of testing.
Comment: Naturally, the problem of only being able to send twenty-five parameters is not optimal. But as we have chosen
Interactive Debug for our communication, this is a limitation we cannot control. The limitation is carefully documented in the test process to avoid any mistakes. We set the limit to twenty parameters to avoid any problems when sending parameters in the form of strings to the platform.
Left to do: Nothing.

SRS14: Besides the normal parameters, text files should be supported.
Test: The same test code as in the test for SRS13 was used again. Text files formatted in the correct way were used as parameters, both on their own and together with other parameters. The interface formatted all of the text files correctly, and the test code could receive the vectors accordingly. The purpose of the text files was to be able to send large vectors, and from the interface's point of view this was achieved.
Comments: Since Interactive Debug only supports about twenty-five parameters, a problem with large vectors occurred. The developers who tested the program had no comments on this problem, as they did not use the feature. When the test procedure for DSP vector tests was implemented, the limitation in parameters had to be taken into account. This made the process somewhat more difficult to follow, but the DSP tests can be done even with vectors as large as several hundred thousand samples; to achieve this, the samples are sent in series of twenty. The solution we chose works well and is fully acceptable, even if the process is somewhat more difficult to follow.
Left to do: Nothing.

SRS15: Every testing module should run as a process, regardless of whether the module is located in the host or in the DSP.
Test: For test code in the ARM, a process is added on the ARM side. For test code in the DSP, a process is added on the DSP side as well as on the ARM side. This is clearly specified in the test process, which should guarantee that processes are added for the test code.
Comment: When testing DSP modules, two processes are added to the platform: one on the ARM side and one on the DSP side. That the test code is a process on the DSP side does not serve any function at this stage of the prototype, but if the prototype should be extended with new types of tests in
the DSP, it is probably necessary for the test code to be a process in order not to lock the other processes when chained calls are made.
Left to do: Nothing.

SRS16: When writing the test modules, the developer should have access to special commands for communicating with the user interface. These commands should enable the developers to decide when a test case is started and finished, when a test passed or failed and when information is printed.
Test: A special file, which we called Debug_Printout, has been written. It contains commands that are used in the test cases to define when a test case starts and ends, and whether the test case passed, failed or requires manual evaluation. There are also commands for starting and ending test documentation. Besides these features, the file contains a method for printouts to the interface located on the computer side. The class Debug_Printout is supplied in the package for the Module Tester tool and should be included when writing test cases. The test process clearly specifies how these commands should be used. No communication with the interface other than through the special commands is supported. The developers evaluated these commands and used them in their test code; they found them both easy to use and simple to understand. Since the print method supports the same formats as printf, no problems occurred during this testing.
Comment: The developers wanted a feature for interaction with the test procedure, which meant adding a command to Debug_Printout. This was done so that the test code can call a command that halts the test case while some kind of manual interaction is performed, for instance recording audio or video.
Left to do: Nothing.

SRS17: All communication from the platform to the computer should be in the form of strings.
Test: The interface is designed to only read strings from the chosen serial port.
Since our reader only reads strings, all of the communication to the computer has to be in the form of strings; the strings are then converted to different types in accordance with the given rules. On the platform side, every communication attempt to the computer goes through Interactive Debug, which only supports strings when sending to the serial port. These two restrictions guarantee that

only strings are used in the communication to the computer. Our interface was carefully reviewed to make sure that only strings were received. Equally thorough was the inspection of the code on the platform, to make sure that all of the communication really went through Interactive Debug in the form of strings. Both conditions were fulfilled in the reviews and no discrepancies were found.

Comments: Since Interactive Debug is used, there is no real way to fail this requirement.

Left to do: Nothing.

SRS18
The testing system should support manual interaction in the test evaluation, i.e. to decide if a test passed or failed, for instance by listening.

Test: The test process carefully describes how the manual interaction should be performed. If the instructions are followed the requirement is fulfilled. When a test case has manual evaluation there are three steps. First, a popup window appears in which the tester has to confirm that the test can start. Then the tester can be asked to perform some kind of manual interaction, and finally the tester has to decide if the test case passed or failed. The developers tested this feature, and when they followed the test process they got the manual interaction that they needed in their test cases.

Comments: The implementation fulfilled the requirement and even took it one step further with actual interaction. Even if a test case has manual interaction it is possible to mix manual and coded evaluations. This was designed to allow more flexible solutions.

Left to do: Nothing.

SRS19
The testing system should support automated report generation from one or many test cases.

Test: When the tester has run the chosen test cases, an automated test report can be generated by pressing a button. Only the test cases that were run occur in the test report; if only one test was run, only that test case is in the report.
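The selection rule just stated, that only test cases which were actually run appear in the report, can be sketched roughly as follows. The structure and names are ours; the real tool keeps this state inside LabView and generates the report through ActiveX.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical model of the SRS19 selection rule: only test cases
 * that were actually run end up in the generated report. */

struct test_case {
    int number;
    int was_run;   /* set by the test executor when the case is run */
    int passed;    /* verdict; only meaningful when was_run is set  */
};

/* Copies the run cases into the report array; returns how many. */
int build_report(const struct test_case *all, int n, struct test_case *report)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (all[i].was_run)
            report[count++] = all[i];
    return count;
}
```

A report built this way contains exactly one entry per executed test case, so a run of a single test produces a single-entry report, matching the behaviour described above.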
The automated test report generation was implemented using ActiveX. The report generation was carefully tested with many different parameters and test cases, and all of the tests produced correct reports according to the agreed standard. The testing was extensive, and a large variety of parameters was tried to make sure that ActiveX behaved equally well with all of the

parameters. After the testing the requirement must be considered fulfilled, as no faults could be found in the tests.

Comments: ActiveX is not totally reliable and has been known to create problems from time to time. No problems could be produced during the tests, but this is no guarantee that problems with ActiveX never arise; the tests do indicate that any problems should occur with very low frequency.

Left to do: Eventually another solution for the automated report generation might be preferable, or ActiveX might become more stable in future releases. Until then this solution should work with a very low fault rate.

SRS20
The system should support automated saving of the test documentation in a database.

Test: Since the automated report generation needed Office 2000 or better to work, we chose Microsoft Access for the database documentation. Every time a test case is run an automated database entry is made. The entry contains all of the important data from the test case, such as test number, module name, run, date, result and parameters. The logging is fully automated and not visible to the user. With our implementation it is very easy to change from Access to another database that supports SQL commands. The only way to make sure that the database logging works as it should is to open the database afterwards and check it manually. In the tests we made an extensive test suite that ran over several days. During the test suite we took manual notes on the tests, which were then checked against the database logging. No discrepancies were found and the requirement was considered fulfilled.

Comments: The purpose of the database logging is to save all test runs in a large database on a corporate server. The loggings can then be used to evaluate both results and time spent on testing different modules. No evaluation tool for the database is included in our prototype.
This type of tool could, however, be a big help in the future.

Left to do: Evaluation tools for the database loggings.

SRS21
After test initialisation, the user should only decide if a test report should be created, a database saving performed, and also if manual testing passed.

Test: Please read the tests from SRS7 before continuing. When a test suite has started the procedure is completely automated. The only thing that could stop the test suite is if a test case with manual test evaluation is run. Then a popup window

appears and the tester has to confirm that the test can start. When the test has been performed the tester has to decide if the test passed or failed. If an interaction is required, a popup window appears to alert the tester. The other thing that can happen during a test suite is that the tester presses the abort button, which interrupts the current test suite. When the test suite is interrupted or finishes, the final step of the interaction is to decide whether a test report should be generated. No other interaction is required. The developers tested this feature, and after long tests they agreed that the test procedure was automated except for the things mentioned above.

Comments: This was one of the main things that the product should fulfil, and therefore extensive testing was performed to make sure that this requirement was met. The product requires only a minimum of manual interaction and is therefore to be considered an automated testing tool.

Left to do: Nothing.

6.5 Product Evaluation

Our prototype has gone through extensive testing, both in isolation and in the real environment. The testing has shown that the prototype works very well, even though some work is left to do. Our task was to make a prototype for automated module testing, and as described above such a prototype was developed. It was designed for automated testing, and it should be easy to extend it with new functionality as well as to change the current functionality. Another important part of our work was to develop a clear process for module testing and make it easy for the developers to design their tests. The processes developed have been thoroughly checked, and they have been found easy to use and easy to understand by both the developers and the system-testing group at EMP.
This part of the work has been fulfilled, and there should be no problems for either the testing engineers or the developers who use the processes to understand or extend them. As for the automated testing tool, a working prototype has been developed. Of course there is some work left to do as the prototype is rolled out to more and more modules. The developers have tested the tool extensively and are satisfied with it, even if they would like to add some functionality and make it more reliable. Stability and error handling are not solved to our satisfaction in this prototype, but we have tried to facilitate this work by adding a section in the report that discusses what is left to do and how we would have implemented it given the appropriate time. Our task was, however, to develop a prototype that laid the foundation for automated module testing, and that was accomplished. In the evaluation the developers gave a lot of positive feedback and had no problem incorporating the tool with their tests; it greatly helped them in their module testing. Even though

some work is left to do, the feedback from the developers must be considered an acceptance of the prototype. The foundation for future automated module testing in a stable and secure environment has been laid, and even though the prototype is not as stable as could have been wished for, it still fulfils all of the requirements laid out by the system testing group at EMP. Both the developers and the testing engineers at EMP have approved the product as something to base future module testing on. The prototype can be used in its current state for module testing, and as it is incorporated into more and more modules it can evolve into a complete and fully operational automated testing tool. We believe that we have accomplished the task we set out to do. We have developed a fully functional prototype of an automated module testing tool, as well as drawn up guidelines and recommendations for how to develop and perform module tests in the different modules developed at EMP. The feedback and response that we got from both the testing engineers and the developers gave us the impression that the tool was the product they had expected and required. Of course some work is left to do, but as the basics are in place the rest should pose no problems and should not require all of the effort that we invested in developing the process and the prototype. Both we and the other engineers at EMP have had ideas on how to improve the product. Many of these ideas have already been incorporated into the final prototype, but some remain; this is, however, the ongoing work of a prototype. We feel that the product we developed fulfilled the requirements and in doing so laid the basis for automated module testing.
6.6 Left to do in the Product

As indicated in the evaluation, some things were left to do before the prototype could be considered a fully operational product. To facilitate this work we have listed most of the things we found during the evaluation below, and we have also tried to describe how we would have implemented each of them, with an estimate of the time required. The items below are not crucial to the prototype, which is why they were listed as left to do. Even so, it is recommendable that most of them are implemented to make the product easier and more convenient for the developers to use.

Description: Support of normal test cases in the DSP. This means that the same types of tests that are performed in the ARM should also be included in the DSP. The only type of test currently available in the DSP is the frame tests.

Solution: Part of this has already been implemented in the communication classes in the ARM and the DSP. To avoid deadlocks it is not possible to just call a function in the testing process as is done today; one should rather use signals to free methods and classes. When a signal is sent to the test process in the DSP test code, this class should have the same types of printout methods as are available in the ARM.

The printouts are sent to the ARM, where they are printed to the serial device. The printout methods in the DSP are supported and implemented, but the transition to signals instead of function calls remains to be done.

Time req.: A few days' work to make the transition to signals, and one or two days to test and verify that the methods work as supposed. The implementation could cause problems, as printouts in the DSP as well as the transfer of strings between the two processors are known sources of faults. This might mean that the time requirement could be as long as a week or more.

Description: Support of more than one test vector in the frame tests in the DSP. Some of the modules located in the DSP take more than one vector of parameters. To support this we have to be able to send these to the DSP as separate vectors.

Solution: This requires some rewriting of both the LabView and the C-code developed. In LabView there is a method in the class main_menu called DSP_frame_test, which contains the actual sending of the vectors to the platform. Given how the test vector is sent, it should be fairly easy to implement an equivalent method that sends more vectors. The C-code for communication also has to support this, which requires adding another method similar to PCM_FRAME_SENDER or PCM_FRAME_OUTVECT. The method PCM_FRAME_DSP_SENDER in the DSP_Comm class would also have to be changed on both the ARM and DSP side to support double vectors in the communication between the two processors. Most of the added code can be copied from the existing code, so it should be fairly easy to implement.

Time req.: The time required varies with the knowledge of the LabView code; with fair knowledge it should not be more than a few days.

Description: An ini-file for LabView that makes the graphical user interface look the same on all computers.
The ini-file contains information about fonts, colors and much more.

Solution: On the National Instruments homepage there is a description of how these ini-files work and how they should be implemented. It should be no problem to follow this information.

Time req.: One day's work ought to be enough to get this feature to work. The implementation must, however, be considered low priority, as it does not involve any functionality at all but only the looks of the program.

Description: The possibility to decide the number of loops in a loop run for each individual test case, instead of, as today, running all of the test cases the same number of loops.

Solution: This requires some rewriting of the LabView code. In the main_menu class there is a structural case hierarchy. In cases five and six there is a call to the execute_test method. This call is in a for-loop that runs as many times as the number of loops was set to. If the number-of-loops dialog was changed into a vector where the count for each test case is set individually, and this information was saved in an array, then the number of loops could be decided by the elements of the array instead of by a fixed number. This means that a test case only runs if the current run is equal to or less than the total number of runs for that individual test case.

Time req.: A day's work ought to be enough to implement this feature. If the knowledge of the LabView code is limited the time spent increases, as it is necessary to understand the structure of the LabView code when this change is implemented.

Description: Instead of the graphical user interface changing all the time, all of the information should be on one screen. This facilitates the understanding and makes it easier to follow and change parameters for the program.

Solution: This requires a lot of rewriting. The main class of the program has to be replaced by a new one, as this class contains all of the graphical details of the program. The case structure should be conserved but the graphics have to be changed. This is a great task, but fortunately all of the other classes work in their current state, so only the main class has to be rewritten.

Time req.: At least one week's work to make the new user interface work as supposed.
The work with the graphics should not be too difficult, but a lot of bugs might occur, and a lot of time also has to be spent on planning the new case structure. This change is low priority, as it will not affect the functionality but only the looks of the graphical user interface.

Description: Instead of, or together with, the test case number there should be a short description of the test case visible at all times, to make it easier to decide if a test case should be performed and whether there should be any parameters. This information is available in the doc; all that needs to be done is to find a smart way to incorporate this text into the graphical user interface without expanding it so much that the overview is damaged.

Solution: Add a small field containing the most important information from the doc to each test case in the graphical user interface. Make sure that this information is placed correctly to minimize the impact on the interface.

Time req.: A day's work should be enough. The main work is the planning of the new field. The incorporation should be very easy, even though it requires some rewriting of existing code, for instance in the initialization of the graphical user interface.

Description: The possibility to have an array of parameters as a parameter for the test case. The purpose of this is to run the test cases once, or as many times as the loop condition requires, with each of the parameters in the array.

Solution: This would require a large rewrite of the methods that format the parameters, as well as of the execute_test method. The execute_test method would have to have an internal loop that runs the specific test case as many times as required with different parameters. This is a large rewrite, as the execute_test method is complicated and not easy to make changes in.

Time req.: A week's work or more, as the core of the testing interface has to be rewritten. The complexity of the execute_test method suggests that the rewrite will cause side effects that have to be dealt with. These effects will require additional time before the testing can function again.

Description: The adding of a button for selecting and deselecting all of the test cases at once, rather than doing it one by one.

Solution: Add two buttons to the user interface that are hidden when the screen shows anything but the test cases. Then a check should be added in cases five and six of the structure to see if either of the buttons has been pressed; if so, check or uncheck all of the selection boxes.

Time req.: A few hours' work.
Most of the time has to be spent on the design, to make the impact of the two new buttons on the user interface as small as possible. When the correct placement has been chosen, the implementation of the functionality should be fairly easy, as most of the code can be copied from existing buttons such as the Run button.
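The first item in this list proposes replacing direct function calls into the DSP test process with signals, to avoid deadlocks. Under the assumption of a simple message queue between the communication code and the test process (the thesis does not name the actual OS primitives), the idea might be sketched as:

```c
#include <assert.h>
#include <string.h>

/* Sketch of signal-based test invocation on the DSP side.  Instead of
 * calling the test function directly from the communication code (which
 * can deadlock the process chain), a signal is posted to a queue that
 * the test process drains on its own.  Queue layout and names are our
 * assumptions, not the thesis's implementation. */

#define QUEUE_LEN 8

struct test_signal {
    int  test_number;
    char params[32];
};

static struct test_signal queue[QUEUE_LEN];
static int q_head, q_tail;

/* Called by the ARM<->DSP communication code: never runs the test itself. */
int post_test_signal(int test_number, const char *params)
{
    int next = (q_tail + 1) % QUEUE_LEN;
    if (next == q_head)
        return -1;                                  /* queue full */
    queue[q_tail].test_number = test_number;
    strncpy(queue[q_tail].params, params, sizeof queue[q_tail].params - 1);
    queue[q_tail].params[sizeof queue[q_tail].params - 1] = '\0';
    q_tail = next;
    return 0;
}

/* Run inside the DSP test process; returns 0 when a signal was consumed. */
int fetch_test_signal(struct test_signal *out)
{
    if (q_head == q_tail)
        return -1;                                  /* nothing pending */
    *out = queue[q_head];
    q_head = (q_head + 1) % QUEUE_LEN;
    return 0;
}
```

The point of the split is that the communication code returns immediately after posting, so no chain of cross-process calls can hold both sides at once; the test process decides when to pick up the work.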

7. Discussion and Conclusions

The purpose of this master thesis was to study theories and strategies for testing, with a focus on module testing. This was put into practice as we were given the task to improve the test process at Ericsson Mobile Platforms with focus on module testing, and to specify, design and implement a prototype of a module-testing tool.

First the background for testing was studied. In this study extensive research was done both on testing in general and on module testing. The research showed that module testing is the most neglected part of the testing procedure. A lot of theory and examples were available for system testing, but the material on module testing was very limited. Still, some data was available, and with the help of this data different methods for module testing were studied.

In a large-scale product it is vital that the work is divided into smaller modules. Each of these modules has to be tested individually. These tests should mainly answer two important questions: does the existing code work as it is supposed to, and does all of the required functionality exist in the module? The module tests should mostly be carried out by the developers themselves and not by the test group. This is because knowledge of the code is most important when finding and correcting faults, and for making sure that all of the code is tested. Another important issue is to make sure that all of the different modules in a project receive the same type and range of module testing. This ensures that all parts of the product have the same quality and thus greatly eases the work of putting the different modules together into a product. To make sure of this, it is recommendable that a process for the module testing is developed and that the testing itself is performed with the help of a tool. Finally, it is very important to document the results of the module tests.
This has more than one purpose: of course it is important to document everything for future use, but the documentation also puts some pressure on the developers, as they really have to focus on the tests when the tests are being documented. If the work with the module tests is done properly and thoroughly, a lot of time and money is saved in later testing phases, as many faults are avoided and less expensive rework is needed after system testing.

The first part of the task at Ericsson Mobile Platforms was to improve the process for module testing. This actually meant designing a new process for module tests, as most of the groups implementing modules used different processes for their module tests. The work on the new test process focused on the existing processes. First a study was done to decide which type of process would be the easiest to incorporate into today's processes. In this work many engineers at Ericsson were interviewed. Their input was very helpful, and it was decided to focus on smoothing the transition from the existing processes as the new process was developed. This decision would save time by allowing the reuse of the current test code with only minor changes.

As the work with the process progressed, the work on the prototype started. The first step was to decide which type of functionality would be needed in the prototype; engineers as well as the

system-testing group at Ericsson Mobile Platforms AB were interviewed. This work resulted in the design that the prototype is based on. Another consideration was that it was very important that new functionality could be added later in the implementation phase. To solve this we decided to design the user interface in a structured manner, so that new functionality could be added without major changes to the existing code. The work on the prototype evolved quickly, and it did not take long before a version that we could use with the test process was available. The early-stage system testing was greatly enhanced as both the process and the prototype could be tested simultaneously; it was easier to find mistakes and faults in both when they could be tested together. Even though the prototype could work with the process, a lot of functionality had not yet been implemented. During the implementation the different parts were constantly tested to make sure that they fulfilled the requirements set up. The testing was divided into many small parts and was very thorough. This type of testing had been found to be helpful and timesaving during the study on general testing: the time spent on low-level testing is often saved later in the project, as less time is spent on integration and high-level testing. This was found to be true in the development of our prototype, as very few faults were found during system testing. When the prototype and process were finished, the engineers at Ericsson tested them. This testing was performed both with instructions and in a freer way. Some minor faults were found, but the main impression was of a secure and stable product.
The developers had several ideas for new functionality for the prototype, and much of this was implemented with ease, as the structured design of the interface made adding new functions relatively easy. Since a lot of time had been spent on the design and low-level testing of the prototype, the time spent on these changes was greatly limited. This confirms that extensive and thorough low-level testing minimizes the time and effort spent on the product, as well as enhances its quality. The developers also had some ideas for improvements to the user interface. Most of these were added immediately, and the rest have been described to make them easier to incorporate into the current version of the prototype. The design of the prototype gave the opportunity to improve and enhance the user interface without extensive and time-consuming rewriting of existing code. When the testing phase ended, a process and a prototype that fulfilled the requirements set up during the implementation phase had been developed. The engineers as well as the system testing group at Ericsson Mobile Platforms were pleased with the result, which they felt had laid the foundation for more effective and thorough module testing at Ericsson Mobile Platforms. Our feeling is that we have accomplished the goals of this master thesis. A large-scale study on testing in general and module testing in particular was performed, giving us invaluable knowledge. This knowledge was put into

practice as we developed a fully functional prototype of an automated module testing tool, as well as drew up guidelines and recommendations for how to develop and perform module tests in the different modules developed at EMP. The feedback and response that we got from both the testing engineers and the developers gave us the impression that the tool was the product they had expected and required. Of course some work is left to do, but as the basics are in place the rest should pose no problems and should not require all of the effort that we invested in developing the process and the prototype. Both we and the other engineers at EMP have had ideas on how to improve the product. Many of these ideas have already been incorporated into the final prototype, but some remain; this is, however, the ongoing work of a prototype. We feel that the product we developed fulfilled the requirements and in doing so laid the basis for automated module testing. During our project we have gained important knowledge of module testing as well as of software development. The most important knowledge is, however, how these two parts mix in a project. Incorporating testing carefully in the different phases of software development, and learning and seeing how this is done at a real development company, is an invaluable experience.

8. References

(1) Software Testing and Continuous Quality Improvement, William E. Lewis, ISBN: , CRC Press LLC, Florida, USA, 2000
(2) Programvaruutveckling för stora system, Projekthandledning, Björn Regnell and Claes Wohlin, Institutionen för Telekommunikationssystem, LTH, Lund, Sweden, 1999
(3) Software Verification and Validation for Practitioners and Managers, Steven R. Rakitin, ISBN: , Artech House Inc, Norwood, USA, 2001
(4) The Complete Guide to Software Testing, Bill Hetzel, ISBN: , John Wiley & Sons Inc, USA, 1988
(5) Software Testing in the Real World: Improving the Process, Edward Kit, ISBN: , Addison Wesley Longman Limited, USA, 1995
(6) Testing Computer Software, Cem Kaner, Jack Falk and Hung Quoc Nguyen, ISBN: , John Wiley & Sons Inc, USA, 1999
(7) Software Engineering, 6th edition, Ian Sommerville, ISBN: X, Pearson Education Limited, USA, 2001
(8)
(9) presentation-part1.pdf
(10)
(11)
(12) Capability Maturity Model software development using Cleanroom software engineering principles-results of an industry project, R.S. Oshana and R.C. Linger, IEEE Comput. Soc, Systems Sciences, 1999
(13)

(14)
(15)
(16)
(17)
(18)
(19) Aggressive and enthusiastic software engineering, L.G. Baker, Magazine: Program Manager, July

Appendix A - Software Requirements Specification (SRS)

The software requirements specification contains all of the demands that were put on the product before the implementation. These demands should all be fulfilled in the implementation of the product, and that they have been fulfilled should be tested as the product develops. The requirements are divided into functional and non-functional demands to give a better overview.

Non-Functional Requirements

The non-functional requirements primarily handle demands that are not concerned with the specific functions delivered by the product. This means that the demands often deal with the whole system rather than some specific detail; when these demands are not met, the product as a whole has a fault. Another thing the non-functional requirements cover is demands on the development process rather than the product. This means that such demands are not crucial for the product itself but rather for rework or extension of the product.

SRS1 The user interface should run on Windows NT or Windows
SRS2 The interface should be self-instructive for the developers and implemented using LabView.
SRS3 The implemented test system should influence the tested module in the target system as little as possible.
SRS4 Interactive Debug should be used for communication between the computer and the platform.
SRS5 All testing modules should be implemented using C-code with support for Interactive Debug; the developers should do this.
SRS6 There should be a standard when writing the test modules that includes number of tests, test number, test specification, parameter specification and test code with test start and test finish.
SRS7 The testing procedure should be automated except for SRS21.
SRS8 The user interface should only communicate with a module located in the host, regardless of whether the module being tested is located in the host or in the DSP.
SRS9 All communication between host and DSP should be performed according to the current standard for IPC.
SRS10 All communication with modules being tested in the DSP should be performed via a testing module also located in the DSP.

SRS11 The testing system should support direct command communication, according to the standard in Interactive Debug, with the test module in the host using a terminal approach, i.e. it should be possible to override the automated test procedure.
SRS12 The generated test report should include all of the important points in today's module test reports and system test reports.

Functional Requirements

Functional demands primarily deal with some functionality or service that the product should provide. Often these demands are seen as user requirements and are therefore simplified to be more easily understood by the users. But this is a mistake: it is more important that the demands really specify what they should than that they are easy to understand. When demands are simplified they often lose their true meaning, which means that several new demands have to be stated to really cover the meaning of the original demand. To avoid rework later in the process it is very important that the functional demands are complete and consistent, and much time has been spent on this when developing these demands.

SRS13 For every test case there should be the possibility to have a variable number of parameters in the form of strings.
SRS14 Besides the normal parameters, text files should be supported.
SRS15 Every testing module should run as a process, regardless of whether the module is located in the host or in the DSP.
SRS16 When writing the test modules the developer should have access to special commands for communicating with the user interface. These commands should enable the developers to decide when a test case is started and finished, when a test passed or failed, and when information is printed.
SRS17 All communication from the platform to the computer should be in the form of strings.
SRS18 The testing system should support manual interaction in the test evaluation, i.e.
to decide if a test passed or failed for instance by listening. The testing system should support automated report generation from one or many test cases. The system should support automated saving of the test documentation in a database. After test initialisation, the user should only decide if a test report should be created, a database saving performed, and also if manual testing passed. 73
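SRS13 and SRS14 call for a variable number of string parameters per test case, with text files supported alongside the normal parameters. The following Python sketch shows one way such a parameter list could be expanded and formatted into a test command string; the `@file` prefix convention and the `TEST_00X` command layout are illustrative assumptions, not taken from the thesis.

```python
def expand_parameters(params, read_file=open):
    """Return the parameter list with '@file' entries replaced by file
    contents (an assumed convention for SRS14's text-file support)."""
    expanded = []
    for p in params:
        if p.startswith("@"):
            # Parameter refers to a text file: substitute its contents.
            with read_file(p[1:]) as f:
                expanded.append(f.read().strip())
        else:
            expanded.append(p)
    return expanded


def build_test_command(test_number, params):
    """Format a TEST_00X command string with a variable number of
    string parameters (assumed layout, per SRS13/SRS17)."""
    return "TEST_%03d %s" % (test_number, " ".join(params))
```

A fake `read_file` callable can stand in for real files when exercising the sketch, which keeps it testable without touching the filesystem.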

Appendix B Module Tester Classes

Main_Tester
Start class that initiates the process by setting up the serial port, reading the module names available for testing, setting up the hardware and software configuration, saving new configurations and finally reading the number of tests available for the chosen module. When all of this is done the class initiates Main_Menu. During the execution the class also handles the graphical display of the Terminal Reader.

Methods:
void Main_Tester(): Constructor.
void initcomport(string, int): Handles the initiation of the serial port; includes interaction with the user.
string[] ReadModuleNames(): Reads and returns the module names available for testing.
string GetTestData(): Reads and returns the configuration data from the configuration file.
void PutTestData(string): Writes the modified configuration data to the configuration file.
void ShowTerminalReader(): Displays the window for the Terminal Reader.
void InitMain_Menu(string, int): Sets up and initiates the graphical display of the Main_Menu.
int GetNbrOfTests(string): Reads and returns the number of tests available for the given module.

TestData_Input
Class that on demand first reads a configuration file and then returns all of the software and hardware configurations for the testing procedure. The user can then change any of the settings, and the configuration is saved.

Methods:
void TestData_Input(): Constructor.
void ReadConfigFile(): Reads the configuration file and returns the contents of this file.
string[] GetTestData(): Returns the test data from the configuration file.

void WriteConfigFile(string): Writes the supplied configuration data to the chosen configuration file.
void PutTestData(string): Sets the configuration file to the chosen configuration.

NbrOfTests_Reader
Class that sends a command to the platform and receives the number of tests for the chosen module.

Methods:
void NbrOfTests_Reader(Com_Reader, Com_Writer): Constructor.
int GetNbrOfTests(string): Sends a command to the platform and receives the number of tests available for the given module.

Com_Reader
Class that handles all reading on the serial port.

Methods:
void Com_Reader(Terminal_Reader, int, int): Constructor that initiates serial communication with port number and baud rate. It also sets up the continuous reading of the serial port by the Terminal Reader.
string GetNextInput(): Returns the next available string at the serial port.
void EmptySerialBuffer(): Empties the buffer for the serial port.
int Get_Bytes(): Returns the number of bytes available at the serial port.

Com_Writer
Class that handles all writing on the serial port.

Methods:
void Com_Writer(int, int): Constructor that initiates serial communication with port number and baud rate.
void WriteOutput(string): Writes the input string to the chosen serial port.

Main_Menu
The main class of the test procedure. This class contains most of the graphical display. The user can choose between the different test cases, see a terminal window and request a report generation. The scenario is that the user selects the tests to be run, looks at the documentation of the tests, fills in the parameters for the different tests and executes the tests. The execution proceeds through all of the tests and returns passed, failed or manual. The tester then looks at the log for the tests that failed and decides if a report is to be generated. A database logging is made for every test case that is run.

Methods:
void Main_Menu(Com_Reader, Com_Writer, int, string): Constructor that sets up the graphical interface with the number of tests and the chosen module name.
void Initialise(): Sets all values of the user interface to default.
cluster[] ExecuteTest(int, int, progressbar): Executes the test with the given number, collects the test parameters and returns a cluster with the result of the test.
string GetDocumentation(int): Reads and returns the documentation for the chosen test case.
void setnewtestcase(cluster): Sets the internal test cases to the given cluster.
void Show_Doc(string, string, string): Generates a popup window that presents the data given as module name, test case and documentation.
void Show_Printouts(string, string, string, string): Generates a popup window that presents the data given as module name, test case, the current run and the printouts from the current run.
string GetInputParameters(string, progressbar): Converts and formats the given input parameters to the correct form for the call to the module and returns the formatted input parameters.
void Renew_Params(): Saves the current parameters for the test cases and the state of each test case to a local file on the computer.
string[] Save_Params(): Fetches and returns the parameters for the previous test run from a file stored locally on the computer. It also fetches the state for each test case in the previous run.
string Sort_Printouts(string): Filters the input string so that only the printouts with the correct format for Module Tester are returned.
void DSP_Frame_Test(string, string, string, string, progressbar): Performs frame tests. The method takes two vectors, a test case string with the test number, a progress bar, and the parameters for the initialisation of the test case. First the method calls the init method for the test case, then it sends the samples to the host in series of twenty. It waits for the reply of the current PCM frame before sending a new one. When all the samples are sent the frame test is finished.

Terminal_Reader
Class that reads from the serial port and returns a string which is to be presented in a terminal window. The class can filter out non-important strings.

Methods:
void Terminal_Reader(): Constructor.
string TerminalRead(): Reads and returns the next string available at the serial port.
void TerminalWrite(string): Writes the given string to the serial port.
void EmptyReader(): Empties the buffer with all entries for the Terminal Reader.
string GetReaderValue(): Returns the strings currently held in the Terminal Reader (maximum 10000).
void PutData(string): Puts the given parameter first in the strings currently held by the Terminal Reader; if necessary the last string held by the Terminal Reader is deleted.

void SetCom(Com_Reader, Com_Writer): Sets the internal Com_Reader and the internal Com_Writer to the given parameters.
void ShowTerminal(): Displays the graphical interface for the Terminal Reader.
string FilterPrintouts(string): Filters the given parameter according to certain formatting rules so that only the output that directly has to do with the testing procedure is returned.

Report_Gen
Class that handles the generation of a report. All information is fed to the class and a Microsoft Word report is made.

Methods:
void Report_Gen(): Constructor.
void GenerateReport(cluster[], string[], string[]): Generates a report with the given input data, which is the information about each test case, the specific configuration data and the printouts from the test cases.

Database_Gen
Class that generates a database logging. All information is fed to the class and an Access entry is made.

Methods:
void Database_Gen(path): Constructor that sets up the database generation with the correct path to the database.
void GenerateDatabaseEntry(string[]): Generates a database entry with the given input data, which consists of configuration data as well as specific test case data for the chosen test case.
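The bounded behaviour described for Terminal_Reader's PutData and GetReaderValue (newest line first, oldest line dropped once the 10000-line cap is reached) can be sketched as a small buffer class. This is an illustrative model of that behaviour, not the tool's actual implementation.

```python
from collections import deque


class TerminalBuffer:
    """Sketch of Terminal_Reader's string store: newest line is kept
    first, and the oldest line is dropped once the cap is reached
    (10000 in the class description above)."""

    def __init__(self, max_lines=10000):
        # deque with maxlen discards from the opposite end automatically.
        self.lines = deque(maxlen=max_lines)

    def put_data(self, line):
        """Put the line first; the last line is deleted if necessary."""
        self.lines.appendleft(line)

    def get_reader_value(self):
        """Return the strings currently held, newest first."""
        return list(self.lines)

    def empty_reader(self):
        """Empty the buffer of all entries."""
        self.lines.clear()
```

Using a `deque` with `maxlen` keeps both the insert and the overflow delete O(1), which matters when a serial port streams continuously.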

Appendix C Class Diagram for the User-Interface

Figure C.1 Class Diagram for the User-Interface

Appendix D Detailed High Level Design

Appendix D1: Detailed High Level Design Module Tester

All signals that are sent out from or received by Module Tester are in the format of strings, and because of this they cannot be considered function calls in the traditional meaning. It should also be noticed that the signals sent from the processes in the phone to the PC are sent via the IDbg process. These signals could be sent directly to the PC with an ordinary printf, but with this solution the facilities of IDbg are utilised, e.g. that the printouts are guaranteed to be sent out from the phone.

All outgoing signals from Module Tester to the phone should be strings. Each signal is sent to the IDbg process in the phone. This signal is first handled by IDbg. If the signal is supposed to go further, to another process in the phone, IDbg handles that. IDbg behaves like a command prompt that receives strings and then executes the commands. A signal sent to IDbg could be, for example, cd (change directory), ls (list) or an order to start a test case in the current directory.

Module Tester decides if it is a host test or a DSP test by the number of parameters. If the number of parameters used in a test case in Module Tester exceeds 159, it is considered to be a DSP test. The main difference between host tests and DSP frame tests from Module Tester's point of view is that it invokes six additional functions when it is a DSP frame test. It starts with a method that sets up the algorithm that is used and it ends with a method that de-allocates resources used during the test. These two function calls will not be visible to the tester using Module Tester.

Signals to Module Tester:
Debug_Test_Start (string): Tells that the test case has started. The string must consist of the name and number of the test.
Debug_Test_End (string): Tells that the test case has finished. The string must consist of the name and number of the test.
Debug_Test_Passed (string): Tells that the test case passed. The string must consist of the name and number of the test.
Debug_Test_Failed (string): Tells that the test case failed. The string must consist of the name and number of the test.

Debug_Test_Manual (string): Tells that the test case has to be manually evaluated. The string must consist of the name and number of the test.
Debug_Print (string): Gives an optional printout.
Debug_Test_Doc_Start (string): Tells that the documentation starts. The string must consist of the name and number of the test.
Debug_Test_Doc_End (string): Tells that the documentation ends. The string must consist of the name and number of the test.

Signals from Module Tester:
ls (no param.): Lists the current directory.
cd (string): Changes the current directory to the one specified by the string parameter.
TEST_00X (optional): Starts test case X with optional parameters. When it is a DSP test the parameter will be a vector of samples.
TEST_00X_DOC (optional): Starts the documentation method for test case X, which will send the documentation to Module Tester.
TEST_00X_INIT (optional): Only used when testing modules in the DSP. It sets up the algorithm in the DSP with optional parameters.
TEST_00X_KILL (optional): Only used when testing modules in the DSP. It de-allocates resources used during the test in the DSP.
PCM_FRAME_SENDER (vector of samples): Sends samples to the host. When 160 samples are received at the host module, these samples are sent to DSP Comm Module DSP.
PCM_FRAME_OUTVECT (vector of samples): Sends samples to the host. When 160 samples are received at the host module, they are used for comparison with the processed frame that comes from the DSP.
PCM_FRAME_RESET (no param.): The sending of the current frame is reset; when it starts again it will start from the beginning.
GET_RECIEVED_FRAME (no param.): Checks if a frame has been received from the DSP.
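The Debug_Test_* signals let Module Tester derive a verdict for a test case purely from the incoming string stream. A minimal sketch of such a classifier follows; the exact line layout is an assumption, since the thesis only requires that each signal carries the test's name and number.

```python
def evaluate_test_log(lines):
    """Scan the printouts for one test case and return 'passed',
    'failed', 'manual', or None if no verdict signal was seen.
    The last verdict signal in the stream wins."""
    verdict = None
    for line in lines:
        if line.startswith("Debug_Test_Passed"):
            verdict = "passed"
        elif line.startswith("Debug_Test_Failed"):
            verdict = "failed"
        elif line.startswith("Debug_Test_Manual"):
            # Manual means the tester must evaluate the outcome himself.
            verdict = "manual"
    return verdict
```

Debug_Test_Start/Debug_Test_End and Debug_Print lines pass through the loop without effect, which mirrors the signal table above: only the verdict signals decide the outcome.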

Appendix D2: Detailed High Level Design Host Test Module

The Host Test Module sends signals to and receives signals from the Host Target Module. These signals depend fully on the methods being tested in the target module, i.e. they are different for every test case and are hence not discussed here.

Signals to Host Test Module:
TEST_00X() (optional): Starts test case X with optional parameters.
TEST_00X_DOC() (optional): Starts the documentation method for test case X, which will send the documentation to Module Tester.

Signals from Host Test Module:
Debug_Test_Start (string): Tells that the test case is started. The string must consist of the name and number of the test.
Debug_Test_End (string): Tells that the test case is finished. The string must consist of the name and number of the test.
Debug_Test_Passed (string): Tells that the test case passed. The string must consist of the name and number of the test.
Debug_Test_Failed (string): Tells that the test case failed. The string must consist of the name and number of the test.
Debug_Test_Manual (string): Tells that the test case has to be manually evaluated. The string must consist of the name and number of the test.
Debug_Print (string): Gives an optional printout.
Debug_Test_Doc_Start (string): Tells that the documentation starts. The string must consist of the name and number of the test.
Debug_Test_Doc_End (string): Tells that the documentation ends. The string must consist of the name and number of the test.

Appendix D3: Detailed High Level Design Host Target Module

This is the module that is tested. Signals are sent to and received from the Host Target Module. The signals sent to the host target are the function calls with parameters, and the signals sent from the target are the values returned from the invoked methods. These signals are specific to every test case and are hence not discussed here.

Signals to Host Target Module: No signals considered.

Signals from Host Target Module: No signals considered.

Appendix D4: Detailed High Level Design DSP Test Module Host

Every test case that is tested in the DSP has to be administrated in the host as well, since it is impossible to communicate with the DSP directly from the PC. This test module receives signals that are sent from Module Tester and forwards them to the DSP, through the Comm Module in the host to the Comm Module in the DSP.

Signals to DSP Test Module Host:
TEST_00X_INIT (optional): Set-up parameters to send to the target module.
TEST_00X (sample vector): A request to execute test case X.
TEST_00X_KILL (optional): Tear-down parameters to send to the target.
TEST_00X_DOC (optional): Returns the documentation for the test case.
PCM_FRAME_SENDER (vector of samples): Receives samples from Module Tester. When 160 samples are received at the host module, these samples are sent to DSP Comm Module DSP.
PCM_FRAME_OUTVECT (vector of samples): Receives samples from Module Tester. When 160 samples are received at the host module, they are used for comparison with the processed frame that comes from the DSP.
PCM_FRAME_RESET (no param.): The receiving of the current frame is reset; when it starts again it will start from the beginning.
GET_RECIEVED_FRAME (no param.): Checks if a frame has been received from the DSP.

Signals from DSP Test Module Host:
Debug_Test_Start (string): Tells that the test case is started. The string must consist of the name and number of the test.
Debug_Test_End (string): Tells that the test case is finished. The string must consist of the name and number of the test.

Debug_Test_Passed (string): Tells that the test case passed. The string must consist of the name and number of the test.
Debug_Test_Failed (string): Tells that the test case failed. The string must consist of the name and number of the test.
Debug_Print (string): Gives an optional printout.
Debug_Test_Doc_Start (string): Tells that the documentation starts. The string must consist of the name and number of the test.
Debug_Test_Doc_End (string): Tells that the documentation ends. The string must consist of the name and number of the test.
DSP_COMM_INIT (no params.): Requests to initiate the DSP communication for DSP frame testing.
PCM_FRAME_SETUP (int, int, int, int): Requests to set up the DSP communication.
PCM_FRAME_RESULT (no params.): Requests the error codes from the performed test.
PCM_FRAME_DSP_SENDER (in-vector of samples, reference vector of samples): Sends 160 samples from invect to DSP Comm Module DSP.
Get_Nbr_Of_Frames (no params.): Returns the difference between the number of frames received from the DSP and the number of frames sent to the DSP.
Sub_Nbr_Of_Frames (no params.): Subtracts one from the difference above.
GET_RECIEVED_DSP_FRAME (no params.): Returns whether the difference above is greater than zero or not.
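The PCM frame handling above works with 160-sample frames, while Module Tester transfers the samples in series of twenty and compares each processed frame with a reference vector. The chunking and the comparison can be sketched as follows; the `tolerance` parameter is an added assumption for illustration, the thesis compares the sample vectors directly.

```python
FRAME_SIZE = 160   # samples per PCM frame, per the signal tables above
SERIES = 20        # samples sent per message (the "series of twenty")


def chunk_samples(samples, size=SERIES):
    """Split a frame's samples into the series Module Tester sends."""
    return [samples[i:i + size] for i in range(0, len(samples), size)]


def compare_frames(processed, reference, tolerance=0):
    """Compare a processed frame against the reference vector sample
    by sample; return the indices where they differ beyond the
    tolerance (empty list means the frame matched)."""
    return [i for i, (p, r) in enumerate(zip(processed, reference))
            if abs(p - r) > tolerance]
```

With `FRAME_SIZE` samples and `SERIES` of twenty, each frame is carried by eight messages, which matches the wait-for-reply loop described for DSP_Frame_Test in Appendix B.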

Appendix D5: Detailed High Level Design DSP Comm Module Host

This module handles the communication between the host and the DSP. It sends signals to and receives signals from the DSP Comm Module DSP. When the test is performed in the DSP, the result of the execution is sent back to this module. For more information see Appendix D8: Detailed High Level Design DSP Comm Module DSP.

Figure D5.1 Two communication modules handle the communication between the host and the DSP.

Signals to DSP Comm Module Host:
DSP_COMM_INIT (no params.): Initiates the DSP communication for DSP frame testing.
PCM_FRAME_SETUP (int, int, char, char, int): Sets up the DSP communication. Calls the DSP.
PCM_FRAME_RESULT (no params.): Returns the error codes from the performed test.
PCM_FRAME_DSP_SENDER (in-vector of samples, reference vector of samples): Sends 160 samples from invect to the DSP. The samples in the two vectors are compared.
Get_Nbr_Of_Frames (no params.): Returns the difference between the number of frames received from the DSP and the number of frames sent to the DSP.
Sub_Nbr_Of_Frames (no params.): Subtracts one from the difference above.
GET_RECIEVED_DSP_FRAME (no params.): Returns whether the difference above is greater than zero or not.
PCM_FRAME_CallbackHandler (vector of samples): Receives the processed frame from the DSP.

DSP_COMM_ChannelReceiveHandler (printouts, type of the printouts): Handles incoming messages on the channel that is used to send back printouts from the DSP.

Signals from DSP Comm Module Host:
DSP_COMM_ReceiveHandler (commands): Sends messages to the DSP.
PCM_FRAME_ReceiveHandler (vector of samples): Sends a frame to the DSP.
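The Get_Nbr_Of_Frames, Sub_Nbr_Of_Frames and GET_RECIEVED_DSP_FRAME signals amount to simple bookkeeping of the frame difference between what has been received from and consumed by the host. A hedged sketch of that bookkeeping (not the module's implementation):

```python
class FrameCounter:
    """Illustrative model of the frame-difference bookkeeping in the
    signal tables above: the difference grows when a processed frame
    arrives from the DSP and shrinks when the host consumes one."""

    def __init__(self):
        self.diff = 0

    def frame_received(self):
        """A processed frame arrived from the DSP."""
        self.diff += 1

    def sub_nbr_of_frames(self):
        """Subtract one from the difference (Sub_Nbr_Of_Frames)."""
        self.diff -= 1

    def get_nbr_of_frames(self):
        """Return the current difference (Get_Nbr_Of_Frames)."""
        return self.diff

    def dsp_frame_available(self):
        """Whether the difference is greater than zero
        (GET_RECIEVED_DSP_FRAME)."""
        return self.diff > 0
```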

Appendix D6: Detailed High Level Design Interactive Debug

Before a signal is received by one of the processes in the host that uses it, it goes via the Interactive Debug (IDbg) process. The IDbg process converts the incoming signals from strings into function calls. Every process that is invoked in the phone via IDbg also has to be registered by IDbg. In this case that is only the process containing the test code when testing modules in the host (Host Test Module) and the module in the host that administrates the DSP tests (DSP Test Module Host).

The signals that go further, i.e. the ones that are not solely intended for IDbg, are not discussed in this section, even though the IDbg process also handles them. They are discussed in the paragraph of the module that receives them. This is also the case for the signals that go to the PC via IDbg: even though this process handles them, they are discussed in the paragraph of the module that sends them.

Signals to Interactive Debug:
ls (no param.): Lists the current directory.
cd (string): Changes the current directory to the one specified by the string parameter.

Signals from Interactive Debug:
TEST_XXX (string): Sends the name, XXX, of the modules available for testing back to Module Tester.
NbrOfTests (u2) = n (string): Sends the number, n, of tests available for testing back to Module Tester.
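IDbg is described above as a command prompt that receives strings such as ls and cd and executes them. A toy dispatcher along those lines can make the idea concrete; the directory names and the flat path model are invented for the example, and real IDbg behaviour may differ.

```python
class CommandPrompt:
    """Toy sketch of an IDbg-style string-to-command dispatcher.
    The directory tree and its entries are made-up examples."""

    def __init__(self):
        self.cwd = "/"
        # Hypothetical directory tree: module dirs holding test cases.
        self.tree = {
            "/": ["TEST_MMC", "TEST_AUDIO"],
            "/TEST_MMC": ["TEST_001", "TEST_002"],
        }

    def execute(self, line):
        """Parse an incoming string and run the matching command."""
        parts = line.split()
        if parts[0] == "ls":
            # List the current directory's entries.
            return " ".join(self.tree.get(self.cwd, []))
        if parts[0] == "cd":
            # Change current directory (absolute paths only here).
            self.cwd = parts[1]
            return ""
        return "unknown: " + parts[0]
```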

Appendix D7: Detailed High Level Design Debug Printout

The signals that are sent to the PC are, as mentioned earlier, only in the format of strings. In order to separate the strings sent out from the executing test (and designated for Module Tester) from all the other strings sent out from the phone, additional information is required. This additional information has to be merged with the string itself before it is sent to the PC. That is the task of Debug Printout. This does not affect the logical path of the signal.

Signals to Debug Printout:
Debug_Test_Start() (string): Tells that the test case is started. The string must consist of the name and number of the test.
Debug_Test_End() (string): Tells that the test case is finished. The string must consist of the name and number of the test.
Debug_Test_Passed() (string): Tells that the test case passed. The string must consist of the name and number of the test.
Debug_Test_Failed() (string): Tells that the test case failed. The string must consist of the name and number of the test.
Debug_Test_Manual() (string): Tells that the test case has to be manually evaluated. The string must consist of the name and number of the test.
Debug_Print() (string): Gives an optional printout.
Debug_Test_Doc_Start() (string): Tells that the documentation starts. The string must consist of the name and number of the test.
Debug_Test_Doc_End() (string): Tells that the documentation ends. The string must consist of the name and number of the test.

Signals from Debug Printout:
Request_IDbg_Printf() (string): Sends a request to IDbg to send the printout to Module Tester.
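Debug Printout's job of merging additional information with each string, so that Module Tester can later separate test output from the phone's other printouts, can be sketched with a marker prefix. The "MT>" marker is an assumption for illustration only; the thesis does not specify the actual format of the merged information.

```python
MARKER = "MT>"  # assumed tag merged with each test printout


def debug_print(text):
    """Merge the Module Tester marker with a test printout before it
    leaves the phone (the Debug Printout step)."""
    return MARKER + text


def filter_printouts(lines):
    """On the PC side: keep only the lines carrying the marker, with
    the marker stripped, discarding all other phone output."""
    return [line[len(MARKER):] for line in lines
            if line.startswith(MARKER)]
```

This is essentially what Sort_Printouts/FilterPrintouts in the Module Tester classes do with "the correct format": tagged lines survive the filter, everything else is dropped.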

Appendix D8: Detailed High Level Design DSP Comm Module DSP

This module handles the communication between the host and the DSP. The frame used by the algorithm in the DSP Target Module is received by this module from the DSP Comm Module Host. When the algorithm in the DSP Target Module has used and modified the frame, it is sent back to DSP Comm Module Host. For more information see Appendix D5: Detailed High Level Design DSP Comm Module Host.

Signals to DSP Comm Module DSP:
DSP_COMM_ReceiveHandler (command): Receives incoming commands from the host.
PCM_FRAME_ReceiveHandler (vector of samples): Receives incoming frames from the host.
Debug_Print (print type, string): Sends a printout to the host.

Signals from DSP Comm Module DSP:
PCM_FRAME_CallbackHandler (vector of samples): Sends the processed frame to the host.
DSP_COMM_ChannelReceiveHandler (printouts, type of the printouts): Sends printouts to the host.

Appendix D9: Detailed High Level Design DSP Test Module DSP

The DSP Test Module DSP sends signals to and receives signals from the DSP Target Module. These signals depend fully on the methods being tested in the target module, i.e. they are different for every test case and are hence not discussed here. The module also receives signals from and sends signals to the DSP Comm Module DSP.

Signals to DSP Test Module DSP:
TEST_INIT (optional): Sets up the test in the target via an initialisation method.
TEST_CASE (sample vector): Executes the test by invoking the method to test in the target.
TEST_KILL (optional): Tears down the function that has been tested, i.e. de-allocates resources that won't be used after the test is finished.

Signals from DSP Test Module DSP:
Debug_Print (print type, string): Sends a string to the host. Works like a printf. It also has a parameter that decides which type of printout should be used in the ARM.

Appendix D10: Detailed High Level Design DSP Target Module

This is the module that is tested in the DSP. Signals are sent to and received from the DSP Target Module. The signals sent to the target are function calls with a frame of samples as parameter, and the signals sent from the target are the frame after the algorithm has processed and changed it. These signals are specific to every test case and are hence not discussed here.

Signals to DSP Target Module: No signals considered.

Signals from DSP Target Module: No signals considered.

Appendix D11: Sequence Diagrams Phone Perspective

Figure D11.1 Retrieving the number of test cases available in the multimedia control module.

There is a special variable among the test cases in every test module directory. This variable gives the number of test cases in that directory. The developers have to modify it when they add one or more test cases.

Figure D11.2 The initialisation method is invoked. In this case the test does not need any parameters.

Figure D11.3 Executing test case 2. The test passes.

Every test case starts with a string that notifies Module Tester. After that is done the test case can be executed. When the test case is considered to have passed or failed, it notifies Module Tester once again, and finally it notifies Module Tester that the test case has ended.

Figure D11.4 Executing test case 3. Manual evaluation is requested and the outcome of the test is passed.

Some test cases require manual evaluation. Before such a test starts it is suitable to give the tester a notice, so that he can pay attention to the outcome of the test, since it is the tester who decides manually whether the test passed or not.

Figure D11.5 Retrieving the documentation for test case 2 in the multimedia control module.

Every test case should have documentation included in the test module code. That documentation can be retrieved by sending a signal from Module Tester to the phone. The documentation is used for different purposes, e.g. when Module Tester examines whether the current test case is manually evaluated. It is also used when the tester wants to see what parameters are used in the test case.

Appendix D12: Detailed High Level Design Main_Tester

The Main_Tester class is the one that initiates the other classes and the hardware. This includes setting up the serial port, reading the module names available for testing and setting up the configuration for hardware and software. Finally the class starts and initiates the Main_Menu class. The Main_Tester class shall also handle the graphical user interface of the Terminal_Reader during execution. Terminal_Reader is used in Module Tester as a regular terminal window.

Signals to Main_Tester: No signals.

Signals from Main_Tester:
GetTestData (no param.): Requests the data in the configuration file.
PutTestData (string): Requests that the modified configuration data be written to the configuration file.
ShowTerminal (no param.): Requests that the module tester terminal window be shown.
GetNbrOfTests (string): Requests the number of tests available for the given module.

Appendix D13: Detailed High Level Design Main_Menu

This class handles the execution of the test cases. It shall also contain the main parts of the graphical interface. The user is able to choose between the available test cases for the chosen test module. There is an option to see a regular terminal window with printouts, and the user also has the possibility to generate an automatic report containing information about the latest executed test cases.

Execution of a test is done in steps. First the tests to be executed are selected by the user. If the tester has no knowledge about a test, e.g. what parameters to use, he can consult the documentation, which is easily reached with a button push. Then the right parameters are filled in, and finally the test is started. Each test case shall return passed, failed or manual. Manual means that the tester himself has to evaluate the test and mark whether it passed or not. After the test cases have executed, the tester decides if he wants to generate the automatic report. Even if he decides not to make the report, the results are saved in a database for later use.

Signals to Main_Menu:
Initialise (no param.): Sets all values of the user interface to default.
SetNewTestCase (cluster): Requests to set the internal test cases to the given parameter.
Show_Doc (string, string, string): Requests a popup window that presents the data given as module name, test case and documentation.
Show_Printouts (string, string, string, string): Requests a popup window that presents the data given as module name, test case, current run and printouts from the current run.

Signals from Main_Menu:
GetNextInput (no param.): Requests to read the next available string on the serial port.
EmptySerialBuffer (no param.): Requests to empty the buffer of the serial port.
WriteOutput (string): Requests to write the parameter to the serial port.

Appendix D14: Detailed High Level Design Com_Reader

This class handles all the reading on the serial port. It gathers all the information available on the port and, when requested, sends the first line in the queue to the invoking class. The class not only returns the information from the serial port to the one requesting it, but also always passes it to Terminal_Reader.

Signals to Com_Reader:
GetNextInput (no param.): Requests to read the next available string on the serial port.
EmptySerialBuffer (no param.): Requests to empty the buffer of the serial port.

Signals from Com_Reader:
PutData (string): Sends a request that the given parameter be put as the first line in the Terminal_Reader. The last line will be deleted if necessary.

Appendix D15: Detailed High Level Design Com_Writer

This class handles all the writing on the serial port. The parameter sent with the signal is written on the serial port.

Signals to Com_Writer:
WriteOutput (string): Requests to write the parameter to the serial port chosen in advance.

Signals from Com_Writer: No signals.

Appendix D16: Detailed High Level Design Terminal_Reader

This class handles all the strings received on the serial port. Every string received on the port is sent to Terminal_Reader before it is sent to the one requesting it.

Signals to Terminal_Reader:
ShowTerminal (no param.): Requests that the module tester terminal window be shown.
PutData (string): The given parameter is put as the first line in the Terminal_Reader. The last line will be deleted if necessary.
EmptyReader (no param.): Empties the buffer with all entries for the Terminal_Reader.
TerminalWrite (string): Writes the given string to the serial port.
GetReaderValue (no param.): Returns the strings currently held in the Terminal_Reader.

Signals from Terminal_Reader:
GetNextInput (no param.): Requests to read the next available string on the serial port.
WriteOutput (string): Requests to write the parameter to the serial port chosen in advance.

Appendix D17: Detailed High Level Design NbrOfTests_Reader

This class sends a request to the phone to get the number of tests for the chosen module. The signal to the phone goes via the Com_Reader and the Com_Writer.

Signals to NbrOfTests_Reader:

Signal        | Signal type | Task
GetNbrOfTests | string      | Requests the number of tests available for the given module.

Signals from NbrOfTests_Reader:

Signal            | Signal type | Task
EmptySerialBuffer | No param.   | Requests to empty the buffer of the serial port.
GetNextInput      | No param.   | Requests to read the next available string on the serial port.

Appendix D18: Detailed High Level Design TestData_Input

This class handles the configuration file for the tests. It is possible both to read from the configuration file and to write to it. The data in the file is used, for example, for the automatic report generation and the database insertions. The file contains, for example, information about the tester and the versions of both hardware and software.

Signals to TestData_Input:

Signal      | Signal type | Task
GetTestData | No param.   | Requests the data in the configuration file.
PutTestData | string      | Requests that the modified configuration data be written to the configuration file.

Signals from TestData_Input:

No signals.

Appendix D19: Detailed High Level Design Report_Gen

This class handles the automatic report generation. All the information needed in the report has to be sent to the class. The report is made as a Microsoft Word document. The only signals that are sent out from the class are to Microsoft Word using ActiveX. These signals are not discussed here.

Signals to Report_Gen:

Signal         | Signal type         | Task
GenerateReport | cluster[], string[] | Generates a report with the given input data, which is the information about each test case and the specific configuration data.

Signals from Report_Gen:

No signals.

Appendix D20: Detailed High Level Design Database_Gen

This class handles the automatic database savings. All the information needed in the database entry has to be sent to the class. The only signals that are sent out from the class are to Microsoft Access using ActiveX. These signals are not discussed here.

Signals to Database_Gen:

Signal                | Signal type | Task
GenerateDatabaseEntry | string[]    | Generates a database entry with the given input data, which consists of the configuration data as well as specific test case data for the chosen test case.

Signals from Database_Gen:

No signals.

Appendix D21: Sequence Diagrams PC Perspective

Figure D21.1 Generating a test report after finished test execution. A report is generated with information about the module name, software version, hardware version, name of the tester and the test cases that have been executed.

Figure D21.2 Executing test case 2. The test passes and a database entry is made. Test case 2 is executed. The test passes. A database entry is made for the execution of the test case after the test case is finished.

Figure D21.3 Executing test case 3. Manual evaluation is requested and the outcome of the test is passed. A database entry is made. Manual evaluation of a test case is performed. Instead of the test case being judged passed or failed automatically, the tester is requested to decide during testing whether it passed or not. A database entry is made for the execution of the test case after the test case is finished.

Figure D21.4 Retrieving the documentation for test case 3 in the multimedia control module. The test documentation is retrieved from the phone.

Appendix E - Test Process HOST

1 Introduction
  1.1 Terminology
2 System Architecture Outline
3 Test Process
  Step 1: Register the module as a process
  Step 2: Register the module with Interactive Debug
  Step 3: Naming rules and the number of test cases variable
  Step 4: Define start and stop of the test
  Step 5: Decide if the test passed or failed
  Step 6: Interaction during test execution
  Step 7: Additional debug messages
  Step 8: Documentation
4 Short Instructions for Achieving Module Tests in Host Modules
Appendix E:1 Register the Module as a Process
Appendix E:2 Init_Xx_Debug, Handle_Xx_DebugSignal
Appendix E:3 Handle_Xx_ResponseSignal

1 Introduction

The purpose of this test process is to prepare the test code for use with the application Module Tester (see the manual and other documents for MT) for tests in the host. With a fully completed test process the tester has the possibility to make use of all the facilities in Module Tester, e.g. repeating tests an optional number of times and creating an automatically generated test report in Microsoft Word.

The idea is to test each module in the target isolated from the rest of the modules. This offers an environment that is free from distracting interference from other modules. Hence the testing is focused on only one module at a time and the functionality of that module. It is, however, possible to write test code and perform the test process for several modules, but testing is only done for one module at a time.

There are two different kinds of tests allowed: the so-called return value tests and the vector comparison tests. A return value test can only be performed in the host and a vector test can only be performed in the DSP.

In a return value test the target code is invoked from the test code. Usually a value is returned from the target code to the test code and that value should fulfil a condition. If the value fulfils the condition the test is considered to have passed, otherwise not. It should be defined with a special notification in the test code where the return value test starts and ends. It should also be defined in the test code where the test is considered to have passed or failed. It is also possible to perform tests with manual evaluation. This means that the tester decides, when the test is finished, if it passed or not. Printouts can be used in the test code to give additional information.

Vector comparison tests are just slightly different. A vector containing, for example, samples is sent to a function in the target module. The vector is used in an algorithm and the answer is sent back and compared with another vector, which is predefined. For further information see Test Process DSP.

Note that the code in the appendices is not formatted to fit in this document, but rather to fit in Visual C++ or an equivalent code formatting tool.

1.1 Terminology

Definition/Abbreviation | Explanation
ARM                     | Host processor
DSP                     | Digital Signal Processor
IDbg                    | Interactive Debug
LabView                 | Graphical programming language
Module Tester           | The LabView application at the PC side
Target module           | Module containing the code to be tested
Test module             | Module containing the test code

2 System Architecture Outline

It is possible to control the modules in the platform from a program called Interactive Debug. To be able to do this, certain code has to be included in the module. After that, the registered methods can be invoked from IDbg, which in its turn communicates via the serial port with a PC. As a result, it is possible to invoke methods in the phone from a regular computer, using just a terminal application. The same terminal window also receives the printouts executed in the phone.

The code needed for the communication is not included directly in the code being tested. Instead a new module is introduced which contains all the test cases and the communication with IDbg. This new module invokes the module being tested.

Figure E.1 System architecture host tests

Choose an abbreviation for the name of the test module that is used throughout this document and in the test code. Everywhere xx is used in this document and in the appendices it should be replaced with the previously chosen abbreviation. It is important to be case sensitive when changing them. For example, when the abbreviation nr (as in noise reduction) is chosen, xx must be replaced with nr and XX must be replaced with NR. Everywhere xxx is mentioned it should be replaced with the name of the module, e.g. MMCTRL when multimedia control is tested.

Note that the code in the appendices is not formatted to fit in this document, but rather to fit in Visual C++ or an equivalent code formatting tool.

3 Test Process

Note that a concise version of the test process can be found in 4 Short Instructions for Achieving Module Tests in Host Modules.

Step 1: Register the module as a process

The module containing the test code has to be registered as a process. This has to be done in order to make it possible to register the code with Interactive Debug. Register the module as a process by following the steps below.

(I) Add the following lines to the beginning of the header file of the test code

#include "r_os.h"
#include "t_basicdefinitions.h"

(II) Add this line to the beginning of the c file of the test code

#include "c_system.h"

(III) Add the following line to the file OSEMAIN.CON

PRI_PROC ( Printout_Process, Printout_Process, 500, 27, DEFAULT, 0, NULL )

This file should already be located on the user's local hard disk drive. If that is not the case, find the file in cme2 and place it in a proper location on the hard disk drive. Then add the following line to DescrExtra.cfg under include files

c:\ folder \osemain.con

DescrExtra.cfg is also found in cme2.

(IV) Add this line after the one in (III)

PRI_PROC ( Xxx_Process, Xxx_Process, 500, 27, DEFAULT, 0, NULL )

(V) Insert the code located in Appendix E:1, preferably at the end of the test code.

Step 2: Register the module with Interactive Debug

The module containing the test code has to be registered with Interactive Debug and it must be able to handle incoming debug and response signals. The code also has to be prepared for setting up Interactive Debug tables. This is done with macros.

Register the module with Interactive Debug by following the steps below.

(I) Add the following lines to the beginning of the header file of the test code

#include "r_idbg.h"
#include "r_debug.h"

Add the following line to the beginning of the c file of the test code

#include "u_idbg.h"

(II) The methods Init_Xx_Debug and Handle_Xx_DebugSignal have to be inserted somewhere in the module code, preferably just before the OS_PROCESS method. The code is located in Appendix E:2.

(III) The method Handle_Xx_ResponseSignal has to be inserted somewhere in the test code, preferably somewhere before the OS_PROCESS method. Instead of using this method the response signals can be handled in the OS_PROCESS. In this example they are not; as a consequence the switch structure contains only the default case, which is executed only when the signal is unknown. An example of how the code could be written is located in Appendix E:3.

(IV) Add the following line to the beginning of the OS_PROCESS method in the test code

Init_Xx_Debug ();

After that the module code should look like this

OS_PROCESS ( Xxx_Process )
{
    union SIGNAL *RecPrimitive_p = NIL;

    Init_Xx_Debug ();
    Debug_Print ( "\nxxx_process started\n" );
}

(V) Add the following lines to the OS_PROCESS method in the module, and don't forget to insert the curly brackets indicating their end (not shown below). Note that the response signals are handled in the method Handle_Xx_ResponseSignal and not in the OS_PROCESS.

// Test if debug signal
if ( !Handle_Xx_DebugSignal( RecPrimitive_p ) )
{
    // Test if response signal, i.e. for OPA
    if ( !Handle_Xx_ResponseSignal( RecPrimitive_p ) )
    {

After this is done, the OS_PROCESS in the module code should look something like this

RecPrimitive_p = RECEIVE(SIGSEL);
if (RecPrimitive_p != NIL)
{
    // Test if debug signal
    if ( !Handle_Xx_DebugSignal ( RecPrimitive_p ) )
    {
        // Test if response signal, i.e. for OPA
        if ( !Handle_Xx_ResponseSignal ( RecPrimitive_p ) )
        {
            switch (RecPrimitive_p->Primitive)
            {
                default:
                {

(VI) The following line should be added at the beginning of the code in the header file of the test code. It creates a main directory for the process in the Interactive Debug table structure.

IDBG_TBL_EXTERN (Xx_DebugTable);

Step 3: Naming rules and the number of test cases variable

The test cases in the module have to follow certain naming rules. An additional variable indicating the number of test cases is required. Both the test cases and the variable have to be registered in the Interactive Debug table. Additional rules have to be followed when using input parameters in a test case, see (IV) below.

(I) The test cases have to be of the form test001, test002 and so on. A test case declaration could, as an example, look like this

static void test_001 ( char *cmd_buf, int *arg_index, int args_found )

Note that the parameters are discussed in (IV) below.

(II) The number of test cases in the module has to be defined in a static unsigned 16 bit variable called nbroftests. With five test cases it would look like this

static uint16 nbroftests = 5;

(III) The test cases and the variable are registered in the Interactive Debug table with macros. Below is an example where the variable and the two test cases test001 and test002 are registered.

Example 1: Register test001 and test002 with Interactive Debug macros.

IDBG_TBL_START( Xx_DebugTable )
IDBG_TBL_VAR_UDEC( 0, nbroftests, "nbroftests" )
IDBG_TBL_CMD( test001, "test001" )
IDBG_TBL_CMD( test002, "test002" )
IDBG_TBL_END

(IV) The input parameters used in a test case are treated in a special way. Regardless of whether a test case has no parameters or many of them, the same declaration is used. The declaration involves the three following variables

char *cmd_buf    Contains the parameters to use. Used together with arg_index.
int *arg_index   Index of the parameter to use. Used together with cmd_buf.
int args_found   Number of parameters.

It is up to the tester to handle the incoming parameters in a correct and proper way. Knowledge about the number of parameters, their type, and the order in which they arrive is required.

NB: It is important that no more than 20 parameters are sent to a test case. If more than that are used, none of them are received in the test code.

Example 2: Handle incoming parameters to a test case. Below follows an example of a test case method with two parameters, which are of the type string. The values are stored in the char matrix tmp for later use.

static void test_001 ( char *cmd_buf, int *arg_index, int args_found )
{
    // a maximum of 30 characters are allowed in the two strings
    char tmp[2][30];

    // exactly two parameters are allowed
    if (args_found == 2)
    {
        uint8 k;
        for (k = 0; k < args_found; k++)
        {
            uint8 i = 0;

            // as long as the parameter is not empty
            while (*(cmd_buf + arg_index[k] + i) != 0x00)
            {
                // stores character by character
                tmp[k][i] = *(cmd_buf + arg_index[k] + i);
                i++;
            }
            // end of string character
            tmp[k][i] = 0x00;
        }
    }
}

It is of course possible to handle all types of parameters, as long as the type is known. Arrays are handled a little differently. Before the first element in the array there is an integer indicating the number of elements in the array. Every element in the array is treated as an independent parameter.

Example 3: Handle incoming arrays to the test case. An array with five integers is to be used as a parameter in a method. It is handled in the module as six incoming parameters. The first parameter is the integer indicating the number of elements in the array, i.e. five in this case. The last five parameters are the integers from the array.

static void test_001 ( char *cmd_buf, int *arg_index, int args_found )
{
    int i[5];
    uint8 k;

    for (k = 1; k <= *(cmd_buf + arg_index[0]); k++)
    {
        i[k-1] = *(cmd_buf + arg_index[k]);
    }
}

Step 4: Define start and stop of the test

Each test case has to define where the test code starts and ends. This is done with the Debug_Test_Start and the Debug_Test_End methods. First this line has to be included at the beginning of the test code

#include "debug_printout.h"

The Debug_Test_Start method call should be put where the test case starts. It should have the name and the number of the test case as a string parameter. This line is mandatory and all test cases have to start with it.

Debug_Test_Start ( "TestName(Test_Nr)" );

The Debug_Test_End method call should be put where the test case ends. It should, like the start method, have the name and the number of the test case as a string parameter. This line is also mandatory and all test cases have to end with it.

Debug_Test_End ( "TestName(Test_Nr)" );

See Example 4: How to write simple test code below.

Step 5: Decide if the test passed or failed

It has to be defined in every test where it is considered to have passed or failed. This is done with the Debug_Test_Passed, Debug_Test_Failed or Debug_Test_Manual methods.

The Debug_Test_Passed method call should be put where the test case is considered to have passed. It should have the name and the number of the test case as a string parameter. It always has to be followed by the Debug_Test_End method discussed above in Step 4.

Debug_Test_Passed ( "TestName(Test_Nr)" );

The Debug_Test_Failed method call should be put where the test case is considered to have failed. It should have the name and the number of the test case as a string parameter. It always has to be followed by the Debug_Test_End method discussed above in Step 4.

Debug_Test_Failed ( "TestName(Test_Nr)" );

It is not always possible to decide automatically, without any interaction with the tester who is executing it, whether a test case has passed or not. Then it is suitable to use the Debug_Test_Manual method. This method notifies the tester that the test should be evaluated manually, i.e. the tester should press a button in Module Tester and in that way indicate whether the test passed or failed. An example of when this kind of test evaluation is suitable is when the tester should evaluate a test tone by listening. The method should have the name and the number of the test case as a string parameter. It also always has to be followed by the Debug_Test_End method discussed above in Step 4.

Debug_Test_Manual ( "TestName(Test_Nr)" );

It is possible to have several Debug_Test_Passed, Debug_Test_Failed and Debug_Test_Manual methods in one test case, since there could be several different ways for the test to pass or fail. Perhaps the Debug_Test_Failed method is even more common than the Debug_Test_Passed method.

Step 6: Interaction during test execution

In some test cases the tester is supposed to interact in some way. An example of this is when the tester needs to send an SMS or call the phone during the test. A special line can be added in the code to draw the tester's attention to the fact that he or she should do something special, e.g. place a voice call. This is done as in the following line

Debug_Test_Interact ( "Description of why the delay is needed." );

The line after should be a delay of an appropriate length. This is done as in the following line, where N is the delay in milliseconds.

Delay(N);

Step 7: Additional debug messages

It is possible to send debug messages to Module Tester from the test code. This is done with the Debug_Print method. This method can be put anywhere in the code where a debug message is needed. It works in the same way as printf and can thus have both strings and other variables as parameters. The debug messages have to be put after the Debug_Test_Start method and before the Debug_Test_End method. It looks as follows

Debug_Print ( "Optional test message" );

Important: strings should always be kept under 200 bytes. The number of characters 200 bytes can represent differs from implementation to implementation. Printouts that are longer than 200 bytes cannot be guaranteed to be consecutive. This means that after every multiple of 200 bytes another printout that is sent to IDbg from another external process can interfere with the printouts that come from the process that is currently being tested.

Example 4: How to write simple test code

static void test_001 ( char *cmd_buf, int *arg_index, int args_found )
{
    Debug_Test_Start ( "ComfortToneTest(Test_001)" );
    Debug_Print ( "Generating comfort tone" );

    if (cond) // condition fulfilled, i.e. the test passed
    {
        Debug_Test_Passed ( "ComfortToneTest(Test_001)" );
        Debug_Test_End ( "ComfortToneTest(Test_001)" );
    }
    else
    {
        Debug_Test_Failed ( "ComfortToneTest(Test_001)" );
        Debug_Test_End ( "ComfortToneTest(Test_001)" );
    }
}

Step 8: Documentation

When executing Module Tester at the PC side it is possible to receive information about the test cases. In order for that to work, a documentation method has to be implemented in the test code. The documentation method is also used when a test case is declared to use manual evaluation; somewhere in the documentation a special string should then be put. This is discussed later in this step.

The Debug_Test_Doc_Start method call should be put where the test documentation starts. It should have the name and the number of the test case as a string parameter. This line is mandatory for the documentation method.

Debug_Test_Doc_Start ( "TestName(Test_Nr)" );

The Debug_Test_Doc_End method call should be put where the test documentation ends. It should have the name and the number of the test case as a string parameter. This line is mandatory for the documentation method.

Debug_Test_Doc_End ( "TestName(Test_Nr)" );

The actual documentation is given by Debug_Print methods that are placed between the Debug_Test_Doc_Start and the Debug_Test_Doc_End methods. See Step 7 for further information about Debug_Print.

Debug_Print ( "Optional debug message" );

Example 5: How to implement the documentation method

static void test_001_doc ( char *cmd_buf, int *arg_index, int args_found )
{
    Debug_Test_Doc_Start ( "ComfortToneTest(Test001)" );
    Debug_Print ( "Optional info about the test case" );
    Debug_Test_Doc_End ( "ComfortToneTest(Test001)" );
}

4 Short Instructions for Achieving Module Tests in Host Modules

1. The following files should be stored locally (supplied in the Module Tester package)
   - debug_printout.c
   - debug_printout.h
   - style_debug.c
   - style_debug.h
   - osemain.con (not supplied in the package, take it from CME)
   - Add all of the files above to DescrExtra.cfg (located in LD_SubSystems_003)

2. Open style_debug.c in, for instance, Visual C++
   - Change all XX to a combination of letters that describes the module (for instance MC)
   - Change all XXX to the name of the module (for instance MMCTRL)
   - Include all files needed in the module testing
   - Use the method TEST_001 as a template for writing the test cases
   - All test cases should start with a call to DEBUG_TEST_START( )
   - All test cases must contain one or more of the following calls
     - DEBUG_TEST_PASSED( ) when the test case passed
     - DEBUG_TEST_FAILED( ) when the test case failed
     - DEBUG_TEST_MANUAL( ) when the test case has manual evaluation
   - All test cases should end with a call to DEBUG_TEST_END( )
   - Each test case can contain a variable number of printouts (as with printf( ))
     - DEBUG_PRINT( )
   - Use the method TEST_001_DOC as a template when writing the documentation for the test cases
   - All doc methods should start with a call to DEBUG_TEST_DOC_START( )
   - All documentation is written using DEBUG_PRINT( ) calls
   - All doc methods should end with a call to DEBUG_TEST_DOC_END( )
   - If the test has manual evaluation, make sure that the documentation contains the line below
     - DEBUG_PRINT("Manual evaluation( description shown when the test starts )");
   - Change the number of test cases in the variable shown below
     - static uint16 nbroftests = 1;
   - Add the lines below, in the interactive debug table, for each new test case
     - IDBG_TBL_CMD( TEST_001, "TEST_001" )
     - IDBG_TBL_CMD( TEST_001_DOC, "TEST_001_DOC" )
   - Be careful with the syntax for the different calls to Debug_Printout; refer to the test process

3. Open style_debug.h in, for instance, Visual C++
   - Change all XX to a combination of letters that describes the module (for instance MC)
   - Change all XXX to the name of the module (for instance MMCTRL)
   - Make sure that the table located at the end of the c file is externally declared (see below)
     - IDBG_TBL_EXTERN(XXX_DebugTable)

4. Open debug_printout.c in, for instance, Visual C++
   - Add a line for the test module as shown below
     - IDBG_TBL_SUB_DIR( XXX_DebugTable, "TEST_XXX" )
   - Make sure that the header file (style_debug.h) is included as shown below
     - #include "style_debug.h"

5. Open osemain.con in, for instance, Visual C++
   - Add the following two lines in a place you know executes (make sure that the lines are placed in the same order as shown below)
     - PRI_PROC(Printout_Process, Printout_Process, 500, 27, DEFAULT, 0, NULL)
     - PRI_PROC(XXX_Debug_Process, XXX_Debug_Process, 500, 27, DEFAULT, 0, NULL)

6. Save all files and compile with a suitable build (for instance EFREOLS_MODULE_TESTER_ARM)

7. Download the build to the platform and execute the Module Tester program

Appendix E:1 Register the Module as a Process

//*********************************************************************
//
// OS process handling the IDBG commands
//
//*********************************************************************
OS_PROCESS( Xxx_Process )
{
    union SIGNAL *RecPrimitive_p = NIL;

    Debug_Print ( "\nxxx_process started\n" );

    while (TRUE)
    {
        static const SIGSELECT SIGSEL[] = {0};

        RecPrimitive_p = RECEIVE(SIGSEL);

        if (RecPrimitive_p != NIL)
        {
            switch (RecPrimitive_p->Primitive)
            {
                default:
                {
                    FREE_BUF(&RecPrimitive_p);
                    break;
                }
            }
        }
    }
}

Appendix E:2 Init_Xx_Debug, Handle_Xx_DebugSignal

//=====================================================================
// Init_Xx_Debug (Required for registering the debug table.)
//=====================================================================
void Init_Xx_Debug(void)
{
    (void) Request_IDbg_Register(WAIT_RESPONSE);
} // END - Init_Xx_Debug

//=====================================================================
// Handle_Xx_DebugSignal (Required for the debug commands.)
//=====================================================================
boolean Handle_Xx_DebugSignal( union SIGNAL *psignal )
{
    return Do_IDbg_HandleSignal( &psignal, // type for sig_in is union SIGNAL *
                                 &Xx_Debug,
                                 Xx_DebugTable );
} // END - Handle_Xx_DebugSignal

Appendix E:3 Handle_Xx_ResponseSignal

//=====================================================================
// Handle_Xx_ResponseSignal (Check and handle if this is a response signal.)
//=====================================================================
boolean Handle_Xx_ResponseSignal( union SIGNAL *pinsignal )
{
    boolean CaseFound = true;

    switch( pinsignal->sig_no )
    {
        case SIGNAL_NBR_1:
        {
            // Handle signal number 1
            break;
        }
        case SIGNAL_NBR_2:
        {
            // Handle signal number 2
            break;
        }
        default:
        {
            CaseFound = false;
            break;
        }
    }
    return CaseFound;
} // END - Handle_Xx_ResponseSignal

Appendix F - Test Process DSP

1 Introduction
  1.1 Terminology
2 System Architecture Outline
  2.1 Modules in the host
    Interactive Debug
    Test_Module_Host
    Comm_Module_Host
    Debug Printout
  2.2 Modules in the DSP
    Comm_Module_DSP
    Test Module DSP
    Target Module DSP
3 Test Process
  3.1 Step by Step DSP
    Step 1: Choose an abbreviation for the module name
    Step 2: Required files
    Step 3: Inclusion guard in xx_debug.h
    Step 4: Include lines in xx_debug.c
    Step 5: Register the dsp test module as a process
    Step 6: Add a test case
    Step 7: Create a new load module on the DSP side
    Step 8: Compile the load module
    Step 9: Convert the build into a header file
    Step 10: Add DSP header file to DescrExtra.cfg
  3.2 Step by Step Host
    Step 1: Choose an abbreviation for the module name
    Step 2: Required files
    Step 3: Inclusion guard in xx_debug.h
    Step 4: Include lines in xx_debug.c
    Step 5: Handle PCM frames
    Step 6: Register the host test module as a process
    Step 7: Register the module with Interactive Debug
    Step 8: Handle debug and response signals
    Step 9: Add a test case
4 Short Instructions for Achieving Module Tests in DSP Modules
Appendix F1: Include Lines in xx_debug.c (dsp)
Appendix F2: Register the DSP Test Module as a Process
Appendix F3: Include Lines in xx_debug.c
Appendix F4: Handle PCM Frames
Appendix F5: Register the Host Test Module as a Process
Appendix F6: Init_Xx_Debug, Handle_Xx_DebugSignal
Appendix F7: Handle_Xx_ResponseSignal
Appendix F8: TEST_N_INIT
Appendix F9: TEST_N
Appendix F10: TEST_N_KILL
Appendix F11: TEST_N_DOC
Appendix F12: Test Case Registration (IDbg)

1 Introduction

The purpose of this test process is to prepare the test code for use with the application Module Tester (see the manual and other documents for MT) for tests in the DSP. With a fully completed test process the tester is able to make use of all the facilities in Module Tester, e.g. repeating tests an optional number of times and creating an automatically generated test report in Microsoft Word.

The idea is to test each module in the target isolated from the rest of the modules. This offers an environment that is free from distracting interference from other modules. Hence the testing is focused on only one module at a time and the functionality of that module. It is, however, possible to write test code and perform the test process for several modules, but only one module can be chosen in Module Tester for testing at a time.

There are two different kinds of tests allowed: the so-called return value tests and the vector comparison tests. A return value test can only be performed in the host and a vector test can only be performed in the DSP. In a return value test the target code is invoked from the test code. Usually a value is returned from the target code to the test code and that value should fulfil a condition. If the value fulfils the condition the test is considered to have passed, otherwise not. Note that this test process does not deal with these tests. For further information about return value tests and how to perform them, see Test Process HOST.

Vector comparison tests are just slightly different. A vector containing, for example, samples is sent to a function in the target module. The vector is used in an algorithm and the answer is sent back and compared with another vector, which is predefined. That predefined vector could be produced in, for example, MATLAB. How to write this kind of test is described in detail in this test process.

It is important that the test process is carefully followed in order to make it possible to use Module Tester. Note that the code in the appendices is not formatted to fit in this document, but rather to fit in Visual C++ or an equivalent code formatting tool.

1.1 Terminology

Definition/Abbreviation | Explanation
ARM                     | Advanced RISC Machine, main processor
DSP                     | Digital Signal Processor
Host                    | See ARM
IDbg                    | Interactive Debug
LabView                 | Graphical programming language
Module Tester           | The LabView application at the PC side
Target module           | Module containing the code to be tested
Test module             | Module containing the test code

2 System Architecture Outline

Even though the tests are to be performed in the DSP, code also has to be added in the host. There are four interesting modules in the host and three in the DSP. New code that is needed in the testing has to be added in Test_Module_Host, Debug Printout, Test Module DSP and Comm_Module_DSP. The other modules only have to be included in the code.

Figure F.1 System architecture

2.1 Modules in the Host

The modules in the host both administrate and evaluate test vectors. The input vector is handled and forwarded to the DSP, and when the processed vector returns from the DSP it is evaluated. The result of the evaluation is sent back to the PC.

Interactive Debug

It is possible to control the modules in the platform from the environment. This is done by sending commands in the format of strings from the PC,

using a terminal application, to the phone. These strings are received at the serial port on the phone and interpreted by a special process called Interactive Debug (IDbg). The strings are transformed into function calls routed by IDbg to the right module. The same terminal window can also receive the printouts executed in the phone. Nothing should be changed in the Interactive Debug module.

2.1.2 Test_Module_Host

This module contains four methods for every test case, so when a new test case is added four new methods should be added in this file.

The first method (init) is the initialisation of the test. It can have optional parameters, e.g. parameters to the algorithm that is to be executed in the DSP. The initialisation method also executes the entire test, but it does not evaluate the result of it.

The second method (test) is the main test method. Its only task is to evaluate the result of the init method described above. The method returns the result to the PC.

The third method (kill) ends the test session and sends optional parameters to the kill method in the DSP.

The fourth and last method is the documentation method (doc). This method contains all the documentation and returns it when the method is invoked.

The samples that are to be tested, sent from the PC via the Interactive Debug module, are received in the Test Module at the host side.

2.1.3 Comm_Module_Host

The communication module at the host side sends the frames to the DSP. After a frame has been processed at the DSP side it is sent back to the host side and received by the communication module. The main task of this module is to route the test case method calls that are made in the host to the DSP.

2.1.4 Debug Printout

All signals that are sent to the PC are in the format of strings. They are handled as printouts directed to the serial port of the phone. Since Module Tester also requires the type of the signal, it has to be merged together with the string.
The Debug Printout process does this. It takes the string that is to be sent and adds information to it depending on which method is invoked in Debug Printout. This does not affect the logical path of the signal.

2.2 Modules in the DSP

The actual test code and the target module are located in the DSP. A request to execute one of the specific test cases in the DSP is received from the host. The request is in the form of a method call that is sent via the DSP Comm modules.

2.2.1 Comm_Module_DSP

This module receives a request, probably containing a frame. The request is routed further to the right method in the test module.

2.2.2 Test Module DSP

This module receives a frame from the communication module at the DSP side. The frame is sent further to the target module. The answer that the test module receives from the target module is sent back to the communication module.

2.2.3 Target Module DSP

This module receives a frame from the test module and processes that frame in an algorithm. The answer is sent back to the test module.

3 Test Process

Note that a concise test process can be found in 4 Short Instructions for Achieving Module Tests in DSP Modules.

3.1 Step by Step DSP

This part of the guide illustrates the implementation of a test case, file by file, on the DSP side. The corresponding guide for the host side can be found in 3.2 Step by Step Host. The guide shows how to implement the test code and other files without using prewritten shell files; how to do it with the shell files is shown in 4 Short Instructions for Achieving Module Tests in DSP Modules below. The shell files contain most of the fundamental code needed in a test case implementation. The idea is to let the test writer download the files, change their names and then change some parts of the code that is written in them.

Step 1: Choose an abbreviation for the module name

If a suitable abbreviation was chosen in Step 1 in 3.2 Step by Step Host this step can be skipped. Abbreviations are used in several places in this document and in the test code. Choose an abbreviation for the module name. Throughout this document xx is used and it is supposed to be changed to the abbreviation previously chosen. It is important to be case sensitive. This means, for example, that if the abbreviation nr (as in noise reduction) is used, it is important that xx is changed to nr and not to NR. When Xx is used it should be replaced with Nr, and so on.
Whenever Xxx is used in this document and in the test code, the name of the test module is intended, i.e. the name of the module containing the test code.

This means that Xxx should be replaced with the name of the test module wherever it appears.

Step 2: Required files

Several files are needed at the DSP side, as seen in the system architecture (see Figure F.1 above).

(I) TEST_MODULE_DSP: xx_debug.c, xx_debug.h
Create these two files and use the abbreviation discussed in Step 1 above when naming them.

(II) DSP_COMM_DSP: dsp_comm.c, dsp_comm.h
Small changes need to be made to dsp_comm.c.

(III) TARGET MODULE
These files are the actual files to be tested. They should be invoked from the files in TEST MODULE.

Step 3: Inclusion guard in xx_debug.h

Add the following two lines before the first line of xx_debug.h.

#ifndef XX_DEBUG_H
#define XX_DEBUG_H

Add the following line after the last line of xx_debug.h.

#endif

Step 4: Include lines in xx_debug.c

Add the lines found in Appendix F1 in the beginning of xx_debug.c. Change xx to the chosen abbreviation. It is suitable to also add the include lines that are specific to the test code here.

Step 5: Register the DSP test module as a process

The module containing the test code has to be registered as a process. Follow the steps below.

(I) Add the following line to the beginning of xx_debug.h

#include <sigbase.h>

(II) Add the following line to xx_debug.h, between the #define and the #endif described in Step 3 above.

extern PROCESS XX_debug_;

(III) Add the following lines in the beginning of xx_debug.c

union SIGNAL
{
    SIGSELECT SigNo;
};

(IV) Add the lines found in Appendix F2 in the end of xx_debug.c. Nothing in the code needs to be changed.

Step 6: Add a test case

The xx_debug.c file mainly consists of three methods: TEST_INIT, TEST_CASE and TEST_KILL. When the first test case is created these methods also need to be created, but when any following test case is implemented the methods should only be modified. All three methods also need to be declared in xx_debug.h.

(I) The initialisation method initialises the test with the chosen parameters (if any are used). The test writer should invoke a set-up method in the target code here, if one exists. Even if the method is empty it must still be implemented. Add an if-case to the code for each test case.

void TEST_INIT( int testnbr, char *cmd_buf, int *arg_index, int args_found )
{
    if (testnbr == 1)
    {
        // Calls to set up the module that is to be tested in
        // test case 1
    }
    else if (testnbr == 2)
    {
        // Calls to set up the module that is to be tested in
        // test case 2
    }
    else if (testnbr == N)
    {
        // Calls to set up the module that is to be tested in
        // test case N
    }
}

(II) The test method invokes the method in the target containing the algorithm that is to be tested. outpcm is the frame that must be used in the test.

Add an if-case to the code for each test case.

void TEST_CASE( int testnbr, char *cmd_buf, int *arg_index, int args_found, int16bit* outpcm )
{
    if (testnbr == 1)
    {
        // Calls to execute test case 1
    }
    else if (testnbr == 2)
    {
        // Calls to execute test case 2
    }
    else if (testnbr == N)
    {
        // Calls to execute test case N
    }
}

(III) The kill method de-allocates all the resources that have been used in the test method in the target. The test writer should invoke a de-allocation method in the target code here, if one exists. Even if the method is empty it must still be implemented. Add an if-case to the code for each test case.

void TEST_KILL( int testnbr, char *cmd_buf, int *arg_index, int args_found )
{
    if (testnbr == 1)
    {
        // Calls to tear down the module that was tested in
        // test case 1
    }
    else if (testnbr == 2)
    {
        // Calls to tear down the module that was tested in
        // test case 2
    }
    else if (testnbr == N)
    {
        // Calls to tear down the module that was tested in
        // test case N
    }
}

Step 7: Create a new load module on the DSP side

Every load module needs a corresponding appcon file and Makefile. These files need to be created manually by the test writer. The files can be written from scratch, or already existing files can be modified. Existing appcon files and Makefiles can be found in cnh _dsp_software.

(I) Create a new appcon file or modify an existing one.

(II) Open the appcon file and add the following two lines so that the different processes are initiated

PRI_PROC(0, dsp_comm, dsp_comm, 1500, 16)
PRI_PROC(0, XX_debug, XX_debug, 450, 16)

(III) Create a new Makefile for the load module or modify one that already exists.

(IV) Open the Makefile and change the following line so that the appcon file mentioned above is read

CONFIG = XX_test-appcon.con

(V) Open DescrExtra.cfg. This file is found in LD_SubSystems_003 in cme. Add the following line

c:\ folder \XX_test-appcon.con

(VI) Open the Makefile for cnh _dsp_software and add the following two lines in the Libs section

dsp_comm.lib\
XX_debug.lib\

Step 8: Compile the load module

Compile the load module, in for instance cygwin, using the following command

make CONFIG=XX_test

Step 9: Convert the build into a header file

Convert the build into a header file using the program a01toc.

Step 10: Add the DSP header file to DescrExtra.cfg

Add the created header file to your locally stored files in DescrExtra.cfg (located in LD_SubSystems_003) in the ARM build.

3.2 Step by Step Host

This part of the guide illustrates the implementation of a DSP test case, file by file, on the host side. The corresponding guide for the DSP side can be found in 3.1 Step by Step DSP. This guide shows how to do this without prewritten shell files; how to do it with the shell files is shown in 4 Short Instructions for Achieving Module Tests in DSP Modules below. The shell files contain most of the fundamental code needed in a test case implementation. The idea is to let the test writer download the files, change their names and then change some parts of the code that is written in them.

Step 1: Choose an abbreviation for the module name

If a suitable abbreviation was chosen in Step 1 at the DSP side this step can be skipped. Abbreviations are used in several places in this document and in the test code. Choose an abbreviation for the module name. Throughout this document xx is used and it is supposed to be changed to the abbreviation previously chosen. It is important to be case sensitive when this is done. This means, for example, that if the abbreviation mmctrl (as in multimedia control) is used, it is important that xx is changed to mmctrl and not to MMCTRL. When Xx is used it should be replaced with Mmctrl, and so on. Whenever Xxx is used in this document and in the test code, the name of the test module is intended, i.e. the name of the module containing the test code. This means that Xxx should be replaced with the name of the test module wherever it appears.

Step 2: Required files

Several files are needed on the host side, as seen in the system architecture (see Figure F.1 above).

(I) TEST_MODULE_HOST: xx_debug.c, xx_debug.h
Create these two files and use the abbreviation discussed in Step 1 when naming them.

(II) DEBUG_PRINTOUT: debug_printout.c, debug_printout.h
Both files can be found in [cme path]. Some changes have to be made in the debug printout files.

(III) DSP_COMM_HOST: dsp_comm.c, dsp_comm.h
Both files can be found in [cme path]. No changes need to be made in the dsp comm files.

(IV) INTERACTIVE DEBUG: r_idbg.h, u_idbg.h
These files should only be included in some of the other files. No changes need to be made to Interactive Debug.

Step 3: Inclusion guard in xx_debug.h

Add the following two lines to the very beginning of xx_debug.h.

#ifndef INCLUSION_GUARD_XX_DEBUG_H
#define INCLUSION_GUARD_XX_DEBUG_H

Add the following line to the very end of xx_debug.h.

#endif

Step 4: Include lines in xx_debug.c

Add the lines found in Appendix F3 in the beginning of xx_debug.c. Change XX to the chosen abbreviation and N to the current number of test cases.

Step 5: Handle PCM frames

PCM frame handling code is needed, e.g. for deciding where to send the frames and what to do with errors that may arise. All of this code is already written and no changes should be made to it. Add the code found in Appendix F4 to xx_debug.c. Nothing in the code needs to be changed.

Step 6: Register the host test module as a process

The module containing the test code has to be registered as a process. This has to be done in order to make it possible to register the code with Interactive Debug. Follow the steps below.

(I) Add the following lines to the beginning of the header file of the module

#include "r_os.h"
#include "t_basicdefinitions.h"

(II) Add the following line to the file OSEMAIN.CON

PRI_PROC ( Printout_Process, Printout_Process, 500, 27, DEFAULT, 0, NULL )

This file should already be located on the user's local hard disk drive. If that is not the case, find the file in cme2 and place it in a proper location on the hard disk drive. Then add the following line to the file DescrExtra.cfg

c:\ folder \osemain.con

DescrExtra.cfg is also found in cme2.

(III) Add this line after the one in (II). Be sure that it is not already included in DescrExtra.cfg.

PRI_PROC ( Xxx_Process, Xxx_Process, 500, 27, DEFAULT, 0, NULL )

(IV) Insert the code located in Appendix F5, preferably in the end of the module code, and exchange xxx.

Step 7: Register the module with Interactive Debug

The test module has to be registered with Interactive Debug. It should also be prepared for setting up Interactive Debug tables. That is done with macros. Follow the steps below.

(I) Add the following lines to the beginning of xx_debug.h

#include "r_idbg.h"
#include "r_debug.h"

Add the following lines to the beginning of xx_debug.c

#include "u_idbg.h"
#include "r_idbg.h"

(II) The following line should be added in the beginning of the code in xx_debug.h. It declares a main directory extern for the process in the Interactive Debug table structure.

IDBG_TBL_EXTERN (XX_DebugTable);

(III) To register the main directory discussed in (II) above, the following line should be added directly after IDBG_TBL_START( Test_Sub ) in debug_printout.c:

IDBG_TBL_SUB_DIR( XX_DebugTable, "TEST_XXX" )

(IV) Include the xx_debug.h file in the beginning of debug_printout.c by adding the following line

#include "path/xx_debug.h"

Be sure that the path to the file is correct. It is suitable to place the line directly after #include "debug_printout.h"

Step 8: Handle debug and response signals

The test module has to be able to handle incoming debug and response signals.

(I) The methods Init_XX_Debug and Handle_XX_DebugSignal have to be inserted somewhere in the module code, preferably just before the OS_PROCESS method. Both methods merely call other methods in the system, in contrast to Handle_Xx_ResponseSignal (see below). Hence, they should not be modified. The code is located in Appendix F6.

(II) The method Handle_XX_ResponseSignal has to be inserted somewhere in the module code, preferably just before the OS_PROCESS method. Instead of using this method the response signals can be handled in the OS_PROCESS. In this example they are

not; consequently the switch structure only contains the default case, which is only carried out when the signal is unknown. An example of how the code could be written is located in Appendix F7.

(III) Add the following line to the beginning of the OS_PROCESS method in the module.

Init_Xx_Debug();

After that, the module code should look like this

OS_PROCESS ( Xxx_Process )
{
    union SIGNAL *RecPrimitive_p = NIL;

    Init_Xx_Debug();
    Debug_Print("\nXxx_Process started\n");
}

(IV) Add the following lines to the OS_PROCESS method in the module, and do not forget to insert the curly brackets indicating their ends (not shown below). Note that the response signals are handled in the method Handle_Xx_ResponseSignal and not in the OS_PROCESS.

// Test if debug signal
if ( !Handle_Xx_DebugSignal( RecPrimitive_p ) )
{
    // Test if Response signal i.e. for OPA
    if ( !Handle_Xx_ResponseSignal( RecPrimitive_p ) )
    {

After this is done, the OS_PROCESS in the module code should look something like this

RecPrimitive_p = RECEIVE(SIGSEL);
if (RecPrimitive_p != NIL)
{
    // Test if debug signal
    if ( !Handle_Xx_DebugSignal( RecPrimitive_p ) )
    {
        // Test if Response signal i.e. for OPA
        if ( !Handle_Xx_ResponseSignal( RecPrimitive_p ) )
        {
            switch (RecPrimitive_p->Primitive)
            {
                default:
                {

(V) Insert the following lines in the end of xx_debug.h

void Init_XX_Debug( void );
boolean Handle_XX_DebugSignal( union SIGNAL *psignal );
boolean Handle_XX_ResponseSignal( union SIGNAL *pinsignal );

Step 9: Add a test case

For each new test case that is added, four new methods need to be added in the xx_debug.c file: every test case needs its own init, test, kill and doc method. For every new test case, a variable indicating the number of test cases needs to be increased. Every test case also has to be registered with Interactive Debug and prepared for setting up Interactive Debug tables. That is done with macros.

(I) One initialisation method is required. This method initialises the test with the chosen parameters (if any are used) and actually executes the test. The evaluation of the test is performed in the test method (TEST_N) mentioned below. Place the init code found in Appendix F8 somewhere in the file. Nothing except the number of the test case, N, needs to be changed in this method. N always has to consist of three digits: for example, for test case number 1, N should be exchanged with 001, and for test case 11 it should be exchanged with 011.

static void TEST_N_INIT( char *cmd_buf, int *arg_index, int args_found )
{
    // test initialisation code
}

(II) One test evaluation method is required. This method evaluates the result of the test, i.e. examines the error code that was produced during the test init method (TEST_00N_INIT) mentioned above. Place the evaluation code found in Appendix F9 somewhere in the file. Change N to the number of the test case and exchange XX for the chosen abbreviation so the printouts in the code refer to the current test case. N always has to consist of three digits: for example, for test case number 1, N should be exchanged with 001, and for test case 11 it should be exchanged with 011.

static void TEST_N( char *cmd_buf, int *arg_index, int args_found )
{
    // test evaluation code
}

(III) One test kill method is required. This method de-allocates all the resources that have been used in the test. Place the kill code found in Appendix F10 somewhere in the file.
Nothing except the number of the test case, N, needs to be changed in this method. N always has to consist of three digits. For

example, for test case number 1, N should be exchanged with 001, and for test case 11 it should be exchanged with 011.

static void TEST_N_KILL( char *cmd_buf, int *arg_index, int args_found )
{
    // de-allocate resources used in the test case
}

(IV) One documentation method is required. This method contains the documentation of the test case and sends it to the PC whenever it is requested. Place the documentation code found in Appendix F11 somewhere in the file. Change N to the number of the test case, both in the method name and in the method code. Change XX to the chosen abbreviation so the printouts refer to the current test case. Add the documentation between Debug_Test_Doc_Start and Debug_Test_Doc_End using the usual debug printout methods (Debug_Print).

(V) When a new test case is written it is important to update the nbroftests variable indicating the number of test cases. If this is not done the last test case will not be shown in Module Tester. Update the following line, or add it if it does not already exist; N should be replaced with the current number of test cases. It is suitable to place the line somewhere in the beginning of the code.

static uint16 nbroftests = N;

(VI) Every test case has to be defined in the Interactive Debug table structure. The following macros should be inserted in the end of the code, where N should be replaced with the new test case number. If there are no Interactive Debug macros at all, the code in Appendix F12 should be placed in the end of the code.

IDBG_TBL_CMD( TEST_00N, "TEST_00N" )
IDBG_TBL_CMD( TEST_00N_DOC, "TEST_00N_DOC" )
IDBG_TBL_CMD( TEST_00N_INIT, "TEST_00N_INIT" )
IDBG_TBL_CMD( TEST_00N_KILL, "TEST_00N_KILL" )

4 Short Instructions for Achieving Module Tests in DSP Modules

The shell files contain most of the fundamental code needed in a test case implementation. The idea is to let the test writer download the files, change their names and then change some parts of the code that is written in them.

1. The following files should be stored locally (supplied in the Module Tester package) (for the ARM build)
- debug_printout.c
- debug_printout.h
- dsp_comm.c
- dsp_comm.h
- arm_style_debug.c
- arm_style_debug.h
- osemain.con (not supplied in the package, take from CME)
- Add all of the files above to DescrExtra.cfg (located in LD_SubSystems_003)

2. The following files should be stored locally (supplied in the Module Tester package) (for the DSP build)
- dsp_comm.c
- dsp_comm.h
- dsp_style_debug.c
- dsp_style_debug.h
- Makefile for the load module (see 3.) (not supplied in the package, take from CME)
- Con-file for the load module (see 3.) (not supplied in the package, take from CME)

3. Create a new load module on the DSP side. Open the Makefile for the load module and change the following line so that the correct appcon file is read (please change XX to a combination of letters that describes the module),
- CONFIG = XX_test-appcon.con
Open the Makefile for cnh _dsp_software and add the following two lines in the Libs section (please change XX to a combination of letters that describes the module),
- dsp_comm.lib \
- XX_debug.lib \
Open the appcon file and add the following two lines to initiate the different processes (please change XX to a combination of letters that describes the module),
- PRI_PROC(0, dsp_comm, dsp_comm, 1500, 16)
- PRI_PROC(0, XX_debug, XX_debug, 450, 16)

4.
Open dsp_style_debug.c in for instance Visual C++
- Change all XX to a combination of letters that describes the module (for instance NR)
- Include all files needed in the module testing
- Use the method TEST_INIT as a template for writing the test case initiations

- Add more cases as the number of test cases increases
- Make sure that all of the initiation needed to perform the test is set up during init
- Use the method TEST_CASE as a template for writing the test case executions
- Add more cases as the number of test cases increases
- Use the method TEST_KILL as a template for writing the test case tear-downs
- Add more cases as the number of test cases increases
- Make sure that all of the de-allocations are performed during kill
- If the test case is a non-PCM-frame test then please follow the same syntax as for host tests
- Make sure that dsp_comm.h is included

5. Open dsp_style_debug.h in for instance Visual C++
- Change all XX to a combination of letters that describes the module (for instance NR)
- Be sure that the process is externally declared (see below)
- extern PROCESS XX_debug_;

6. Open dsp_comm.c in for instance Visual C++
- Make sure that the test module is included (please change XX to a combination of letters that describes the module)
- #include "XX_debug.h"

7. Compile the load module in for instance cygwin using the command make CONFIG=XX_test (please change XX to a combination of letters that describes the module)

8. Convert the build into a header file using the program a01toc

9. Add the created header file to your locally stored files in DescrExtra.cfg (located in LD_SubSystems_003) in the ARM build

10.
Open arm_style_debug.c in for instance Visual C++
- Change all XX to a combination of letters that describes the module (for instance NR)
- Change all XXX to the name of the module (for instance NOISE)
- Make sure that the following files are included
- #include "dsp_comm.h"
- #include "debug_printout.h"
- Use the method TEST_001_INIT as a template for writing the test case initiations
- Make sure that every test case has an init method that initiates any variables needed for the test
- Make sure that an init command is sent to the DSP as well
- Use the method TEST_001_KILL as a template for writing the test case tear-downs
- Make sure that every test case has a kill method that tears down anything used in the test
- Make sure that a kill command is sent to the DSP as well

- Use the method TEST_001 as a template for writing the test cases
- All test cases should start with a call to DEBUG_TEST_START( )
- All test cases must contain one or more of the following calls
- DEBUG_TEST_PASSED( ) when the test case passed
- DEBUG_TEST_FAILED( ) when the test case failed
- DEBUG_TEST_MANUAL( ) when the test case has manual evaluation
- All test cases should end with a call to DEBUG_TEST_END( )
- Each test case can contain a variable number of printouts (like for instance printf( ))
- DEBUG_PRINT( )
- If the test is a PCM-frame test the result is returned from the following method,
- PCM_FRAME_RESULT();
- The method above returns a vector with five integers (see the test process for more details)
- Use the method TEST_001_DOC as a template when writing the documentation for the test cases
- All doc methods should start with a call to DEBUG_TEST_DOC_START( )
- All documentation is written using DEBUG_PRINT( ) calls
- All doc methods should end with a call to DEBUG_TEST_DOC_END( )
- If the test has manual evaluation make sure that the documentation contains the line below
- DEBUG_PRINT("Manual evaluation( description shown when the test starts )");
- Change the number of test cases in the variable shown below
- static uint16 nbroftests = 1;
- Add the lines below for each new test case. The lines are added in the Interactive Debug table
- IDBG_TBL_CMD( TEST_001, "TEST_001" )
- IDBG_TBL_CMD( TEST_001_DOC, "TEST_001_DOC" )
- IDBG_TBL_CMD( TEST_001_INIT, "TEST_001_INIT" )
- IDBG_TBL_CMD( TEST_001_KILL, "TEST_001_KILL" )
- Be careful with the syntax for the different calls to Debug_Printout; refer to the test process

11.
Open arm_style_debug.h in for instance Visual C++
- Change all XX to a combination of letters that describes the module (for instance NR)
- Change all XXX to the name of the module (for instance NOISE)
- Be sure that the table located in the end of the c-file is externally declared (see below)
- IDBG_TBL_EXTERN(XXX_DebugTable)

12. Open debug_printout.c in for instance Visual C++

- Add a line for the test module as shown below (please change XXX to a combination of letters that describes the module)
- IDBG_TBL_SUB_DIR( XXX_DebugTable, "TEST_XXX" )
- Make sure that the header file is included (arm_style_debug.h) as shown below
- #include "arm_style_debug.h"

13. Open osemain.con in for instance Visual C++
- Add the following three lines in a place you know executes (please make sure that the lines are placed in the same order as shown below) (please change XXX to a combination of letters that describes the module)
- PRI_PROC(Printout_Process, Printout_Process, 500, 27, DEFAULT, 0, NULL)
- PRI_PROC(XXX_Debug_Process, XXX_Debug_Process, 500, 27, DEFAULT, 0, NULL)
- PRI_PROC(DSP_COMM_Process, DSP_COMM_Process, 500, 27, DEFAULT, 0, NULL)

14. Save all files and compile with a suitable build (for instance EFREOLS_MODULE_TESTER_DSP)

15. Download the build to the platform and execute the Module Tester program

Appendix F1: Include Lines in xx_debug.c (DSP)

#include <stddef.h>
#include <assert.h>
#include <ose.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <dspmem.h>
#include <errbase.h>
#include <chan.h>
#include <p_types.h>
#include <chanid.h>
#include <log.h>
#include "../dsp_comm/dsp_comm.h"
#include "xx_debug.h"

Appendix F2: Register the DSP Test Module as a Process

OS_PROCESS(XX_debug)
{
    static const SIGSELECT SelectAny[] = {0};
    union SIGNAL *SigIn;

    for (;;)
    {
        SigIn = receive(SelectAny);
        free_buf(&SigIn);
    }
}

Appendix F3: Include Lines in xx_debug.c

#include "c_system.h"
#include "stdlib.h"
#include <math.h>
#include "string.h"
#include "wchar.h"
#include "xmalloc.h"
#include "r_hal_dspif.h"
#include "t_hal_dspif.h"
#include "t_hal_dspif_id.h"
#include "t_vcrv.h"
#include "r_os.h"
#include "t_basicdefinitions.h"
#include "r_inputoutput.h"
#include "r_debug.h"
#include "r_sys.h"
#include "r_fsu.h"
#include "r_gvi.h"
#include "r_vie.h"
#include "r_dsp_rm.h"
#include "dsp_comm.h"
#include "debug_printout.h"
#include "XX_debug.h"

typedef short Word16;
typedef Word16 int16bit;

static IDbg_Context_t XX_Debug;
static uint16 nbroftests = 1;

Appendix F4: Handle PCM Frames

uint16 nbrofpcm = 0;
uint16 currentpcm = 0;
int nbrofsamples;
uint16 samples_in1[160];
uint16 samples_in2[160];
int place_in = 0;
uint16 samples_out1[160];
uint16 samples_out2[160];
int place_out = 0;
int currentdspframe = 0;

static void PCM_FRAME_SENDER(char *cmd_buf, int *arg_index, int args_found)
{
    char x[8];
    int k;
    int i;

    for (k = 0; k < args_found; k++)
    {
        x[0] = '0';
        x[1] = 'x';
        for (i = 0; i < 4; i++)
        {
            x[i+2] = cmd_buf[arg_index[k]+i];
        }
        x[i+2] = '\0';

        if (place_in < 160)
            samples_in1[place_in++] = strtol(x, NULL, 16);
        else
            samples_in2[(place_in++) - 160] = strtol(x, NULL, 16);

        if (place_in == 160)
        {
            PCM_FRAME_DSP_SENDER(samples_in1, samples_out1);
            currentpcm++;
        }
        if (place_in == 320)
        {
            PCM_FRAME_DSP_SENDER(samples_in2, samples_out2);
            currentpcm++;
            place_in = 0;
        }
    }
}

static void PCM_FRAME_OUTVECT(char *cmd_buf, int *arg_index, int args_found)
{
    char x[8];
    int k;
    int i;

    if (place_out == 160 || place_out == 0)
        if (Get_Nbr_Of_Frames() > 0)
            Sub_Nbr_Of_Frames();

    for (k = 0; k < args_found; k++)
    {
        x[0] = '0';
        x[1] = 'x';
        for (i = 0; i < 4; i++)
        {
            x[i+2] = cmd_buf[arg_index[k]+i];
        }
        x[i+2] = '\0';

        if (place_out < 160)
            samples_out1[place_out++] = strtol(x, NULL, 16);
        else
            samples_out2[(place_out++) - 160] = strtol(x, NULL, 16);

        if (place_out == 320)
            place_out = 0;
    }
}

static void PCM_FRAME_RESET(char *cmd_buf, int *arg_index, int args_found)
{
    IDENTIFIER_NOT_USED(cmd_buf);
    IDENTIFIER_NOT_USED(arg_index);
    IDENTIFIER_NOT_USED(args_found);

    if ((place_in < 160 && place_in > 0) || (place_out < 160 && place_out > 0))
    {
        place_in = 0;
        place_out = 0;
    }
    else if ((place_in < 320 && place_in > 160) || (place_out < 320 && place_out > 160))
    {
        place_in = 160;
        place_out = 160;
    }
}

static void GET_RECIEVED_FRAME(char *cmd_buf, int *arg_index, int args_found)
{
    IDENTIFIER_NOT_USED(cmd_buf);
    IDENTIFIER_NOT_USED(arg_index);
    IDENTIFIER_NOT_USED(args_found);

    Request_IDbg_Printf(WAIT_RESPONSE, GET_RECIEVED_DSP_FRAME());
}

Appendix F5: Register the Host Test Module as a Process

OS_PROCESS( XXX_Test_Process )
{
    union SIGNAL *RecPrimitive_p = NIL;

    Debug_Print("\nXXX_Process started\n");

    while (TRUE)
    {
        static const SIGSELECT SIGSEL[] = {0};

        RecPrimitive_p = RECEIVE(SIGSEL);
        if (RecPrimitive_p != NIL)
        {
            switch (RecPrimitive_p->Primitive)
            {
                default:
                {
                    printf("\nUNKNOWN_PRIMITIVE RECEIVED BY XXX_TEST_PROCESS\n");
                    printf("\n%d", RecPrimitive_p->Primitive);
                    FREE_BUF(&RecPrimitive_p);
                    break;
                }
            }
        }
    }
}

Appendix F6: Init_Xx_Debug, Handle_Xx_DebugSignal

void Init_Xx_Debug(void)
{
    (void) Request_IDbg_Register(WAIT_RESPONSE);
} // END - Init_Xx_Debug

boolean Handle_Xx_DebugSignal( union SIGNAL *psignal )
{
    return Do_IDbg_HandleSignal( &psignal, // type for sig_in is union SIGNAL *
                                 &Xx_Debug,
                                 Xx_DebugTable );
} // END - Handle_Xx_DebugSignal

Appendix F7: Handle_Xx_ResponseSignal

boolean Handle_Xx_ResponseSignal( union SIGNAL *pinsignal )
{
    boolean CaseFound = true;

    switch( pinsignal->sig_no )
    {
        case SIGNAL_NBR_1:
        {
            // Handle signal number 1
            break;
        }
        case SIGNAL_NBR_2:
        {
            // Handle signal number 2
            break;
        }
        default:
        {
            CaseFound = false;
            break;
        }
    }
    return CaseFound;
}

Appendix F8: TEST_N_INIT

static void TEST_N_INIT(char *cmd_buf, int *arg_index, int args_found)
{
    place_in = 0;
    place_out = 0;
    currentpcm = 0;

    DSP_COMM_INIT();
    PCM_FRAME_SETUP(1, INIT_MODULE, cmd_buf, arg_index, args_found);
}

Appendix F9: TEST_N

static void TEST_N(char *cmd_buf, int *arg_index, int args_found)
{
    int *errorcodes;

    IDENTIFIER_NOT_USED(cmd_buf);
    IDENTIFIER_NOT_USED(arg_index);
    IDENTIFIER_NOT_USED(args_found);

    Debug_Test_Start("XX_TEST(Test_00N)");
    Debug_Print("Number of frames sent to XX: %d\n", currentpcm);

    errorcodes = PCM_FRAME_RESULT();
    if (errorcodes[0])
    {
        Debug_Print("All frames are correct");
        Debug_Test_Passed("XX_TEST");
    }
    else
    {
        Debug_Print("XX failed for the test vector");
        Debug_Print("First error occurred in frame %d sample %d",
                    errorcodes[1], errorcodes[2]);
        Debug_Print("\nReceived sample from XX: %d\nExpected sample from XX: %d\n",
                    errorcodes[3], errorcodes[4]);
        Debug_Test_Failed("XX_TEST");
    }
    Debug_Test_End("XX_TEST(Test_00N)");
}

Appendix F10: TEST_N_KILL

static void TEST_N_KILL(char *cmd_buf, int *arg_index, int args_found)
{
    PCM_FRAME_SETUP(1, KILL_MODULE, cmd_buf, arg_index, args_found);
}

Appendix F11: TEST_N_DOC

static void TEST_N_DOC(char *cmd_buf, int *arg_index, int args_found)
{
    Debug_Test_Doc_Start("TestName (Test_N)");
    Debug_Print("Description of the test case");
    Debug_Test_Doc_End("TestName (Test_N)");
}

Appendix F12: Test Case Registration (IDbg)

IDBG_TBL_START( XX_DebugTable )
    IDBG_TBL_CMD( TEST_N,             "TEST_N" )
    IDBG_TBL_CMD( TEST_N_DOC,         "TEST_N_DOC" )
    IDBG_TBL_CMD( TEST_N_INIT,        "TEST_N_INIT" )
    IDBG_TBL_CMD( TEST_N_KILL,        "TEST_N_KILL" )
    IDBG_TBL_CMD( PCM_FRAME_RESET,    "PCM_FRAME_RESET" )
    IDBG_TBL_CMD( PCM_FRAME_OUTVECT,  "PCM_FRAME_OUTVECT" )
    IDBG_TBL_CMD( PCM_FRAME_SENDER,   "PCM_FRAME_SENDER" )
    IDBG_TBL_CMD( GET_RECIEVED_FRAME, "GET_RECIEVED_FRAME" )
    IDBG_TBL_VAR_UDEC( 0, nbroftests, "NbrOfTests" )
IDBG_TBL_END

Appendix G: Module Tester Users Manual

1. Introduction
2. Communication Port and Baud Rate
3. Modules Currently Available
4. Configuration Information
5. Test Execution
6. Test Parameters
7. Number of Execution Loops
8. Documentation
9. Manual Test Evaluation
10. Terminal Window
11. Printouts
12. Report

1. Introduction

The purpose of Module Tester is to test the code of a single module in the target in isolation from the other modules. This offers an environment free from distracting interference from other modules, so the focus needs to be only on the module being tested and its specific functionality. Equally important is that the tests are executed automatically. It is, for example, possible to repeat a test an arbitrary number of times and create a full test report with a single button click.

Every test that is composed in the prescribed way can be run in Module Tester. Two different kinds of tests are allowed: the so-called return value test and the vector comparison test. The idea of a return value test is to invoke the target code from the test code and retrieve a value that should fulfil a condition; if it does, the test is considered to have passed, otherwise not. The test code should define where the test starts and ends, and there can also be optional printouts giving additional information. Vector comparison tests are only slightly different. A vector with, for example, samples is sent to a function in the target module. The vector is used in an algorithm and the answer is sent back and compared with a predefined vector. That predefined vector could, for example, be produced in MATLAB.

How to write such test code is described in detail in the test processes for both host tests (see Test Process HOST) and DSP tests (see Test Process DSP). These processes have to be followed carefully in order to make it possible to use Module Tester. It is possible to prepare testing for several modules in advance, but only one module at a time can be chosen for testing.

There are references, in the instructions for each function, to the corresponding text boxes that are used in the images. The purpose of these is to illustrate in detail the working procedure of every function.
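The core of a vector comparison test as described above can be sketched as follows (a minimal sketch; compare_vectors is an illustrative name, not the actual Module Tester API):

```c
#include <stddef.h>

/* Illustrative sketch of a vector comparison test: the module under test
 * fills an output vector, which is compared sample by sample against a
 * predefined reference vector (e.g. produced in MATLAB). Returns the index
 * of the first mismatch, or -1 if the vectors are identical. */
static int compare_vectors(const unsigned short *actual,
                           const unsigned short *expected,
                           size_t len)
{
    size_t i;

    for (i = 0; i < len; i++)
    {
        if (actual[i] != expected[i])
            return (int) i;   /* first sample that differs */
    }
    return -1;                /* all samples match: the test passes */
}
```

In the real test process the frame and sample of the first mismatch are reported through the debug printouts, as shown in Appendix F9.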

2. Communication Port and Baud Rate

Choose the serial port (1) and baud rate (2) to use. Make sure that the connected device supports the chosen baud rate. If the connection fails, an error message appears and the settings should be changed.

1. Serial port
2. Baud rate

Figure G.1 Sets the communication port and baud rate.

Press OK to go on to choose the test module, or Quit to exit the program.

3. Modules Currently Available

The available test modules are listed in the image below. Choose the one that is going to be tested. Note that only one module at a time can be chosen. It is possible to return to this screen later on.

1. Search for available modules once again
2. Press to go on

Figure G.2 Currently available modules.

If the expected module does not appear in the list, there were probably complications during the test process (see Test Process HOST or Test Process DSP). Redo the test process and press the Update button (1) to view the available modules once again. Choose the module to be tested by pressing the green button placed next to the module name (2).

4. Configuration Information

Every module has configuration information about software, hardware, operator and version. This is shown in the image below (1) together with the name of the module and the number of test cases that are available.

1. Configuration information

Figure G.3 Configuration file.

It is possible to change the information to match the current test characteristics. Do not change the fields containing information about the module name and the number of tests available. The first time the module is executed the configuration file is empty, and it is up to the tester to fill it in. The fields can, of course, also be left unmodified.

Press OK to go on to the main execution screen, or Cancel to go back to the previous screen and change which module to test.

5. Test Execution

The available test cases for the chosen module are listed (1). Every test case is pre-selected and ready for test execution. Test cases are removed from the execution sequence by unmarking them in the column named Run? (2). Any parameters have to be defined before execution. For help with the parameters, see 6. Test Parameters.

1. Name of an available test case
2. Highlight button to include test in execution sequence
3. Test result
4. Number of times the test case has executed so far

Figure G.4 Test execution screen.

The result of the test execution is shown for each test case in the column named Result (3). The only alternatives are passed and failed, presented after the test case execution. The number of times a test has been executed so far (4) is updated every time a test sequence has finished; the Runs column itself is not modifiable. If a test fails in a test sequence, that test case is removed from the sequence and only the remaining tests continue to execute. Hence the number of runs for the failing test case is not incremented from the point where it failed. Before the test sequence starts, the tester is requested to define the number of runs in an input window.

Every time a test is executed a database entry is automatically made. One entry is made each time a test case is executed, i.e. for every loop (see 7. Number of Execution Loops).

6. Test Parameters

Enter the test parameters (1). As described in the test process, it is up to the person writing the test to handle incoming parameters. It is also up to the test writer to describe the parameters in the documentation of the test (see 8. Documentation), i.e. a description of the parameters can be found by clicking the DOC button (2).

1. Enter the parameters
2. Press to see the documentation

Figure G.5 Test parameters.

Each parameter should be followed by one comma; this is naturally not needed after the last one. Press Run to proceed to the screen defining how many loops of the sequence should be executed (see 7. Number of Execution Loops), or press Cancel to go back to the start screen.

When the program ends, all parameters used in the current test execution are saved in a file and restored the next time this module is chosen. The chosen test cases are also saved in the file, to be used the next time the program starts.
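The comma-separated parameter format described above could be parsed along these lines (a hypothetical sketch; parse_params and its behaviour are assumptions, not the actual Module Tester parser):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper, not Module Tester source: split a comma-separated
 * parameter string such as "12,7,3" into integers. Returns the number of
 * parameters stored, at most max_out. */
static int parse_params(const char *input, long *out, int max_out)
{
    char buf[128];
    char *tok;
    int n = 0;

    /* Work on a local copy because strtok() modifies its argument. */
    strncpy(buf, input, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    for (tok = strtok(buf, ","); tok != NULL && n < max_out;
         tok = strtok(NULL, ","))
    {
        out[n++] = strtol(tok, NULL, 10);
    }
    return n;
}
```

A trailing comma after the last parameter would simply yield no extra token, which matches the manual's note that the final comma is unnecessary.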

7. Number of Execution Loops

Before the test cases are executed, a prompt asks how many times the sequence of tests should be executed (1). All test cases are executed in sequence before the next loop starts.

1. Appears after the Run button is pressed
2. Start executing the test sequence

Figure G.6 Number of execution loops.

If a test fails during the execution, it is interrupted and the next test in the sequence starts to execute. The number of runs for that test case is then not increased by the full number of loops given in the prompt. There is no option to dynamically change the parameters from one test execution in the loop to another, e.g. to increase a parameter by one for each test execution. That has to be done manually between execution sequences, or in the test code. Each test case execution generates a database entry.

Press Start test (2) to start the execution.
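The execution order described in sections 5 and 7 can be summarised in a short sketch (illustrative only, not Module Tester source): every enabled test runs once per loop, and a failing test is dropped from the sequence so its run counter stops increasing.

```c
#include <stdbool.h>

#define MAX_TESTS 8

typedef bool (*test_fn)(void);

/* Illustrative sketch of the loop behaviour described above. Runs each
 * enabled test once per loop; a test that fails is removed from the
 * sequence for the remaining loops. Returns the number of failed tests. */
static int run_sequence(test_fn tests[], int n_tests, int runs[], int loops)
{
    bool enabled[MAX_TESTS];
    int failures = 0;
    int i, loop;

    if (n_tests > MAX_TESTS)
        n_tests = MAX_TESTS;

    for (i = 0; i < n_tests; i++)
        enabled[i] = true;

    for (loop = 0; loop < loops; loop++)
    {
        for (i = 0; i < n_tests; i++)
        {
            if (!enabled[i])
                continue;           /* dropped after an earlier failure */
            runs[i]++;              /* one database entry per execution */
            if (!tests[i]())
            {
                enabled[i] = false; /* remove failing test from sequence */
                failures++;
            }
        }
    }
    return failures;
}

/* Two sample test cases for demonstration. */
static bool always_pass(void) { return true; }
static bool always_fail(void) { return false; }
```

With three loops, always_pass runs three times while always_fail runs only once, mirroring how the Runs column stops counting for a failed test case.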

8. Documentation

Every test case should be documented in a special documentation method in the code; the person writing the test code should do this. The documentation is easily presented to the person executing the test by a click on the DOC button placed next to the test case (1).

1. Press the button to see the documentation

Figure G.7 Documentation button screen.

There should be information about any parameters used in the test case (2) and about any manual evaluation (3). That kind of information is written as in the example image below. The text between the parentheses is presented when the notice for manual evaluation is shown (see 9. Manual Test Evaluation).

2. Information about the parameters
3. Information about manual test evaluation
4. Press to print the information

Figure G.8 Documentation.

Press the Print button (4) to send the information to a printer.

9. Manual Test Evaluation

Some test cases are not suitable for automatic evaluation. The purpose of a test could, for example, be to play a tone, listen to it and then decide if the test passed or not. That a test should be manually evaluated is defined in the documentation (1) of the test by the person writing it (see Test Process HOST or Test Process DSP). Before such a test starts, the tester is notified by a popup window (2). A short description of the test is also displayed when the notice appears (3).

1. The documentation declares a test to be manually evaluated

Figure G.9 Declaring a test to be manually evaluated.

2. Appears when a test with manual evaluation starts
3. Short test description
4. Press to start the execution of the test

Figure G.10 Notice that a test case with manual evaluation starts.

Press Start test (4) to start the execution of the test. When that specific test in the loop has finished, a prompt appears (5) that requests the tester to decide if the test passed or not (6).

5. Appears after the test has finished
6. The tester decides if the test passed or failed
7. If the test failed, the tester adds a description

Figure G.11 Request to manually evaluate a test case.

If the tester considers the test to have failed, it is possible to add an explanation of that decision (7).

10. Terminal Window

Every debug printout (see Test Process HOST or Test Process DSP) in the phone is sent to the PC and can be viewed in a special terminal window. To open the window, simply press the Terminal Window button on the main execution screen (1). The window cannot be opened during test execution.

1. Press to see the Terminal Window

Figure G.12 Terminal Window button screen.

The terminal window in Module Tester simulates a standard terminal window like mslog. This is useful when more information is needed than what is given in the main execution window, i.e. more than whether the test passed or not. The window can filter the printouts. If the filter is turned off, all received printouts are shown (2). If the filter is on, only the printouts recognised by Module Tester are shown, e.g. the start test printout and the doc printout. The filter is turned on by marking the Filter On/Off check box (3).

2. Printouts shown without using the filter

Figure G.13 Terminal window that is not using the filter function.

3. Mark the check box to filter the outputs
4. Write IDBG commands here

Figure G.14 Terminal window that is using the filter function.

It is not only possible to see the received printouts. It is also possible to send Interactive Debug commands to the phone. These are written on the WriteText line (4).

11. Printouts

To see the printouts that were made during the execution of a test case, simply press the PRINTOUTS button (1) after the test has finished.

1. Press the button to see the printouts

Figure G.15 Printouts button screen.

It is also possible to send these printouts to the printer. Press the Print button (2) to create a printout document that is sent to the printer, or press Close (3) to return to the test execution screen.

2. Send the information to the printer
3. Close the window and return to the execution screen

Figure G.16 Printouts that were made during execution.

12. Report

It is possible to generate a report containing information about the tests that were previously executed. Press the Generate Report button on the test execution screen to create a report that looks like the one below. All the information is generated automatically from the information in the configuration file, the recently executed test cases and the current date and time.

Figure G.17 Generated test report.


More information

Presentation: 1.1 Introduction to Software Testing

Presentation: 1.1 Introduction to Software Testing Software Testing M1: Introduction to Software Testing 1.1 What is Software Testing? 1.2 Need for Software Testing 1.3 Testing Fundamentals M2: Introduction to Testing Techniques 2.1 Static Testing 2.2

More information

Understanding Software Test Cases

Understanding Software Test Cases Understanding Software Test Cases Techniques for better software testing Josh Kounitz Elementool The content of this ebook is provided to you for free by Elementool. You may distribute this ebook to anyone

More information

Software Development Under Stringent Hardware Constraints: Do Agile Methods Have a Chance?

Software Development Under Stringent Hardware Constraints: Do Agile Methods Have a Chance? Software Development Under Stringent Hardware Constraints: Do Agile Methods Have a Chance? Jussi Ronkainen, Pekka Abrahamsson VTT Technical Research Centre of Finland P.O. Box 1100 FIN-90570 Oulu, Finland

More information

Sample Exam. 2011 Syllabus

Sample Exam. 2011 Syllabus ISTQ Foundation Level 2011 Syllabus Version 2.3 Qualifications oard Release ate: 13 June 2015 ertified Tester Foundation Level Qualifications oard opyright 2015 Qualifications oard (hereinafter called

More information

SECTION 2 PROGRAMMING & DEVELOPMENT

SECTION 2 PROGRAMMING & DEVELOPMENT Page 1 SECTION 2 PROGRAMMING & DEVELOPMENT DEVELOPMENT METHODOLOGY THE WATERFALL APPROACH The Waterfall model of software development is a top-down, sequential approach to the design, development, testing

More information

Title: Topic 3 Software process models (Topic03 Slide 1).

Title: Topic 3 Software process models (Topic03 Slide 1). Title: Topic 3 Software process models (Topic03 Slide 1). Topic 3: Lecture Notes (instructions for the lecturer) Author of the topic: Klaus Bothe (Berlin) English version: Katerina Zdravkova, Vangel Ajanovski

More information

The Role of Automation Systems in Management of Change

The Role of Automation Systems in Management of Change The Role of Automation Systems in Management of Change Similar to changing lanes in an automobile in a winter storm, with change enters risk. Everyone has most likely experienced that feeling of changing

More information

Essentials of the Quality Assurance Practice Principles of Testing Test Documentation Techniques. Target Audience: Prerequisites:

Essentials of the Quality Assurance Practice Principles of Testing Test Documentation Techniques. Target Audience: Prerequisites: Curriculum Certified Software Tester (CST) Common Body of Knowledge Control Procedures Problem Resolution Reports Requirements Test Builds Test Cases Test Execution Test Plans Test Planning Testing Concepts

More information

VAIL-Plant Asset Integrity Management System. Software Development Process

VAIL-Plant Asset Integrity Management System. Software Development Process VAIL-Plant Asset Integrity Management System Software Development Process Document Number: VAIL/SDP/2008/008 Engineering For a Safer World P u b l i c Approved by : Ijaz Ul Karim Rao Revision: 0 Page:2-of-15

More information

Chapter 8 Software Testing

Chapter 8 Software Testing Chapter 8 Software Testing Summary 1 Topics covered Development testing Test-driven development Release testing User testing 2 Program testing Testing is intended to show that a program does what it is

More information

2. Analysis, Design and Implementation

2. Analysis, Design and Implementation 2. Analysis, Design and Implementation Subject/Topic/Focus: Software Production Process Summary: Software Crisis Software as a Product: From Programs to Application Systems Products Software Development:

More information

(Refer Slide Time 00:56)

(Refer Slide Time 00:56) Software Engineering Prof.N. L. Sarda Computer Science & Engineering Indian Institute of Technology, Bombay Lecture-12 Data Modelling- ER diagrams, Mapping to relational model (Part -II) We will continue

More information

Adversary Modelling 1

Adversary Modelling 1 Adversary Modelling 1 Evaluating the Feasibility of a Symbolic Adversary Model on Smart Transport Ticketing Systems Authors Arthur Sheung Chi Chan, MSc (Royal Holloway, 2014) Keith Mayes, ISG, Royal Holloway

More information

Module 1. Introduction to Software Engineering. Version 2 CSE IIT, Kharagpur

Module 1. Introduction to Software Engineering. Version 2 CSE IIT, Kharagpur Module 1 Introduction to Software Engineering Lesson 2 Structured Programming Specific Instructional Objectives At the end of this lesson the student will be able to: Identify the important features of

More information

Fundamentals of Measurements

Fundamentals of Measurements Objective Software Project Measurements Slide 1 Fundamentals of Measurements Educational Objective: To review the fundamentals of software measurement, to illustrate that measurement plays a central role

More information

2. Analysis, Design and Implementation

2. Analysis, Design and Implementation 2. Subject/Topic/Focus: Software Production Process Summary: Software Crisis Software as a Product: From Individual Programs to Complete Application Systems Software Development: Goals, Tasks, Actors,

More information

Nova Software Quality Assurance Process

Nova Software Quality Assurance Process Nova Software Quality Assurance Process White Paper Atlantic International Building 15F No.2 Ke Yuan Yi Road, Shiqiaopu, Chongqing, P.R.C. 400039 Tel: 86-23- 68795169 Fax: 86-23- 68795169 Quality Assurance

More information

Managing Successful Software Development Projects Mike Thibado 12/28/05

Managing Successful Software Development Projects Mike Thibado 12/28/05 Managing Successful Software Development Projects Mike Thibado 12/28/05 Copyright 2006, Ambient Consulting Table of Contents EXECUTIVE OVERVIEW...3 STATEMENT OF WORK DOCUMENT...4 REQUIREMENTS CHANGE PROCEDURE...5

More information

Complete Web Application Security. Phase1-Building Web Application Security into Your Development Process

Complete Web Application Security. Phase1-Building Web Application Security into Your Development Process Complete Web Application Security Phase1-Building Web Application Security into Your Development Process Table of Contents Introduction 3 Thinking of security as a process 4 The Development Life Cycle

More information

A Framework for Software Product Line Engineering

A Framework for Software Product Line Engineering Günter Böckle Klaus Pohl Frank van der Linden 2 A Framework for Software Product Line Engineering In this chapter you will learn: o The principles of software product line subsumed by our software product

More information

THE THREE ASPECTS OF SOFTWARE QUALITY: FUNCTIONAL, STRUCTURAL, AND PROCESS

THE THREE ASPECTS OF SOFTWARE QUALITY: FUNCTIONAL, STRUCTURAL, AND PROCESS David Chappell THE THREE ASPECTS OF SOFTWARE QUALITY: FUNCTIONAL, STRUCTURAL, AND PROCESS Sponsored by Microsoft Corporation Our world runs on software. Every business depends on it, every mobile phone

More information

INDEPENDENT VERIFICATION AND VALIDATION OF EMBEDDED SOFTWARE

INDEPENDENT VERIFICATION AND VALIDATION OF EMBEDDED SOFTWARE PREFERRED RELIABILITY PRACTICES PRACTICE NO. PD-ED-1228 PAGE 1 OF 6 INDEPENDENT VERIFICATION AND VALIDATION OF EMBEDDED SOFTWARE Practice: To produce high quality, reliable software, use Independent Verification

More information

Example Software Development Process.

Example Software Development Process. Example Software Development Process. The example software development process is shown in Figure A. The boxes represent the software development process kernels. The Software Unit Testing, Software Component

More information

a new generation software test automation framework - CIVIM

a new generation software test automation framework - CIVIM a new generation software test automation framework - CIVIM Software Testing is the last phase in software development lifecycle which has high impact on the quality of the final product delivered to the

More information

Object Oriented Analysis and Design and Software Development Process Phases

Object Oriented Analysis and Design and Software Development Process Phases Object Oriented Analysis and Design and Software Development Process Phases 28 pages Why object oriented? Because of growing complexity! How do we deal with it? 1. Divide and conquer 2. Iterate and increment

More information

1. Software Engineering Overview

1. Software Engineering Overview 1. Overview 1. Overview...1 1.1 Total programme structure...1 1.2 Topics covered in module...2 1.3 Examples of SW eng. practice in some industrial sectors...4 1.3.1 European Space Agency (ESA), software

More information

Outline. 1 Denitions. 2 Principles. 4 Implementation and Evaluation. 5 Debugging. 6 References

Outline. 1 Denitions. 2 Principles. 4 Implementation and Evaluation. 5 Debugging. 6 References Outline Computer Science 331 Introduction to Testing of Programs Mike Jacobson Department of Computer Science University of Calgary Lecture #3-4 1 Denitions 2 3 4 Implementation and Evaluation 5 Debugging

More information

Software Engineering. How does software fail? Terminology CS / COE 1530

Software Engineering. How does software fail? Terminology CS / COE 1530 Software Engineering CS / COE 1530 Testing How does software fail? Wrong requirement: not what the customer wants Missing requirement Requirement impossible to implement Faulty design Faulty code Improperly

More information

Software Development: The Waterfall Model

Software Development: The Waterfall Model Steven Zeil June 7, 2013 Contents 1 Software Development Process Models 2 1.1 Components of the Waterfall Model................................. 2 1.1.1 What is a requirement?. 2 1.1.2 Testing..........

More information

Software Development Life Cycle (SDLC)

Software Development Life Cycle (SDLC) Software Development Life Cycle (SDLC) Supriyo Bhattacharjee MOF Capability Maturity Model (CMM) A bench-mark for measuring the maturity of an organization s software process CMM defines 5 levels of process

More information

Introduction to Automated Testing

Introduction to Automated Testing Introduction to Automated Testing What is Software testing? Examination of a software unit, several integrated software units or an entire software package by running it. execution based on test cases

More information

PESIT Bangalore South Campus. Department of MCA SOFTWARE ENGINEERING

PESIT Bangalore South Campus. Department of MCA SOFTWARE ENGINEERING PESIT Bangalore South Campus Department of MCA SOFTWARE ENGINEERING 1. GENERAL INFORMATION Academic Year: JULY-NOV 2015 Semester(s):III Title Code Duration (hrs) SOFTWARE ENGINEERING 13MCA33 Lectures 52Hrs

More information

Software Process Models. Xin Feng

Software Process Models. Xin Feng Software Process Models Xin Feng Questions to Answer in Software Engineering? Questions to answer in software engineering What is the problem to be solved? Definition What are the characteristics of the

More information

A Software Engineering Model for Mobile App Development

A Software Engineering Model for Mobile App Development APPENDIX C A Software Engineering Model for Mobile App Development As we mentioned early in the book (see Chapter 1), to successfully develop a mobile software solution you should follow an engineering

More information

Maturity, motivation and effective learning in projects - benefits from using industrial clients

Maturity, motivation and effective learning in projects - benefits from using industrial clients Maturity, motivation and effective learning in projects - benefits from using industrial clients C Johansson Ericsson Software Technology AB/University of Karlskrona/Ronneby P Molin University of Karlskrona/Ronneby,

More information

Software Engineering. What is a system?

Software Engineering. What is a system? What is a system? Software Engineering Software Processes A purposeful collection of inter-related components working together to achieve some common objective. A system may include software, mechanical,

More information

WHAT WE NEED TO START THE PERFORMANCE TESTING?

WHAT WE NEED TO START THE PERFORMANCE TESTING? ABSTRACT Crystal clear requirements before starting an activity are always helpful in achieving the desired goals. Achieving desired results are quite difficult when there is vague or incomplete information

More information

Unit 1 Learning Objectives

Unit 1 Learning Objectives Fundamentals: Software Engineering Dr. Rami Bahsoon School of Computer Science The University Of Birmingham r.bahsoon@cs.bham.ac.uk www.cs.bham.ac.uk/~rzb Office 112 Y9- Computer Science Unit 1. Introduction

More information

Defect Prevention: A Tester s Role in Process Improvement and reducing the Cost of Poor Quality. Mike Ennis, Senior Test Manager Accenture

Defect Prevention: A Tester s Role in Process Improvement and reducing the Cost of Poor Quality. Mike Ennis, Senior Test Manager Accenture Defect Prevention: A Tester s Role in Process Improvement and reducing the Cost of Poor Quality Mike Ennis, Senior Test Manager Accenture IISP, 1996-2008 www.spinstitute.org 1 Defect Prevention versus

More information

SPECIFICATION BY EXAMPLE. Gojko Adzic. How successful teams deliver the right software. MANNING Shelter Island

SPECIFICATION BY EXAMPLE. Gojko Adzic. How successful teams deliver the right software. MANNING Shelter Island SPECIFICATION BY EXAMPLE How successful teams deliver the right software Gojko Adzic MANNING Shelter Island Brief Contents 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 Preface xiii Acknowledgments xxii

More information

Software Error Analysis

Software Error Analysis U.S. DEPARTMENT OF COMMERCE Technology Admistration National Institute of Standards and Technology Computer Systems Laboratory Gaithersburg, MD 20899 Software Error Analysis NIST Special Publication 500-209

More information

ST3006 - Software Engineering

ST3006 - Software Engineering University of Dublin Trinity College ST3006 - Software Engineering Anthony Harrington Department of Computer Science Trinity College Dublin Anthony.Harrington@cs.tcd.ie Lifecycles A software project goes

More information

Why Aircraft Fly and Software Systems Don t

Why Aircraft Fly and Software Systems Don t Why Aircraft Fly and Software Systems Don t Robert Howe Copyright Verum Consultants BV 1 Contents Introduction Aeronautical Engineering Circuit Engineering Software Engineering Analytical Software Design

More information

Software Life Cycle. Main issues: Discussion of different life cycle models Maintenance or evolution

Software Life Cycle. Main issues: Discussion of different life cycle models Maintenance or evolution Software Life Cycle Main issues: Discussion of different life cycle models Maintenance or evolution Not this life cycle SE, Software Lifecycle, Hans van Vliet, 2008 2 Introduction software development

More information

Chap 1. Software Quality Management

Chap 1. Software Quality Management Chap 1. Software Quality Management Part 1.1 Quality Assurance and Standards Part 1.2 Software Review and Inspection Part 1.3 Software Measurement and Metrics 1 Part 1.1 Quality Assurance and Standards

More information

Agile Projects 7. Agile Project Management 21

Agile Projects 7. Agile Project Management 21 Contents Contents 1 2 3 Agile Projects 7 Introduction 8 About the Book 9 The Problems 10 The Agile Manifesto 12 Agile Approach 14 The Benefits 16 Project Components 18 Summary 20 Agile Project Management

More information

G53QAT COURSEWORK BY ADEOLU OPEOLUWA OPEODU AXO16U. Test Planning

G53QAT COURSEWORK BY ADEOLU OPEOLUWA OPEODU AXO16U. Test Planning Test Planning The topic that I will discuss in this essay is Test Planning. I will address the topic of test planning in relation to software test planning. Defining the scope During the course of this

More information

ABSTRACT. would end the use of the hefty 1.5-kg ticket racks carried by KSRTC conductors. It would also end the

ABSTRACT. would end the use of the hefty 1.5-kg ticket racks carried by KSRTC conductors. It would also end the E-Ticketing 1 ABSTRACT Electronic Ticket Machine Kerala State Road Transport Corporation is introducing ticket machines on buses. The ticket machines would end the use of the hefty 1.5-kg ticket racks

More information