Improving testability of object-oriented systems


Improving testability of object-oriented systems

Dissertation submitted in fulfilment of the requirements for the degree of Doktor der Naturwissenschaften (Dr. rer. nat.) at the Department of Computer Science of the FernUniversität in Hagen, by Dipl.-Ing. Stefan Jungmayr.

First referee: Univ.-Prof. Dr. H.-W. Six (FernUniversität in Hagen)
Second referee: Ao.Univ.-Prof. Dr. G. Futschek (TU Wien)
Date of the oral examination: 18 December 2003

Jungmayr, Stefan: Improving testability of object-oriented systems / Stefan Jungmayr. Printed as manuscript. Berlin: dissertation.de - Verlag im Internet GmbH, 2004. Also: Hagen, FernUniversität, Diss., 2003.

Bibliographic information from Die Deutsche Bibliothek: Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet.

Copyright dissertation.de - Verlag im Internet GmbH, 2004. All rights reserved, including the rights of partial reprinting, of partial or complete reproduction, of storage in data processing systems, on data media or on the Internet, and of translation. Only chlorine-free bleached paper (TCF) according to DIN-ISO 9706 is used. Printed in Germany.

dissertation.de - Verlag im Internet GmbH, Pestalozzistraße, Berlin

Unless the LORD builds the house, its builders labour in vain. [Psalm 127:1]

Zusammenfassung

Software kontrolliert in zunehmendem Maße sicherheits- und geschäftskritische Systeme. Zur Sicherstellung der Qualität und Fehlerfreiheit dieser Software werden weitgehend Tests eingesetzt. Der Testvorgang selbst ist dabei meist mit einem großen Ressourcenaufwand verbunden. Deshalb wurden bisher zahlreiche Ansätze zur Reduktion des Testaufwandes entwickelt, die auf eine Verbesserung der eingesetzten Testmethoden und Testwerkzeuge fokussieren. Nicht zuletzt hängt der Testaufwand bzw. Testerfolg aber auch wesentlich von den Eigenschaften des zu testenden Produktes selbst ab, welche unter dem Begriff Testbarkeit zusammengefasst werden.

Die vorliegende Arbeit beschreibt einen methodischen Ansatz zur Verbesserung der Testbarkeit von objektorientierter Software. Der Schwerpunkt liegt dabei auf der Kontrolle der Abhängigkeiten zwischen den verschiedenen Teilen des Softwaresystems, beginnend in der Anforderungsermittlung bis hin zur Implementierung und dem Test. Unterstützt wird der Ansatz durch Entwurfsrichtlinien, neuartige Produktmetriken sowie ein eigens entwickeltes Analysewerkzeug. Wesentliche Teile des Ansatzes werden anhand von Fallbeispielen demonstriert und validiert.

Summary

Safety- and business-critical systems are often controlled by software. To guarantee the quality and correctness of this software, testing is widely used. Unfortunately, testing consumes a significant amount of resources. A number of approaches to reduce the test effort have been designed so far; they mainly focus on the test methods and tools applied. However, a main influence on the test effort and test success comes from the characteristics of the software under test, also known as testability.

This thesis describes a systematic approach to improving the testability of object-oriented software. The main focus lies on controlling the dependencies between different parts of the system, from the requirements capture activity to the implementation and test activities. The approach is supported by design guidelines, new product metrics, and an analysis tool. Essential parts of the approach are demonstrated and validated based on a set of case studies.


Preface

This thesis covers some important aspects of testability but, of course, not the entire field of testability engineering. You can find additional information on the subject, as well as information on how to contact the author, at the following website [1]:

Any comments, questions, and suggestions regarding testability are welcome!

[1] The author is not able to guarantee that this website will be available for more than several years after the publication date of this work.


Acknowledgements

I thank God for all that he provided to me out of his grace and outside of my control, including the support of numerous other people and the ideas concerning technical solutions.

I am grateful to Univ.-Prof. Dr. Hans-Werner Six for the opportunity to work on this thesis, for his comments and advice on the content, and for the chance to learn from his writing experience. I am thankful to Ao.Univ.-Prof. Dr. Gerald Futschek for his encouragement and for his willingness to act as a referee. Appreciation goes to Prof. Dr. Mario Winter for his support and comments on working drafts, as well as to Edgar Merl and Alexander Müller, who contributed significantly to this work in the course of their master theses and in terms of reviews. Many thanks go to my dear colleagues Henrik Behrens, Andreas Homrighausen, and Thorsten Ritter, who have always been willing to discuss technical issues at their doorstep.

My special thanks go to my parents, who granted me the possibility to study and who encouraged me all the time, as well as to my loving wife Sandra, who enabled me to work on this thesis by managing much of our private life and obligations.

Thanks also to TogetherSoft and Borland, who granted a campus license of Together (a tool which has been used for this work) to the FernUniversität in Hagen.


Contents

Part I  Introduction and Related Work

1  Introduction
   Testing
   Testability
   Definition
   Importance of Testability
   Application Problem, Software Artifacts, and Testability
   Dependencies
   Open Problems
   Outline of Thesis

2  Dependencies and Test Tasks
   Test Levels, Test Activities, and Test Tasks
   Effects of Dependencies on Test Tasks
   Define Test Order
   Define Test Cases
   Implement Stubs, Mocks, and Drivers
   Build System
   Instantiate and Initialize Objects Involved in Tests
   Execute Tests
   Observe and Analyze Test Results
   Isolate Errors
   Identify Change Impact
   Maintain Test Cases
   Automate Test Cases
   Effects of Dependency Cycles
   Effects of Indirect Dependencies

3  Related Work
   Introduction
   Closely Related Work
   Metric Average Component Dependency
   Test-First Design
   Metric for Class Interactions
   Short Contributions
   Other Related Work
   Coupling Metrics
   ISO Standard
   Design Guidelines
   Law of Demeter

Part II  Approach

4  Overview of the Approach
   Systematic Control of Test-Critical Dependencies
   Life-Cycle Support
   Guidelines, Metrics, and Tool Support

5  Requirements Capture
   Introduction
   Overview of Requirements Capture Activity
   Important Concepts
   Requirements Capture and Testability
   Use Case Dependencies
   Use Case Dependencies and Testing
   Specify Use Case Criticality and Frequency
   Control Use Case Dependencies
   Control Mandatory Domain Class Associations
   Control References from Use Cases to Domain Classes
   Control Dependencies to External Systems

6  Analysis
   Introduction
   Overview of Analysis Activity
   Important Concepts
   Analysis and Testability
   Identify Classes Relevant to Testability
   Identify Test-Critical Classes
   Identify Hard-To-Test Classes
   Identify Test-Sensitive Classes
   Using Prototypes to Identify Classes Relevant to Testability
   Relationships Between Different Class Categories
   Identify Test-Critical Dependencies
   Specify Testability Requirements

7  Design
   Introduction
   Overview of Design Activity
   Important Concepts
   Design and Testability
   Design Dependencies
   Hard-Wired Dependency
   Semi-Hard-Wired Dependency
   Type Dependency
   Design Dependencies and Testing
   Effects of Hard-Wired Dependencies
   Effects of Semi-Hard-Wired Dependency
   Effects of Type Dependency
   Control Design Dependencies
   General Design Principles
   Limit Indirect Dependencies
   Test-Critical Dependencies Shouldn't be Hard-Wired
   Involve Hard-To-Test and Test-Sensitive Classes on Demand
   Ensure Substitutability
   Isolate Access to Technical Infrastructure
   Put Special Attention on Object Construction
   Make Entity Classes Self-Contained
   Techniques to Control Design Dependencies
   Early Creation of Test Artifacts
   Design Reviews
   Design Metrics

8  Implementation
   Introduction
   Overview of Implementation Activity
   Implementation and Testability
   Implementation Dependencies
   Control Dependency Structure
   Identify Static Dependencies
   Evaluate Dependency Structure
   Evaluate Dependency Cycles
   Identify Test-Critical Dependencies
   Identify Candidates for Refactoring
   Perform Refactorings
   Control Hard-Wired Dependencies
   Identify Hard-Wired Dependencies
   Evaluate Hard-Wired Dependencies
   Identify Candidates for Refactoring
   Perform Refactorings
   Control Semi-Hard-Wired Dependencies
   Identify Semi-Hard-Wired Dependencies
   Evaluate Semi-Hard-Wired Dependencies
   Perform Refactorings
   Application of Metrics to Design

9  Test
   Introduction
   Evaluate Testability
   Test Problem Reports
   Test Process Metrics

Part III  Metrics and Tool Support

10  Dependency Graph
   Introduction
   Overview of Dependency Graph Model
   Classes of Dependency Graph Model
   Class Class
   Class Member
   Class ClassDependency
   Class Generalization
   Class TypeDeclaration
   Class MemberAccess
   Class System
   Complete Dependency Graph Model
   Constructing a Dependency Graph

11  Metrics
   Introduction
   Basic Measurement Concepts
   Overview of Metrics
   Class Metrics
   Metric CD
   Metrics CDh and CDsh
   System Metrics
   Metric ACD
   Metrics ACDh and ACDsh
   Metric NCDC
   Metric NFD
   Dependency Metrics
   Metric DSTM
   Metric DSTMh
   Reduction Metrics
   Introduction
   Metric racd
   Metrics racdh and racdsh
   Metric rncdc
   Metric rnfd
   Metrics racdin and racdout

12  Tool Support
   Overview of Design2Test
   Functionality of Design2Test
   Architecture of Design2Test
   Process View
   Architectural View

Part IV  Case Studies

13  Outline of Case Studies
   Introduction
   Investigated Systems
   Investigated Issues

14  Results
   Distribution of Reduction Metric Values
   Correlation between Reduction Metrics
   Correlation between racd and Coupling Metrics
   Metric racd and Effect on System Structure
   Metric racd and Design Errors
   Metric racd and Effect on Testing
   Metric racd and Strength of Dependencies
   Amount and Cause of Hard-Wired Dependencies
   Semi-Hard-Wired Dependencies and Refactoring
   Feedback Dependencies
   Metric racdin and Effect on System Structure

Part V  Conclusion

15  Summary

16  Future Work
   Testability Engineering Process
   Improvement of Testability Metrics
   Outlook

Appendix

A  Glossary
B  Java - Syntactic Dependencies
   B.1  Generalization
   B.2  Type Declarations
   B.3  Member Access
   B.4  Dependencies Involving Initializers and Inner Classes
C  Coupling Metrics
D  Testability and Other Quality Characteristics
   D.1  Overview
   D.2  Testability and Maintainability
   D.3  Testability and Reliability
   D.4  Testability and Performance
   D.5  Testability and Reusability
   D.6  Testability and Traceability
   D.7  Testability and Usability
E  Index
F  Bibliography

Part I  Introduction and Related Work

The first part of this thesis introduces the topic addressed and motivates its importance for software development. Furthermore, this part describes related work and identifies open research issues arising from the shortcomings of currently existing approaches.


Chapter 1  Introduction

The purpose of this chapter is to give an overview of the topic of this thesis. It describes the problems addressed, why they are important and nontrivial to solve, and provides an overview of the content and structure of the document.

1.1 Testing

Dynamic testing
The term testing is widely used as a synonym for the term dynamic testing. Dynamic testing is a defect detection technique based on executing a software system with selected inputs and comparing the observed results with the expected results. (Other defect detection techniques like reviews, inspections, and automated static analysis are referred to as static testing techniques.) In the remainder of this document we use the term testing as an abbreviation for dynamic testing.

Motivation
Testing is important for error detection and continuous software evolution:

- The potential impact of software errors on business, human life, and the environment grows as software controls more and more critical functionality within technical products and business processes. Unfortunately, software development is an error-prone process. Testing is the most widely used technique to detect errors.
- If user requirements change frequently, it is important for the software developer that the software system can be adapted and extended easily. New functionality added to a system should not break existing functionality. Regression testing is one technique to assure that existing functionality remains intact after implementation changes. Without the ability to perform regression tests quickly and easily after implementation changes, the risk of undetected errors in the new software release increases. Testing is therefore an enabling factor for continuous and rapid software evolution.
- Additionally, test cases are utilized in a new manner in the context of Xtreme Programming and agile software development: they act as a requirements specification. This practice is called Test-First Design or Test-Driven Development and means that the required functionality is specified in terms of executable test cases before the implementation of the related functionality starts - no separate requirements specification document is created [Beck03].

Test Effort
While testing is important for software quality and evolution, it is a major cost driver as well: about 25% to 50% of an average development budget is spent on testing [Pol00] [Spil03]. The actual amount of time and money needed to achieve the test goals, as well as the ability to achieve the test goals at all, depends on several factors. These factors include, e.g., human skills and the degree of test automation, but also the characteristics of the software, the latter subsumed by the term testability.

1.2 Testability

Definition
Testability comprises the characteristics of a software artifact [1] which have an impact on the ease of achieving test goals. The term testability is defined in this work as follows:

The degree to which a software artifact facilitates testing in a given test context.

The following considerations are important to our definition:

- Not only source code or executables affect testing, but other software artifacts as well (e.g. requirements specification documents).
- Whether a software artifact facilitates testing or not can only be determined relative to the test context, i.e. with respect to a given set of test goals, resources, techniques, and tools.

[1] A software artifact is a document or product created during software development. Examples are requirements specification documents, design documents, source code, and executables.

Example: a software artifact may facilitate black-box testing by providing a dedicated test interface while not supporting white-box testing because of unstructured code. If achieving white-box test coverage is not a test goal, then this does not have a major negative impact on overall testability; otherwise it does.

Other Definitions of Testability

- "The relative ease and expense of revealing software faults." [Bind94]
- "(1) The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met, and (2) the degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met." [IEEE90]
- "A set of attributes that bear on the effort needed for validating the modified software." [ISO9126-2]
- "A software system is testable if 1) its components can be tested separately, 2) test cases can be identified in a systematic manner and repeated, and 3) the test results can be observed." [Kahl98]

Importance of Testability

Large systems
Several software development and testing experts have pointed out the importance of testability and design for testability, especially in the context of large systems:

- "During the design of new systems we do not only have to answer the question 'can we build it?' but also the question 'can we test it?'. Good testability of systems is becoming more and more important." [Pol00]
- "The absence of design for testability in large systems can greatly reduce testing effectiveness." [Bind99]
- "Design for testability, although rarely the first concern of smaller projects, is of paramount importance when successfully constructing large and very large C++ systems." [Lako96]

Factors contributing to the importance of testability
The importance of software testability for a particular software system increases with

- the size and complexity of the system,
- the risks for life and business if errors remain undetected,
- the frequency of the test activities, and
- the life-time of the system (assuming that maintenance and regression testing are permanent tasks).

Beneficiaries of testability
Testability is important for software testers and programmers because it helps them to keep the test effort under control. Additionally, it is relevant to customers as well: customers benefit from higher product quality and faster fixing of errors occurring at the customer site. Testability features like built-in tests, automatic failure reports, and built-in diagnostic capabilities provide better and faster information to the developers about the cause of failures, which accelerates problem fixing.

Testability and Other Defect Detection Techniques
Testability is by definition linked to testing. Of course, testing is not the only defect detection technique which benefits from a high level of testability: improving testability is beneficial for reviews, too. Characteristics of testable software like adequate complexity, low coupling, and good separation of concerns make it easier for reviewers to understand the software artifacts under review.

Application Problem, Software Artifacts, and Testability

The testability of a software system depends not only on the software artifacts specifying, designing, or implementing the structure and behavior of the system, but also on the original application problem to be solved and the software artifact(s) used as a test reference when defining the test cases (Figure 1).

Figure 1: Application problem, software artifacts, and testability

Related to these three subjects of testability considerations we distinguish three main characteristics of testability: problem testability, test reference testability, and test object testability.

Problem testability
Problem testability describes the effect of the application problem on the ease of testing the planned software system. The application problem is usually defined by the customer and is seldom under the control of the software developer.

Oracle problem
An application problem makes testing difficult if a good test oracle [1] does not exist [Weyu82], e.g. because 1) the program is written to calculate an output which is difficult or impractical to calculate manually, or 2) because the planned software system will have to produce so much output that it is impractical to verify it all.

[1] A test oracle is a source of an expected test result for a test case; it can be, e.g., a human or another software system.

Interaction with external systems
Part of the application problem is the required interaction with external systems as specified by the customer. These external systems and the required interaction may generate test problems as well.

Test reference testability
Test reference testability is the degree to which the test reference facilitates testing in a given test context.

Quantitative requirements
A test reference facilitates testing if it provides non-ambiguous, quantitative, complete, and consistent information which can be used to design test cases and to implement a (reliable) test oracle. A typical test reference is a requirements specification document. "The testability of requirement specifications is probably the most important [concern for requirements], and, at the same time, the most difficult one to achieve." [Roma85]

Test object testability
Test object testability is the degree to which a software artifact specifying, designing, or implementing the structure and behavior of the system facilitates testing in a given test context. A number of factors contribute to test object testability. The most important factors include complexity, separation of concerns, coupling (i.e. dependencies), controllability, and observability. It is usually the software developer who specifies and implements the required level of test object testability, which is therefore under his (and not the customer's) control. In the remainder of this document we use the term testability as an abbreviation for test object testability.

Our Focus
Testability is, like testing itself, a very broad topic. Therefore we limit our focus to selected aspects of testability: We deal to some extent with problem testability and describe, e.g., how to reduce the application problem. The main focus is on test object testability and in particular on the effect of dependencies on the testability of object-oriented systems. Test reference testability is not addressed in this thesis.

Note: The interested reader can find information on how to define testable requirements in [Rupp02] and information on how to define testable UML [1] models to be used as a test reference in [Bind99].

[1] UML is the abbreviation for Unified Modelling Language [UML01].

25 Chapter 1: Introduction Dependencies Origin of dependencies Definition Semantic dependency Syntactic dependency Large systems are usually built from smaller units which makes it easier to understand the system, to define development tasks and to assign them to different people. Naturally, breaking a system into units leads to dependencies between these units. The majority of the software units within a large system depends on other units to provide their functionality. Dependencies not only exist between software units but also between entities of other software artifacts like requirements specification and design documents. The dependencies between all kinds of software entities have an effect on testing. In the context of this work, we define the term dependency as follows: A dependency is a directed relationship between two entities where changes in one entity may cause changes in the other (depending) entity. Note: The term dependency is used in the UML with a slightly different meaning: a (UML) dependency is an element of a UML model which represents the relationship between two other elements of the same model. Our definition, however, applies to dependencies between entities of software artifacts, no matter whether they are represented within a model or not. Above definition describes a semantic dependency between two entities. A syntactic dependency between two entities means that the depending entity contains a syntactic reference to the other entity. Note: A semantic dependency may not lead to a syntactic dependency in all cases. For example, a class A may make assumptions about implicit conventions used in a class B concerning the meaning of particular attribute values but may not interact with class B directly. However, a syntactic dependency in general leads to a semantic dependency between the entities involved. (A pathological exception would be e.g. a class A declaring an attribute of type B but not using it).

Direct and indirect dependencies

Direct dependency
Dependencies define a relation between entities, i.e. between two entities there exists at most one direct dependency in each direction.

Indirect dependency
A dependency relation can be transitive (like compile dependencies [1]): if an entity A depends on entity B, which depends on entity C, then A indirectly depends on C.

[1] One class requires another class to compile correctly.

We further distinguish between class dependencies and model dependencies.

Class dependencies
In the context of dynamic testing and testability we are interested in source code dependencies, since it is source code which is executed during testing. Within object-oriented systems, source code dependencies manifest as class dependencies. Note: By classes we mean concrete classes, abstract classes, and interfaces.

Client and supplier class
In the context of a class dependency from class A to class B, we call A the client class and B the supplier class.

Model dependencies
The specification and design of a system is usually based on models. A common notation to describe such models is the UML [UML01], which provides, e.g., use case models and class diagrams. Dependencies between model entities are represented, e.g., as class associations or use case relationships. Such model dependencies are the origin of class dependencies at the code level later on, which makes them interesting to us as well. Note: Dependencies between model entities can be further traced back to dependencies between entities of the real world. Evaluating (and perhaps changing) dependencies between real world entities is out of scope for this work.

Our focus
In this work, we mainly focus on syntactic class dependencies and only to some extent on syntactic and semantic model dependencies.
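As a small illustration (hypothetical classes, not from the thesis), the chain below contains direct class dependencies from A to B and from B to C, and therefore an indirect dependency from A to C; A is the client of B, and B is A's supplier:

    class C {
        int value() { return 42; }
    }

    class B {
        private final C c = new C();    // B directly depends on C
        int doubled() { return 2 * c.value(); }
    }

    class A {
        private final B b = new B();                 // A directly depends on B ...
        int result() { return b.doubled() + 1; }     // ... and indirectly on C
    }

Changing the interface of C (for example, renaming value()) forces B and, transitively, A to be recompiled and possibly retested, even though A never references C in its source code.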

Effect of dependencies
According to Les Hatton [Hatt99], 1) coupling caused by dependencies and 2) complexity are those characteristics of today's software systems which contribute most to increasing test difficulties. Dependencies indeed have a negative impact on all test tasks, which will be discussed in more detail in Chapter 2.

1.4 Open Problems

A software developer who wants to evaluate and improve testability faces a number of problems because basic guidance is missing.

Lack of systematic approach
A general problem is the lack of a systematic approach to design and implement testability into software products. In our view the reasons for this situation are:

- Designers and developers do not face the test problems caused by their designs and implementations. Therefore they do not design and implement with testability in mind. Often the designers and developers introduce new dependencies into existing designs and implementations in the course of adding functionality, while it is not them but the test and maintenance engineers who suffer from the resulting negative impact on testing.
- The research on testability is still in its infancy. Researchers are often unaware of relevant related work. Testability engineering has not yet been established as a research field of its own (like usability engineering). Even a common terminology is still missing.

A systematic approach towards testability, however, is necessary to avoid a waste of resources and to enable continuous improvement.

Problems with controlling dependencies

Existing approaches to control dependencies are not sufficient
Approaches to control the effect of dependencies on testability do exist, but they are not sufficient:

- Principles of good design have been developed from an implementation point of view and do not account for requirements which have their origin in tasks specific to testing (e.g. the ability to test a class in isolation).
- Existing design metrics like coupling [Fent96] or abstractness and stability [Mart97] are metrics at the package or class level and do not help to identify individual dependencies which are critical for testing. Additionally, existing design metrics are general-purpose metrics: the relationship to testability is unclear and they are, e.g., insensitive to specific test problems caused by certain categories of dependencies (see Section 7.3).
- Test-First Design (see Section 3.2.2) helps to identify test problems early (by developing the test code before the functionality to be tested), but the approach is not systematic. Test problems are dealt with when they surface, leading to an iterative process and the need for frequent refactoring. While this may be a practical approach for smaller systems, the required rework effort for implementation and design artifacts is likely to be prohibitive for large and critical software systems.

Controlling dependencies is not trivial
Controlling software dependencies in order to improve testability is nontrivial because, e.g.:

- The number of dependencies within a system is usually very large.
- There are no general criteria available that help to evaluate the effect of individual dependencies on testing. The question is how to (automatically) identify dependencies with a potentially negative impact on testing.
- Because dependencies are transitive, a local dependency may have a global effect on the overall system structure and test effort. Such global effects are often unobvious from the local context.
- Criteria and guidelines are missing for evaluating dependencies during early development activities.

1.5 Outline of Thesis

This thesis describes a systematic approach to improve the testability of object-oriented software artifacts by controlling the effect of dependencies on testing.

Life-cycle support
Our approach covers the software development activities requirements capture, analysis, design [1], implementation, and test as they have been proposed by Jacobson [Jaco92] and further derived by Six et al. [Six03]. For each development activity and the related software artifacts (including use case models, class diagrams, and source code) we describe constructive and analytical measures to control the effect of dependencies.

[1] Architectural design is not considered because it has a global focus and does not describe individual dependencies.

Constructive and analytical measures
The constructive measures are supported by guidelines on how to specify testability requirements and how to deal with dependencies during design. The analytical measures include techniques to evaluate individual dependencies and to highlight potential points of improvement.

Focus
The focus of our research lies on object-oriented software in the domain of conventional business applications. The software artifacts we consider are modeled using UML and implemented using Java. Out of scope are special testability issues like those related to distributed and real-time systems or telecommunication software.

Document Structure
This thesis is organized as follows:

Part I: In the remainder of the first part we describe how dependencies affect common test tasks in general and discuss related work in the context of testability and dependencies.
Part II: In the second part of this thesis we give an overview of our approach and describe for each development activity the necessary steps to control dependencies as well as supporting guidelines, techniques, and metrics.
Part III: At the beginning of the third part we introduce a dependency graph which is used as a basis to define the metrics (introduced in Part II) formally. The definition of each metric is accompanied by a motivation and discussion. A tool which supports the computation of metric values is introduced at the end of the third part.
Part IV: The fourth part contains the details of three case studies concerning the usefulness and validity of the proposed metrics.
Part V: A summary and notes on future work conclude the thesis.
Appendix: The appendix contains the bibliography and provides an overview of important terms and abbreviations, the types of dependencies within Java source code, existing coupling metrics, and the relationship between testability and other software quality characteristics.

Chapter 2  Dependencies and Test Tasks

Direct and indirect dependencies are relevant for testability because they affect common test tasks. Understanding these effects is essential if we want to evaluate and improve testability. This chapter gives an overview of common test tasks and how they are affected by dependencies.

Note to readers: This chapter is mainly motivational; it does not introduce new concepts. You may skip this chapter if you are familiar with test tasks and the effect of dependencies on testing.

2.1 Test Levels, Test Activities, and Test Tasks

Test tasks have to be solved at different test levels and within different test activities. This section describes typical test levels and test activities.

Test levels
Common test levels are unit test, integration test, and system test. The test level refers to the test object, which can be an individual class, a cluster of classes, a subsystem, or a complete system.

- The scope of a unit test is only a single class, which makes fault isolation easy.
- The goal of an integration test is to test the interaction of two or more classes, which may involve polymorphism as well. Normally, integration testing starts when the classes involved have already been unit tested.
- During a system test, the entire system (or a major increment) is tested in order to make sure that it fulfills the intended functionality as specified by the requirements. Often this is done via the (graphical) user interface of the system.

Test activities
The overall test activity consists of a number of (sub-)activities which can be found at each test level. This set of (sub-)activities includes:

- Test planning, the identification and specification of the required tests and resources as well as the scheduling of test sub-activities and test tasks.
- Test design, i.e. the specification of the test cases.
- Test preparation, i.e. the preparation of everything that is needed to execute the test cases, including a test framework, test tools, and stubs.
- Test execution, where the actual testing takes place.
- Test result analysis, which subsumes the collection and evaluation of test results.
- Test follow-up, the isolation and removal of errors.
- Test maintenance, which means keeping the test cases up to date after changes to the requirements, design, or implementation.

Test tasks
Within each sub-activity a different set of concrete test tasks has to be solved. The test tasks include:

- Define the test order.
- Define the test cases to fulfill a test coverage criterion.
- Implement stubs, mocks, and drivers.
- Build the system (i.e. compile and link all required classes).
- Instantiate and initialize the objects involved in the test.
- Observe and analyze the test results.
- Locate errors.
- Identify the classes that have to be retested after a change to a particular class (change impact identification).
- Maintain test cases after program changes.
- Identify reusable test cases.
- Automate the test execution.

The classes under test are the main input to most of the test tasks. Dependencies between these classes therefore have an effect on the ease of performing the test tasks.
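The following is a minimal, hypothetical sketch of how several of these tasks appear in even a small unit test (JUnit 4 is assumed here; the thesis itself does not prescribe a test framework, and the class names Invoice and TaxService are made up): the test method acts as a driver, a stub substitutes a supplier class, objects are instantiated and initialized during setup, and the result is executed, observed, and analyzed.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical class under test (CUT) with one supplier class.
    class TaxService {
        double rateFor(String country) { return 0.19; }   // in reality, e.g., a database lookup
    }

    class Invoice {
        private final TaxService taxService;
        private final double netAmount;

        Invoice(TaxService taxService, double netAmount) {
            this.taxService = taxService;
            this.netAmount = netAmount;
        }

        double grossAmount(String country) {
            return netAmount * (1.0 + taxService.rateFor(country));
        }
    }

    public class InvoiceTest {
        @Test
        public void grossAmountAddsTax() {
            // test preparation: a stub replaces the real supplier class
            TaxService stub = new TaxService() {
                @Override
                double rateFor(String country) { return 0.10; }
            };
            // instantiate and initialize the objects involved in the test
            Invoice invoice = new Invoice(stub, 100.0);
            // execute the test and observe/analyze the result
            assertEquals(110.0, invoice.grossAmount("AT"), 0.001);
        }
    }

Because the dependency on TaxService is passed in through the constructor rather than being created inside Invoice, the supplier class can be substituted during testing - a first hint at the theme of controlling dependencies that runs through this thesis.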

2.2 Effects of Dependencies on Test Tasks

In the following we describe the test tasks in more detail and how dependencies affect the ease of performing them.

2.2.1 Define Test Order

A good test order helps to minimize the required number of stubs and drivers during unit and integration testing. The actual test order depends on the test strategy (e.g. bottom-up or top-down) and the class dependencies existing within the system.

Effect of dependencies
If the classes and dependencies form a tree structure, it is trivial to identify a meaningful test order [1], and the degree of freedom in choosing the test order is large. Adding additional dependencies reduces the freedom in choosing the test order. Additional dependencies which introduce dependency cycles make it more difficult to identify a meaningful test order [Kung94b] [Kung94c] [Over94] [Wint99] and may require the implementation of stubs (see Section 2.2.3).

[1] In this case, the test order starts with leaf nodes and continues with nodes upwards the tree where all supplier nodes have already been tested.

Note: Adding dependencies also has an impact on the overall test process because it reduces the opportunities for parallel testing.

2.2.2 Define Test Cases

Each test case has to describe the initial set of the classes involved, the initial state of the objects, the input parameters, global variables, and required method calls (or user interactions), as well as the expected test results like GUI output, return values, object states, and exceptions thrown.

For a system of reasonable size it is practically impossible to cover all possible input combinations with the help of test cases. Therefore, different test coverage criteria have been developed to help in finding a small but sufficiently large set of test cases with a high probability of uncovering most of the errors.

Effect of dependencies
Adding a new dependency between a client class (the class under test) and a supplier class may, for example, a) increase the number of the client class's methods and/or method parameters or b) make the control flow of the client class dependent on the state of the supplier class, which likely increases the required number of black-box test cases. If the new dependency manifests as a change within a method body of the client class, additional white-box test cases may be required as well, especially if the state of the supplier class is relevant for the control flow of the class under test.

CUT = class under test
Note: In the remainder, we will use the abbreviation CUT for the class under test.

2.2.3 Implement Stubs, Mocks, and Drivers

Stubs, mocks [Link01], and drivers are simple substitutes for classes (or interfaces): Stubs allow testing a class in isolation or substituting classes which have not been tested yet. Mock objects additionally improve the ability to observe (intermediate) test results [1]. Drivers are used to control the CUT directly. The need to implement stubs, mocks, and drivers depends, e.g., on the integration strategy chosen.

[1] Mock objects may report intermediate test results in terms of the (sequence of the) method calls to the mock object and the corresponding values of parameter objects.

Effect of dependencies
Introducing new dependencies increases the number of stubs required to test classes in isolation. Dependencies introducing dependency cycles require the implementation of stubs if the strategy is to break the cycles during testing.

2.2.4 Build System

Before the first test run, all executables of the classes involved in the test have to be built from the source code. If the CUT or its supplier classes have been changed before re-running a test, existing executables have to be rebuilt as well.

Effect of dependencies
Adding dependencies means that more classes have to be recompiled on average after program changes (see [Cold96]), which may increase the duration of the compile-test-debug cycle significantly.

2.2.5 Instantiate and Initialize Objects Involved in Tests

Objects of the classes involved in a test have to be instantiated and often initialized during test setup. Initialization may include the establishment of appropriate object links.

Effect of dependencies
An additional dependency to a supplier class often increases the number of objects that have to be instantiated and initialized. For example, if an instance of the supplier class has to be provided as a parameter object to the method under test, the test code has to create this supplier instance. If the CUT creates the instance of the supplier class itself, the test code is freed from this responsibility, but such direct creation of supplier instances can cause other test problems (see Section and Section 7.3.1).

2.2.6 Execute Tests

Test case execution means to call a method of a class or to invoke user operation(s) provided by a graphical user interface (GUI). This can be done manually or automated.

Effect of dependencies
Adding a dependency from a CUT to a supplier class which involves time-consuming operations like user input, access to a database, or access to the Internet increases the overall time needed for test execution.

2.2.7 Observe and Analyze Test Results

During and after test execution the intermediate and final results need to be observed. Test results include values of return parameters, the (hidden) state of objects, exceptions that have been thrown, and other kinds of system output.

The test results observed have to be compared with the expected test results. In case of failures it is necessary to describe them in as much detail as necessary to facilitate the isolation of errors.

Effect of dependencies
The more classes a CUT depends on, the more instances will be involved in test execution and the more intermediate and final object states have to be observed as part of the overall test result (in general), which increases the related effort.

2.2.8 Isolate Errors

In case of a wrong test result it is necessary to track the failure down to the faulty class(es) and code statement(s): analytically, based on supplemented information (like system traces), or empirically, by adding, removing, exchanging, or additionally testing objects of other classes involved.

Effect of dependencies
A higher number of dependencies makes fault isolation more difficult because an error may hide in a larger number of classes [Cold96]. Cyclic dependencies pose a special problem because they lead to re-entrance situations [Szyp97] which make it harder to understand the code and to isolate faults.

2.2.9 Identify Change Impact

During maintenance, test cases have to be rerun in order to make sure that no new errors have been introduced while fixing errors or adding new functionality. The number of test cases which have to be rerun during the regression tests can be reduced by identifying the classes which are actually affected by the implementation change.

Effect of dependencies
A higher number of dependencies causes more effort to determine the set of classes affected by the change.

2.2.10 Maintain Test Cases

Test cases related to a given class may need to be updated if this class or one of its supplier classes changes.

Effect of dependencies
Adding dependencies to a system reduces the independence of unit test cases and increases the number of test cases which have to be maintained after an implementation change.

2.2.11 Automate Test Cases

Test automation means that some or all steps necessary during test execution, test result observation, and analysis are automated. This requires that test results are presented in a way which facilitates comparison with the expected results without the need for human intervention.

Effect of dependencies
Adding a dependency may have a negative effect on testing if the supplier class causes problems for test automation.

2.3 Effects of Dependency Cycles

A dependency cycle has a severe effect on testing; the effect depends on the strategy chosen to deal with the cycle:

- If the cycle is not broken (using stubs), it is impossible to test the classes involved in isolation. Instead they have to be tested altogether at once.
- Additional stubs have to be implemented if the cycle shall be broken (a small sketch of breaking a cycle follows at the end of this chapter).
- A dependency cycle may cause a re-entrance [1] situation during program execution, which makes it more difficult to understand the program and to isolate faults [Szyp97].

It is therefore strongly recommended to avoid dependency cycles whenever possible [Lako96].

[1] Re-entrance means that a method of an object A is invoked by another object while another method of object A is still executing.

2.4 Effects of Indirect Dependencies

Indirect dependencies to supplier classes have several effects on testing:

- More classes have to be compiled before test execution, which increases the time to rebuild the system after changes. A similar effect occurs in component-based systems, where re-deployment of components after implementation changes is time-intensive.
- The indirect supplier classes have to be instantiated and initialized during test setup (if the direct supplier classes do not take responsibility for that).
- Fault isolation becomes more difficult. A fault may have to be tracked down a long sequence of method calls until it is attributable to a particular class.
- Indirect supplier classes may slow down the test progress, e.g. if they involve time-intensive computation or if they are not test-ready.
- The state of the indirect supplier classes may be relevant for test result analysis, which increases the related effort.
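As an illustration of the cycle-breaking strategy mentioned in Section 2.3 (a hypothetical Java sketch; the class and interface names are made up and are not taken from the case studies), a mutual dependency between two classes can be cut for testing purposes by introducing an interface that a stub implements:

    // Originally cyclic: Order depends on Customer and Customer depends on Order.
    // Letting Order depend on a small interface instead breaks the cycle for testing.
    interface CustomerLike {
        boolean hasCredit(double amount);
    }

    class Customer implements CustomerLike {
        private final java.util.List<Order> orders = new java.util.ArrayList<>();
        public boolean hasCredit(double amount) { return amount < 1000.0; }
        void add(Order order) { orders.add(order); }   // dependency back to Order remains here
    }

    class Order {
        private final CustomerLike customer;   // dependency now targets the interface
        private final double amount;

        Order(CustomerLike customer, double amount) {
            this.customer = customer;
            this.amount = amount;
        }

        boolean accepted() { return customer.hasCredit(amount); }
    }

    // Stub used to test Order in isolation, without Customer (and thus without the cycle).
    class CustomerStub implements CustomerLike {
        public boolean hasCredit(double amount) { return true; }
    }

Order can now be unit tested against CustomerStub, and a bottom-up test order exists again because the remaining concrete class dependencies are acyclic.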

Chapter 3  Related Work

In this chapter we describe related work in the context of testability, dependencies, and metrics.

3.1 Introduction

Testability engineering has not yet been established as a discipline within software engineering:

- "Certain aspects of quality have been the focus of very little research. For example, although it is recognized as an issue in object-oriented development, testability has rarely been addressed." [Bria99c]
- "Testability is a compelling concept but in its infancy." [Whit00]

One reason for this situation is that many researchers are not aware of work done by other researchers, leading to a lack of common terminology and slow research progress. Experience reports related to published approaches and techniques are almost completely missing. Quite a large number of software engineering books or articles mention testability or address the topic to some extent, but there is only a limited number of publications that really go into detail. Even fewer publications study the effect of dependencies on testing.

In the following sections we discuss 1) contributions concerning dependencies in the context of testability as well as 2) research focusing on dependencies without explicitly considering testability aspects.

3.2 Closely Related Work

Few authors have investigated dependencies in the context of testability so far.

3.2.1 Metric Average Component Dependency

Lakos
John Lakos [Lako96] proposes a metric called Average Component Dependency (ACD), which is basically the average number of classes each class directly and indirectly depends on. Metric ACD is used to evaluate testability because it allows one to predict the average effort required to recompile a system after a program change (before rerunning the test cases). Lakos also discusses the effect of cyclic dependencies on the dependency structure and how to avoid dependency cycles.

Related work
Work related to the approach by Lakos includes reports on the use of metric ACD [Maso99] as well as tools which collect this metric, like Ignominy [Tuur01] and JDepend [Clar01].

Discussion
Lakos uses metric ACD to evaluate the effect of an individual dependency on the dependency structure. This approach is purely manual and therefore practically limited to a small subset of the dependencies existing within a system. Additionally, Lakos does not take into account test tasks other than recompilation.

3.2.2 Test-First Design

Beck, Link, et al.
One of the main techniques used in Xtreme Programming is called Test-First Design (or Test-Driven Development), which has been described in more detail by Johannes Link [Link02] and Kent Beck [Beck03]. Test-First Design means that the test cases are developed before the actual method bodies are implemented. Early feedback to programmers about test problems caused by class dependencies helps to improve testability from the beginning [Beck01].

Discussion
Test-First Design is the first popular approach in the software engineering field which contributes to testability. Unfortunately, testability is still a by-product of this approach and not the result of dedicated efforts: test problems are dealt with when they surface during the implementation and test activity. Testability is not designed into the product systematically. No specific process steps are defined to achieve testability. Instead, Test-First Design is a trial-and-error approach to testability which heavily relies on refactoring. While this may be practical for smaller systems, the time and effort needed to rework implementation and design artifacts is likely prohibitive for large and critical software systems.

Additionally, there is no evidence yet that Test-First Design leads to better software quality than conventional development processes. Some first experiments have been carried out to investigate this question: Müller et al. [Muel02] report that Test-First Design produced lower quality. Other authors report that Test-First Design performed better than an ad-hoc approach to testing [Geor02] [Maxi03]. However, a comparison to systematic testing would have been more relevant.

3.2.3 Metric for Class Interactions

Baudry et al.
Benoit Baudry et al. focus on class interactions in the context of testability [Baud02]. A class interaction (according to Baudry) occurs when a client class accesses one of its supplier classes concurrently via two (or more) distinct chains of associations. Such class interactions increase the possibility of side effects (which are a source of errors) and should be tested during integration testing. Baudry et al. define a metric to evaluate the test complexity caused by class interactions and describe design guidelines that help to avoid concurrent class access.

Discussion
The problem addressed is relevant for integration testing and the proposed design guidelines are practical. Not yet sufficient are the motivation and validation of the proposed testability metric, which is based on an (overly) complex formula.

3.2.4 Short Contributions

Some issues related to dependencies in the context of testability have been mentioned in work which does not have testability as a main focus:

- Implicit dependencies increase the risk of missing important test cases [Fowl01] [Koen99].

- Dependencies on static attributes with complex initialization may cause problems for test automation in subsequent test runs [Free02].
- Separating object creation from object use (e.g. using factory methods or factory classes) facilitates test case reuse [Ruep97] [Dorm97]. Using a factory method, e.g., allows one to break an indirect dependency between the test class and a supplier class of the class under test by overriding the code responsible for object creation.
- Static dependencies on a singleton class [1] make it more difficult to test the client classes in isolation [Rain01].
- Hard-wired dependencies to system resources [Koen99] (like those introduced by using commands like println() in C++) make it more difficult to test the client classes in isolation.

[1] A client class needs to call a static method of the singleton class if it wants to get a reference to its only instance.

3.3 Other Related Work

Several contributions concentrate on dependencies but do not consider testability explicitly.

3.3.1 Coupling Metrics

Fenton and Pfleeger
One of the first coupling metrics for object-oriented software has been defined by Norman E. Fenton and Shari Lawrence Pfleeger, called coupling between object classes (CBO) [Fent96]. This metric is defined as the number of other classes to which a class is coupled. Other coupling metrics vary in the level of granularity and the coupling aspects they take into account. A framework for coupling measurement has been described in [Bria96].

Discussion
A coupling metric measures the degree to which a given class is coupled to other classes within the system. Coupling metrics can be used to evaluate the effect of the dependencies of a given class on its testability. However, the shortcomings of existing coupling metrics are:

1. Coupling metrics are metrics at the class level, not at the dependency level. Therefore they are not able to highlight individual dependencies which are critical. The developer still has to decide which dependency to remove in order to reduce coupling.

2. The majority of the coupling metrics neglect indirect dependencies and their effects on test tasks like test setup and fault isolation.

3. If coupling metrics take indirect dependencies into account, they are insensitive to dependencies which cause a dependency cycle (each class within a dependency cycle has just the same coupling value) and insensitive to classes which are crucial for the overall number of indirect dependencies (like hubs [2]). Therefore they do not help to identify the cause of high coupling values.

[2] Hubs are classes that, if removed from the system, break the dependency graph into independent dependency subgraphs.

Example 1
The dependency from class H to class B in Figure 2 has the biggest impact on overall indirect coupling: it causes an increase of 30% compared to the system without this dependency (Table 1). The classes involved in this dependency have neither the highest direct coupling values (metric CBO [3]) nor the highest indirect coupling values (metric CBOi [4]).

Figure 2: Dependency structure

[3] The number of other classes a class depends on. This definition is used by the IDE Together (from Borland Inc.) to calculate the metric value.
[4] The number of other classes a class depends on directly or indirectly.

class  CBO  CBOi
A      3    8
B      2    3
C      3    6
D      2    5
E      2    3
F      2    4
G      2    4
H      1    3
I      0    0

Table 1: Classes and values of metrics CBO and CBOi

4. Coupling metrics are general-purpose metrics. They are insensitive to characteristics of dependencies which make a difference for testing (for example, related to the ability to test a class in isolation). This discussion will be continued in Section

3.3.2 ISO Standard

ISO
The draft standard [ISO9126-2] defines a system-level testability metric similar to coupling metrics which does not consider the coupling between classes but the number of dependencies to other systems.

Discussion
The metric mentioned above does not help to identify critical individual dependencies (because it is a system-level metric).

3.3.3 Design Guidelines

Liskov and Martin
Barbara Liskov and Robert C. Martin described and proposed a number of design guidelines related to coupling:

- A client class should be able to collaborate with any subclass of its supplier classes ("Liskov Substitution Principle") [Lisk88].
- A high-level component should not depend on the implementation of low-level components ("Dependency Inversion Principle") but on the component interfaces [Mart96c].

- The abstraction of a package (i.e. the degree to which a package consists of interfaces and abstract classes) should be in proportion to its stability (i.e. the difficulty of changing it, based on incoming and outgoing dependencies) [Mart97]. This guideline is a refinement of the previous guideline.

Martin additionally defined two simple metrics to measure abstractness and stability:

- Abstractness: the ratio of the number of abstract classes (and interfaces) to the overall number of all classes within a package.
- Stability: the ratio of the number of incoming dependencies to the overall number of (incoming and outgoing) dependencies of a package.

Discussion
The metric abstractness takes the type of dependencies into account because it distinguishes between dependencies to concrete classes and dependencies to abstract classes or interfaces. The shortcomings of the metrics abstractness and stability are:

- They are defined on the package level. The developer still has to identify a meaningful set of classes and dependencies which should be refactored to obtain good metric values.
- The metric abstractness is based on a simple heuristic and does not take into account whether the classes of other packages actually use the interfaces defined within a given package or not. (It is easy to change the value of the metric abstractness by simply adding some interfaces to a package.)
- The relationship between the metrics abstractness and stability and the software characteristic testability has not been discussed by Martin.

3.3.4 Law of Demeter

Lieberherr et al.
Lieberherr et al. [Lieb88] defined a design guideline called the Law of Demeter to reduce direct coupling: A client class should not access the ("foreign") supplier classes of its own direct supplier classes. The number of violations of this guideline can be used as a design metric.

Discussion
This guideline helps to reduce direct dependencies. However, reducing direct dependencies alone is not sufficient if indirect dependencies still remain.
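A small, hypothetical Java sketch (the class names are made up and the example is not part of the cited work) of a Law of Demeter violation and a conforming alternative:

    class Engine {
        void start() { /* ... */ }
    }

    class Car {
        private final Engine engine = new Engine();
        Engine getEngine() { return engine; }
        void start() { engine.start(); }    // delegating method used below
    }

    class Driver {
        // Violation: Driver reaches through Car to the "foreign" supplier Engine,
        // creating a direct dependency from Driver to Engine.
        void driveViolating(Car car) {
            car.getEngine().start();
        }

        // Conforming: Driver only talks to its direct supplier Car.
        void drive(Car car) {
            car.start();
        }
    }

The conforming version removes the direct dependency from Driver to Engine; an indirect dependency via Car remains, which is exactly the limitation noted in the discussion above.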


Part II  Approach

This part describes our approach to improving the testability of object-oriented systems by controlling dependencies within specification models, design models, and source code.


Chapter 4
Overview of the Approach

Testability has to be considered as early as possible within the development process. If testability problems are detected too late, then it is often impractical to solve them because the required effort to change major parts of the design and implementation is too high. Bruce F. Webster therefore describes one important pitfall of software development as "thinking about testing after the fact" [Webs95].
Today's software development projects lack a systematic approach to design testability into the product 1:
"Software DFT [design for testability] is necessary but little is known at this time." [Voas97h]
"Testing should play a much bigger role in design." [Hatt99]
If a company instead follows a systematic approach to deploy testability within its products, it gains competitive advantages:
"In the face of intense competitive pressure, a comprehensive and rational strategy to achieve high testability will be a strategic advantage - not a bottleneck." [Bind95c]
This chapter provides an overview of our approach to improving the testability of object-oriented software and describes its main characteristics.

4.1 Systematic Control of Test-Critical Dependencies
Our approach is systematic in that it defines a sequence of steps which have to be performed in order to control the effect of software dependencies on testability. The steps cover the entire lifecycle and are both constructive and analytical.

1 The author only knows about design for testability efforts in the telecommunication software domain.

Controlling test-critical dependencies
The basic idea behind our approach is to identify and control those class dependencies which have a major impact on testing, called test-critical dependencies. A dependency is test-critical if it has a more negative impact on the ease of achieving test goals than the majority of all other dependencies.

4.2 Life-Cycle Support
Our approach supports testability deployment throughout the development activities requirements capture, analysis, design, implementation, and test. These activities, which have been defined in [Jaco92] and refined in [Six03], can be found in practically all software development processes. For each development activity our approach describes the steps necessary to control the effect of dependencies on testability:
requirements capture: reduce the problem complexity with special emphasis on use case dependencies and domain classes.
analysis: specify testability requirements and identify test-critical classes and dependencies.
design and implementation: identify, analyze, and refactor test-critical dependencies.
testing: evaluate the degree of testability finally achieved.
The main focus in this thesis lies on the testability issues during the design and implementation activities.
Note: Our approach does not include a comprehensive process definition in terms of roles, responsibilities, and process steps. However, some selected process issues which should be addressed in order to make testability efforts successful are mentioned in a later section.

4.3 Guidelines, Metrics, and Tool Support
The constructive and analytical steps of our approach are based on guidelines, metrics, sensitivity analysis, and tool support.

Guidelines
The constructive steps are based on design guidelines which help to avoid dependencies with a negative effect on testing.

Some of the guidelines refer to distinct categories of classes w.r.t. their role during testing.

Metrics and sensitivity analysis
The analytical steps of our approach are based on a set of new metrics which are linked to test tasks and help to evaluate individual class dependencies. A sensitivity analysis based on the metric set is used to evaluate the degree to which a dependency affects characteristics of the overall system and in this way to highlight potential areas of improvement.

Tool support
A metric tool implemented in the context of this thesis makes it possible to identify class dependencies within Java source code and to calculate our set of metrics.

Figure 3 shows the building blocks of our approach and the input relationships.

Figure 3 The building blocks of our approach (testability deployment, design guidelines, metric analysis, sensitivity analysis, metric definitions, metric tool, and development artifacts such as analysis documents, design, and code as input)


Chapter 5
Requirements Capture

During requirements capture the software developer has the unique chance to influence the user requirements towards a smaller complexity of the application problem and towards increased problem testability. This chapter describes what can be done during requirements capture 1) to reduce the complexity of the application problem with special emphasis on use cases, and 2) to control the influence of external systems on testability.

5.1 Introduction
In the following we give a general overview of the requirements capture activity, introduce important requirements capture concepts, and discuss briefly how testability relates to this activity in general.

5.1.1 Overview of Requirements Capture Activity
The requirements capture activity is devoted to the identification and specification of user requirements. Its result, the requirements specification, describes which functionality shall be implemented, for what reasons, and on which objects and classes the functionality operates. Implementation issues are not considered during requirements capture.
The requirements specification usually contains a use case model and a domain model (i.e. a class model). They are often supplemented by models of system behavior and non-functional requirements.
At the beginning of the requirements capture, the requirements engineer acquires an overview and basic understanding of the application domain. A discussion with system users about specific

work scenarios follows, which leads to the definition of system actors and use cases. While studying these scenarios and use cases the requirements engineer identifies objects and classes as a starting point for the domain (class) model.
An important characteristic of the domain class model is that it does not contain any operations. Since operations are derived from the use case model, they are a kind of implementation issue which should not be part of the requirements specification. The domain class model is therefore similar to an entity relationship model [Chen76], with operations added incrementally during later development activities.
Additional models like GUI mock-ups or state models for the behavior of complex domain classes may complete the requirements specification, which is validated at the end of the requirements capture activity.

5.1.2 Important Concepts
Important concepts of the requirements capture activity are domain class models and domain classes, textual use case descriptions, use case frequency and use case criticality.

Domain Class Model
A domain class model is an object-oriented class diagram which represents real-life entities of the problem domain and their relationships. An individual domain class represents a relevant object or fact of the problem domain. Domain classes contain attributes but no operations (which are added during subsequent development activities). The domain class model is developed and maintained by the requirements engineer in parallel to identifying the use cases and involved objects.

Use Cases
Use cases are a common notation to specify functional software requirements [Jaco92] [UML01]. Each individual use case describes one self-contained part of the system functionality which generates a result in the context of a business process for at least one system user. The interaction between the system and the user can involve other users or other external systems, too.

Both human users and external systems interacting with the software system are called actors. The relationships between individual use cases can be described graphically within so-called use case models.

Textual Use Case Descriptions
While the UML [UML01] provides notations to create use case models, it does not tell how to actually describe individual use cases. Textual use case descriptions are one way to do so [Armo00] [RUP99]. We use a schema proposed in [Six03] to describe the flow of control within a use case, which consists of one precondition, one main flow, one postcondition for the main flow, zero to many alternative flows, and zero to many exceptional flows and related postconditions. Figure 4 shows an empty textual use case description based on the schema.

use case xyz
  actors ...
  precondition ...
  main flow ...
  alternative flow AF1 ...
  alternative flow AFn ...
  postcondition ...
  exceptional flow EF1 ...
  postcondition ...
  exceptional flow EFm ...
  postcondition ...
  additional comments, e.g. on use case priority, risks, and frequency
end xyz

Figure 4 Schema for the textual specification of a use case
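As a purely illustrative instance of the schema, the use case Open Account of the banking example used later in this chapter could be described as follows; the concrete wording is invented and only shows how the individual sections are filled.

use case Open Account
  actors Client
  precondition The person who will own the account is registered in the system.
  main flow The client provides the account data and identifies the owning person by the person number; the system creates the account and links it to the owner.
  alternative flow AF1 The client identifies the owning person by name instead of by the person number.
  postcondition A new account exists and is linked to exactly one person as its owner.
  exceptional flow EF1 The specified person is not registered in the system.
  postcondition No account has been created.
  additional comments expected frequency: several times per day; criticality: medium
end Open Account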

A precondition of a use case specifies what has to be true in order that the use case can be executed and usually refers to domain objects. A postcondition defines possible end states of the use case.
To keep the description of the main flow simple, possible deviations from the normal flow of events are described in sections called alternative flow and exceptional flow. An alternative flow means that the use case can be finished successfully and the postcondition of the main flow applies. An exceptional flow does not guarantee the success of the use case, which motivates the specification of a separate postcondition.

Use Case Frequency and Criticality
Textual use case descriptions may contain additional information on the criticality of the use case and the expected use case frequency. The use case criticality depends on the risk to business, life etc. in case a failure occurs in the context of a given use case. The use case frequency specifies the expected number of executions of a use case during a specified period of time and system usage.

5.1.3 Requirements Capture and Testability
There are two main tasks w.r.t. testability during requirements capture: 1) to check whether the requirement specifications are testable and 2) to improve problem testability.
A requirement specification is testable if it is understandable, correct, non-ambiguous, non-compound, and expressed in a quantitative manner. The entire requirements specification is testable if it consists solely of testable requirements which are complete and consistent. Without testable requirement specifications it is difficult or impossible to define useful test cases. More guidelines on how to define testable requirements are described in [Bahi], [Drab99], [Robe96], and [Stac97].
A common technique to check non-formal requirements is reviews [Gilb93]. Dedicated checklists should support the reviewers in detecting untestable requirement specifications. Another approach to detect untestable requirements is to create test artifacts (like system or acceptance test plans) already during requirements capture [Glas92].
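For illustration, a requirement like "The system shall respond quickly" is not testable in this sense, because it is ambiguous and not quantitative, whereas "The system shall display the account balance within two seconds after a balance query has been submitted" can be checked by a test case. Both formulations are invented for this example and are not taken from the case studies.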

In the remainder of this chapter we describe how problem testability can be improved. In particular we introduce an approach to control use case dependencies and their effect on testing.

5.2 Use Case Dependencies
There are different kinds of dependencies between the use cases of a software system: include and extend relationships, generalization relationships, and logical dependencies. Include, extend, and generalization relationships between use cases are modelled explicitly when using the UML notation. Logical dependencies [Armo00] are only modelled implicitly within (textual) use case descriptions and are not part of the UML standard.

Logical dependency: A use case A logically depends 1 on a use case B if the precondition of A matches or at least contains a postcondition of B, or if a postcondition of B exists which leads to an alternative or exceptional flow within A. The logical dependency between use cases is transitive, i.e. there are direct and indirect logical dependencies.

One goal during the specification of use cases is to keep them as independent of each other as possible [RUP99]. However, for any practical software application it is not possible to avoid all dependencies, e.g. because different use cases operate on a shared set of domain classes.

1 This view is a simplification. For example, the precondition of a use case A may be equal to the postconditions of more than one use case, e.g. equal to the postconditions of use cases B and C. In this case, use case A does not depend on one particular use case and the dependency is somewhat weaker because use case C can be used alternatively to achieve the precondition of use case A.

5.3 Use Case Dependencies and Testing
Use cases are a well-suited and widely used basis for defining system or acceptance test cases from the customer perspective. A common approach to test an individual use case is e.g. to cover each of the associated scenarios, i.e. to cover each path within the control flow of the use case. In this case the test effort depends on the complexity of the control flow of the use case. In the remainder of this section we describe the effect of use case dependencies on testing.

General effects of use case dependencies
If a use case A depends on use case B, the following effects on testing can be observed:
Test setup for A is more difficult (compared to the case where A does not depend on B).
Testing B must have been successful before it is possible to test A, which has two consequences: There is a restriction on possible test orders, reducing the ability to test in parallel. If a test for B fails and its postcondition can't be achieved, then the test cases for A can't be executed.
A change in B may require a change in A and consequently a change in the test cases for A. This increases the test case maintenance effort.

Effects of include and extend relationships
Include and extend relationships between use cases lead to dependencies between the software units implementing the corresponding functionality. The business logic covered by the use cases will be represented by a specific type of classes 1 during analysis and refined and implemented within these classes during design and implementation. Hence, dependencies between use cases are very likely to be transformed into dependencies between the corresponding classes, thus increasing the test effort in general. Additionally, include and extend relationships often have to be mapped onto user interface navigation features which have to be tested during GUI testing.

1 Control classes represent the business logic of use cases (see Section 6.1.2).

Effects of undocumented use case dependencies
If use case dependencies are not documented, important use case interactions may be missed and therefore not tested.

5.4 Specify Use Case Criticality and Frequency
The effort spent on testability improvement should concentrate on those parts of an application where test problems and undetected faults would cause the most severe impact. Testability improvement is therefore another motivation to specify for each use case its criticality and expected frequency. This information will be used during subsequent development activities to focus testability evaluation and improvement efforts.

5.5 Control Use Case Dependencies
For most application systems, avoiding all dependencies between use cases is neither desirable (because they model semantic knowledge) nor feasible. Controlling dependencies is difficult because they are not under full control of the software developer. Largely they are specified by the customer alone or by a joint team of customers and requirements engineers. In this section we describe how to keep two origins of use case dependencies under control: mandatory domain class associations (see Section 5.5.1) and references from use case descriptions to domain classes (see Section 5.5.2).

5.5.1 Control Mandatory Domain Class Associations
Mandatory associations: An association between class A and class B is mandatory 1 for A if the lower bound of the multiplicity at the B end is greater than zero. In other words: Class A has a mandatory association to class B if each instance of A is always linked to some instance(s) of B.
Note: The term mandatory association is not part of the UML and only used as a shortcut.

1 The opposite of a mandatory association is an optional association.

Effects of mandatory associations on use case dependencies and testing
A mandatory association results either in an (explicit) include relationship or in an (implicit) logical dependency between use cases, both leading to test case dependencies.

Example 2
There are two domain classes, Account and Person (Figure 5). Class Account has a mandatory association with class Person (because each account must be owned by one person). Use case Create Account is responsible for creating instances of class Account and use case Enter Person for creating instances of class Person. Note: the use case diagram does not indicate a relationship between these use cases.

Figure 5 Domain model (class Account is associated with class Person; multiplicity * at the Account end, multiplicity 1 and role name owner at the Person end)

The software system implementing the use cases has to guarantee that the mandatory association is always fulfilled. It is a good strategy to assign this responsibility to the use case Create Account. The main flow of use case Create Account specifies that at least one link to an instance of class Person must be established. This has two consequences:
The use case Create Account depends on use case Enter Person, which has to be reflected by the precondition of use case Create Account or by an include relationship from use case Create Account to use case Enter Person.
Testing the use case Enter Person must have been successful before use case Create Account can be tested.

Effects of transitive mandatory associations
A transitive mandatory association between two domain classes may cause a transitive dependency between the corresponding use cases. Especially long chains of mandatory associations are therefore critical: if they result in long chains of transitive use case dependencies the test problems (described in Section 5.3) increase with the length of the chains. For example, during test

design and test setup the tester has to deal with a large number of domain classes which might have little to do with the actual test case.

Check mandatory associations
Check whether each mandatory association is really justified. Possible motivations for specifying a mandatory association from a class A to a class B (assuming that there is only one association between both classes for the sake of simplicity) are:
1 The instances of B represent key data which are required for identifying the instances of A (e.g. a flight ticket which needs to be identified by its associated flight and seat).
2 Each use case involving instances of class A can only be executed successfully if these instances are linked to instances of B.
3 Some (but not all) use cases involving instances of class A can only be executed successfully if these instances are linked to instances of B.
In cases 1) and 2) modelling a mandatory association is justified. In case 3) the mandatory association should be reconsidered: A better alternative is to define the existence of a link to an instance of class B as a precondition in those use cases which actually require it for successful execution and to make the association optional instead.

Example 3
The domain class model of Figure 5 is the basis for a banking application and the use case diagram of Figure 6. The use cases Open Account, Deposit Money, and Withdraw Money indeed require that one person is specified as the owner of the account. The use cases Monthly Statistics and Annual Statistics do not refer to individual persons, but provide statistics for all accounts administered by a bank. For testing the functionality of the two latter use cases it is therefore not necessary that a person is attached to each account. Based on this observation, the multiplicity at the Person end of the association should be changed from 1 to 0..1.
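How such a multiplicity decision can propagate into the implementation and into test setup may be sketched in Java (a simplified illustration with invented constructors, not the design used in the case studies): with the mandatory association the owner typically has to be supplied when an account is created, so every account needed during test setup first requires a person; with the optional association a stand-alone account can be created, e.g. for testing the statistics use cases.

// Mandatory association (multiplicity 1): every Account needs a Person up front.
class Person { /* ... */ }

class Account {
    private final Person owner;            // must be supplied at creation time
    Account(Person owner) {
        if (owner == null) throw new IllegalArgumentException("owner required");
        this.owner = owner;
    }
}

// Optional association (multiplicity 0..1): an Account can exist without an owner,
// which simplifies test setup for use cases that do not involve persons.
class AccountWithOptionalOwner {
    private Person owner;                   // may remain null
    void setOwner(Person owner) { this.owner = owner; }
}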

Figure 6 Use case diagram for banking application (use cases Enter Person, Open Account, Deposit Money, Withdraw Money, Monthly Statistics, and Annual Statistics; actors Client and Clerk)

5.5.2 Control References from Use Cases to Domain Classes
A (textual) description of a use case in general refers to one or more domain classes by mentioning their class name.

Effects of use case references on testing
A reference from a particular use case description to a domain class has different effects on testing:
Instances of the domain class have to be created before the test execution starts, e.g. by executing the use case(s) responsible for the instantiation (which leads to one or more additional use case dependencies).
If the domain class is mentioned in the postcondition(s) of the use case, its instances additionally have to be checked during test result analysis.
The class implementing the logic of the use case is harder to test because of its dependency on the class realizing the domain class 1.
If two use case descriptions reference the same domain class, this often leads to a use case dependency between both.
To study such effects more closely we use a CRUD matrix.

1 The class of the implementation realizing the domain class is called entity class (see Section 6.1.2).

CRUD Matrix
The columns and rows of a CRUD matrix [Armo00] represent use cases and domain classes. The entries in the matrix indicate the type of access of a use case to instances of a domain class. The template of a CRUD matrix, shown in Figure 7, uses the following abbreviations of matrix entries:
c - creates instance of a domain class
r - reads attribute of a domain class instance
u - updates (i.e. writes) attribute of a domain class instance
d - destroys instance of a domain class.

Figure 7 Schema of a CRUD matrix (domain classes A to D as rows, use cases 1 to 3 as columns, with entries such as c, u, r, ru, rd)

To create a CRUD matrix means to search textual use case descriptions for phrases indicating an access to an instance of a domain class and to insert an appropriate entry into the matrix for each access found. Note: An update or a delete access on a domain class usually requires a read access upfront.

CRUD matrix and use case relationships
During the construction of a CRUD matrix we consider include, extend, and inheritance relationships between use cases in the following way:
If use case A includes use case B then flatten A.
If use case A extends use case B then flatten B.
If use case A inherits from use case B then flatten A.
To flatten a use case A means that (the relevant parts of) the descriptions of use cases referenced within the description of A are copied into it.

Example 4
Figure 8 shows a CRUD matrix for some use cases of the banking application. While the use case diagram (Figure 6) does not display any dependencies between the use cases, the CRUD matrix indicates e.g. that the use case Get Balance at least depends on the use case Open Account, because it reads an instance of class Account which is created by the use case Open Account.

domain class | Enter Person | Open Account | Deposit Money | Withdraw Money | Get Balance | Close Account
Person       | c            | r            | r             | r              | r           | r
Account      |              | c            | ru            | ru             | r           | d
Figure 8 CRUD matrix for some use cases of banking application

Evaluate CRUD matrix
The CRUD matrix helps to predict possible use case dependencies. This prediction is only rough, because the matrix describes access to domain classes only at the class level (not on the attribute level) and does not take into account actual pre- and postconditions or other conditions describing logical dependencies. The motivation to evaluate all dependencies between use cases and domain classes roughly is to search for obvious potential improvement. (Creating an explicit model of logical dependencies between use cases would be more accurate but too expensive in most cases.)

Evaluate CRUD matrix columns
The number of entries within a column of a CRUD matrix (i.e. the number of domain classes accessed by a given use case) helps to predict the effort of test setup and test result analysis based on the number of domain classes involved and to predict the coupling of the corresponding control class to entity classes of the design and implementation. A small number of entries within a column is desirable from a testing point of view.

Evaluate CRUD matrix rows
The number of entries within a row of a CRUD matrix (i.e. the number of use cases which access a given domain class) helps to evaluate the amount of potential use case dependencies. A high potential of use case dependencies and interaction is also a

high potential for software errors, which leads to a higher test effort. A smaller number of entries within a row, especially a smaller number of update entries (with the largest potential for use case interactions), is therefore beneficial from a testing point of view.
Note: Reducing the number of entries within the rows and columns of the CRUD matrix not only has a positive effect on testing but on the usability of the system as well: the number of domain objects the user is confronted with in the context of a given use case is smaller and he has to take less care about use case dependencies. This is one aspect where testability and usability are in alignment.
Another type of dependencies specified during requirements capture concerns external systems.

5.6 Control Dependencies to External Systems
A user requirement may specify that the system to be built has to interact with one or more external systems supplied by the customer or third-party companies. Even if an external system need not be tested by the software developer, it may cause testing problems for the system to be built, e.g. because
it does not provide any logging functionality,
it is difficult to stub the external system,
the data exchange is difficult to capture and replay (with capture-and-replay tools 1 for the purpose of regression testing),
the data exchange format is difficult to analyze (e.g. EDIFACT 2 format), or
the availability of the external system for test purposes is unclear.
We recommend checking for each external system whether it supports testability sufficiently. If not, try to negotiate e.g. a different data exchange format or try to avoid the dependency altogether.

1 Capture-and-replay tools observe and record system interaction at the (graphical) user interface level.
2 EDIFACT (Electronic Data Interchange For Administration, Commerce and Transport) is a standard of the United Nations (UN) for the electronic exchange of business documents and messages.


Chapter 6
Analysis

The evaluation and improvement of testability is not for free. Related efforts should therefore concentrate on the most relevant system parts from a testing point of view. Our aim is to identify those system parts during the analysis activity. This chapter describes how to identify classes and dependencies which are critical for testing and require special attention and how to specify technical requirements concerning testability (called "testability requirements").

6.1 Introduction
In the beginning we give an overview of the analysis activity, introduce important analysis concepts, and discuss briefly how the analysis activity relates to testability in general.

6.1.1 Overview of Analysis Activity
The analysis activity bridges the gap between the requirements specification (which has been defined using the vocabulary of the customer) and the implementation [Jaco92]. During this activity, the requirements specification will be refined, restructured, more precisely specified and transformed into an analysis specification [Six03]:
Analysis class model: The functionality specified by the use cases is transformed into responsibilities of interface and control classes. These classes, together with classes from the domain class model as well as classes resulting from technical requirements (like authentication and access control), comprise the analysis class model.
Operations: The operations of the analysis class model are identified and specified based on the use case descriptions.

Behavioral diagrams: Complex behavior of use cases is described with the help of interaction and state diagrams.
Analysis packages: Analysis packages help in organizing the artifacts of the analysis model in manageable pieces [RUP99] and provide an architectural view on the analysis specification.
Technical requirements: Additional technical requirements, e.g. concerning the reusability and maintainability of the software or the use of particular class libraries, may supplement the analysis specification.
The use case model from the requirements specification and the analysis specification are the input for the subsequent design activity.

6.1.2 Important Concepts
A layered architecture is useful in many application domains. In this work we assume a layered architecture based on three categories of classes according to [Jaco92] [Six03]: interface classes, control classes, and entity classes.
Interface classes are responsible for constructing the user interface, translating user input into system actions, and encapsulating access to or from external systems.
Control classes are responsible for the business logic of the application.
Entity classes represent the domain classes and provide the persistency of data.
All classes of a particular category together represent one layer of the system architecture.

6.1.3 Analysis and Testability
During the analysis activity, our main concern with respect to testability is to identify those analysis classes and related dependencies which are relevant to testing as well as to specify the required level of testability. This information will be used during design and implementation to guide our efforts in evaluating and improving testability.
Note: The considerations in this chapter do not apply only to individual classes, but also to clusters of classes or subsystems. For the sake of simplicity, we do not distinguish between subsystems and classes as long as no confusion may occur.
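Returning to the class categories of Section 6.1.2, a minimal Java sketch using the banking example (the class names are invented for this illustration, not taken from the case studies) could look as follows; the point is only that each class belongs to exactly one layer and that the control class mediates between interface and entity classes.

// Entity class: represents a domain class and its persistent data.
class Account {
    private double balance;
    double getBalance() { return balance; }
    void deposit(double amount) { balance += amount; }
}

// Control class: realizes the business logic of a use case (e.g. Deposit Money).
class DepositMoneyControl {
    void deposit(Account account, double amount) { account.deposit(amount); }
}

// Interface class: translates user input into calls to the control class.
class DepositMoneyDialog {
    private final DepositMoneyControl control = new DepositMoneyControl();
    void onOkButton(Account account, String amountField) {
        control.deposit(account, Double.parseDouble(amountField));
    }
}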

6.2 Identify Classes Relevant to Testability
A prerequisite for the specification of testability requirements and the identification of test-critical dependencies is to know the classes which require special attention with respect to testability. Among the classes to be tested some may be critical for the overall test and development process and some may be hard to test. Other classes, which are not actually tested, may nevertheless be relevant to the testability of the system. We therefore want to identify classes within a system belonging to three different sets: test-critical classes, hard-to-test classes, and test-sensitive classes.

6.2.1 Identify Test-Critical Classes
Test-critical class: A class is test-critical if a test problem within this class may have a serious impact on the overall test process and development success.
To identify test-critical classes, search for classes implementing critical functionality, like
control classes or entity classes realizing the business logic of a critical use case,
classes implementing a critical non-functional requirement, or
any other class which has the potential to cause serious damage to life and business if it fails.
Additionally, search for classes which may have a potentially serious impact on the test and development process, e.g. because
they are implemented using previously unknown technology which may create serious test problems,
they are intended for heavy reuse and have to be tested frequently (and any delays in the testing process are multiplied by the number of test runs), or
their implementation and test lies on the time-critical path (e.g. because they are part of an early system increment 1 or because they are involved in the test of other critical functionality) and any problems would delay the overall test process.

1 Iterative development processes like RUP [Coll97] divide the work into smaller slices. Each slice is an iteration that results in an increment.

6.2.2 Identify Hard-To-Test Classes
Hard-to-test class: A class is hard to test if it lacks testability features necessary for testing or if its functionality is excessively complex.
To identify hard-to-test classes, search e.g. for classes where the following applies:
the input domain of complex operations is very large and a thorough test requires more tests than practically possible,
the observability is poor, i.e. the output range of a complex operation is small and does not make it possible to detect faults in every case,
it is impossible or practically infeasible to automate the test oracle,
it is difficult to simulate expected failure modes and exceptions the class has to deal with,
its source code, needed for white-box testing, instrumentation, or debugging purposes, is not available (e.g. because the classes have been bought from a third-party provider).
Check at least for each test-critical class whether it is hard to test and specify at least for each hard-to-test class the required level of testability.

6.2.3 Identify Test-Sensitive Classes
Test-sensitive class: A class is test-sensitive if it is not subject to direct testing 1 but if it is involved in the test of other classes and has a negative impact on testing.
To identify test-sensitive classes, search e.g. for classes which
require human input to provide their services (which slows down test progress),
encapsulate the access to a resource which slows down test progress (like a database or the Internet),
do not provide sufficient controllability (because it is difficult to set their state during test set-up).
A third-party framework may be test-sensitive if it forces the application code to follow framework-specific design rules which contradict testability, e.g.:
The framework makes it difficult to use stubs instead of supplier classes because it restricts parameter objects to a particular type which is not visible from a less-restrictive parameter type declaration.

1 Testing a class directly means to develop dedicated test cases for testing the class. Testing a class indirectly means that it is only tested in the course of testing one of its client classes.

Example: The parameter p of a method m(Object p) is declared to be of type Object, but within the method body of m the instanceof 1 operator is used to check whether the parameter object p is an instance of a class different from Object.
The patterns of interaction between the application code and the framework are complex, which makes fault isolation difficult.
Example: An application class has to be defined as a subclass of an abstract framework class and to implement abstract methods of this framework class. In this case, not only does the application code call the framework code but vice versa as well, which increases the interdependencies between application and framework during program execution.
The framework requires an application class to implement or call static methods, which reduces the ability to test the involved classes in isolation (the effect of access to static methods will be discussed in more detail in Section 7.3.1).
Example: A persistency framework requires for each entity class that 1) the default constructor is declared private and that 2) it defines a public static method create() which is responsible for invoking a persistency mechanism.
Dependencies to test-sensitive classes are relevant for system testability and need to be identified.

6.2.4 Using Prototypes to Identify Classes Relevant to Testability
If the complete software product or a major part of it is developed from scratch, we recommend implementing and testing an initial increment or prototype very early, during architectural design 2, in order to identify hard-to-test and test-sensitive classes. This first increment or prototype should include classes from all relevant architectural layers. During the test of the increment or prototype special attention should be put on the ease of test

1 The instanceof operator is a boolean operator in Java which evaluates to true if the left-hand operand is an instance of the right-hand operand, i.e. if an object belongs to a given class/interface or to a subtype of this class/interface.
2 Architectural design is a high-level design created during the analysis activity where fundamental design decisions concerning the architecture are specified.

setup (i.e. the creation of test data and objects) and on the possibility to automate test cases. Especially framework classes should be checked for testability. In some cases it may be necessary to redesign the architecture.

6.2.5 Relationships Between Different Class Categories
Figure 9 shows an example of the relationship between the sets of test-critical classes, hard-to-test classes, and test-sensitive classes in the context of a system built from application classes and third-party classes.

Figure 9 Example of relationship between different class categories (application classes and third-party classes, with the sets of tested, test-critical, hard-to-test, and test-sensitive classes)

6.3 Identify Test-Critical Dependencies
The identification of test-critical, hard-to-test, and test-sensitive classes is now used to identify dependencies which are critical for testing and which need special treatment.
Test-critical dependency: A dependency is called test-critical if its existence or type 1 (potentially) has a large effect on the overall test process (compared with other dependencies within the system).
To identify test-critical dependencies search e.g. for
dependencies between test-critical client classes and hard-to-test or test-sensitive supplier classes or
dependencies introducing dependency cycles between analysis classes or analysis packages.

1 Dependency types will be described in Chapter 7.
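Dependency cycles of the kind mentioned in the last item can be detected mechanically once the dependencies are available in machine-readable form. The following sketch is a minimal illustration (it is not the metric tool of this thesis): it assumes the dependencies are given as an adjacency map from class names to the names of their direct supplier classes and reports whether the dependency graph contains a cycle, using a depth-first search.

import java.util.*;

class CycleCheck {
    // deps maps each class name to the names of its direct supplier classes.
    static boolean hasCycle(Map<String, List<String>> deps) {
        Set<String> done = new HashSet<>();
        Set<String> onStack = new HashSet<>();
        for (String c : deps.keySet()) {
            if (visit(c, deps, done, onStack)) return true;
        }
        return false;
    }

    private static boolean visit(String c, Map<String, List<String>> deps,
                                 Set<String> done, Set<String> onStack) {
        if (onStack.contains(c)) return true;   // back edge: cycle found
        if (done.contains(c)) return false;     // already explored, no cycle via c
        onStack.add(c);
        for (String supplier : deps.getOrDefault(c, List.of())) {
            if (visit(supplier, deps, done, onStack)) return true;
        }
        onStack.remove(c);
        done.add(c);
        return false;
    }
}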

Test-critical dependencies should be removed during analysis whenever possible; otherwise they require special treatment in later development stages.

6.4 Specify Testability Requirements
The purpose of testability requirements is to tell a software engineer during subsequent development activities which classes have to be designed and implemented with testability in mind and which level of testability must be achieved thereby. Testability requirements as a concept can be found e.g. in the context of the OPEN process [Fire01]. This section adds some new guidelines on how to define them.

Testability Requirements by Customers and Developers
Testability requirements improve the efficiency of testability improvement efforts during subsequent development activities by focusing them on the most relevant parts of the system. Testability requirements can belong to the customer requirements and/or the technical requirements:
Testability requirements belong to the customer requirements if they are specified by the customer. For example, a purchaser of telecommunication equipment may have specific demands concerning the ability to diagnose system failures. In this case, the testability requirements are specified during the requirements capture activity.
If testability requirements are specified by the software developer, then they belong to the technical requirements, which are specified during the analysis activity.

Scope and Scale
Scope: Testability requirements should be specified at least for hard-to-test classes and test-sensitive classes, either on a global level, package level, or for each relevant class on an individual level.
Scale: We propose to use a simple scale to specify the required level of testability, for example a scale with the following values: do not care, low, medium, and high.
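For illustration, a single (invented) entry in such a specification for the banking example might read: "The entity class Account must be testable in isolation from the persistency mechanism; required level of testability: high." The wording and the referenced class are hypothetical and only show how scope and scale can be combined in one requirement.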

Testability Requirements and Necessary Trade-offs
During the specification of testability requirements it is necessary to consider the costs of testability and possible conflicts with other software quality characteristics like performance requirements. For example, while the effect of testability on performance is usually limited, performance tuning has a more severe impact on testability (see Appendix D). If testability is in conflict with performance, it is a good strategy to give testability requirements the higher priority during earlier stages of design and implementation and to improve performance afterwards.
Clearly, it is only meaningful to specify testability requirements if the savings earned at least outweigh their costs. While the costs for achieving testability only have to be invested once, the savings are multiplied by the (usually large) number of unit tests, maintenance tests, and debugging activities.

Chapter 7
Design

The design model contains more classes and dependencies than the analysis class model as the result of accounting for non-functional requirements and managing complexity. Additionally, the direction of dependencies and the mechanisms used to realize the dependencies are specified during design. This chapter introduces different categories of design dependencies with different effects on testing and provides a number of design guidelines and hints on techniques which help to control dependencies during the design activity.

7.1 Introduction
As in the previous chapters, we start with an overview of the design activity, describe important design concepts, and discuss briefly how testability relates to the design activity.

7.1.1 Overview of Design Activity
The analysis specification and those parts of the requirements specification which are not covered by it are the starting point for the design activity. All functional and non-functional requirements are transformed during the design into classes with well-defined responsibilities. Breaking the system into classes makes it possible to manage complexity. The functionality and interaction of the classes is specified at a level of granularity which allows them to be implemented directly using a programming language. Care has to be taken that the technical and quality requirements specified on an architectural level are correctly refined during design. The result of the design activity is the design specification which describes the responsibilities, relationships, and interactions of the classes constituting the system.

The main component of the design specification is the design class model which contains more details than the analysis class model. Interaction diagrams and state diagrams supplement the design class model to facilitate its direct implementation. The implementation itself is not part of the design activity. [Six03]

7.1.2 Important Concepts
The basic building blocks of an object-oriented design are classes. Interfaces are treated within this document as a special type of classes without any method implementations and instance attributes. For the sake of simplicity, we often use the term class to denote both classes and interfaces. If a class needs to be distinguished from an interface, we call it a concrete class.
A class can be declared final in Java, which makes it impossible to derive a subclass from it. Java also allows a method to be declared final, which makes it impossible to redefine it within a subclass. If an attribute is declared final in Java, its value cannot be changed after initialization.
A class can be used to define the type of attributes, parameters, or return values. Between classes and types there is an m-to-n relationship: A given type can be realized by more than one class and a given class can realize more than one type. For example, a class can implement more than one interface, and a subclass implements the type of its superclass(es).
We assume a layered system architecture including interface classes, control classes, and entity classes.

7.1.3 Design and Testability
Considering testability during the design activity is very important in order to achieve a testable system implementation. Neglecting testability during design usually cannot be compensated for during subsequent development activities. There are three main tasks w.r.t. testability during design:

1 To transform the testability requirements defined in previous development activities into testable classes and testability features.
2 To define the responsibilities of the classes in a way that facilitates testing.
3 To define the interactions of the classes in a way that facilitates testing.
In the remainder of this chapter we mainly focus on the third task: first we describe different categories of class dependencies and how they differ in their effect on testing, then we propose new guidelines on how to avoid untestable class dependencies.

7.2 Design Dependencies
Design dependency: The term design dependency is a shortcut for a dependency between two classes belonging to an object-oriented software design.

Static versus dynamic dependencies
The dependencies modelled within a UML class diagram are usually syntactic and static. Additionally, dynamic dependencies may be caused by polymorphism.

Example 5
Class A has a static dependency on class B. At run-time, an instance of A may use an instance of class B1 which is a subclass of B. This dynamic dependency can be modeled in the UML as a use dependency, using the predefined UML stereotype «use» as shown in Figure 10. (The link between an instance of A and B1 is established by another class, class C.)

Figure 10 Example of a dynamic dependency (class A depends on class B; the «use» dependency points from A to the subclass B1; class C establishes the link)
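Example 5 can be rendered in Java roughly as follows (a minimal sketch using the class names of the example): A is compiled against B only, but at run time it may receive, and thereby use, an instance of the subclass B1 that class C passes to it.

class B  { void service() { /* ... */ } }

class B1 extends B { @Override void service() { /* ... */ } }

class A {
    private B supplier;                  // static dependency on B only
    void setSupplier(B supplier) { this.supplier = supplier; }
    void work() { supplier.service(); }  // may dynamically use a B1 instance
}

class C {
    void wire(A a) { a.setSupplier(new B1()); }  // establishes the link to B1
}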

Focus
The focus within this thesis is on static dependencies because they are a main concern during design and more fundamental than dynamic dependencies.

Static dependencies
A design defines a (static) dependency between a client class A and a supplier class B if
1 A inherits from B,
2 A has an association which can be navigated 1 to B, or
3 a type declaration within A contains a reference to B.
Note that the dependencies are modelled explicitly in cases 1 and 2, but only implicitly in case 3.

Categories of static dependencies
We distinguish between three different categories of static dependencies: hard-wired dependencies, semi-hard-wired dependencies and type dependencies. These categories are motivated (in the context of this work) by their different effect on the ability to test a class in isolation (which will be discussed in Section 7.3) and will be used to define design guidelines and metrics in the remainder of this document.

7.2.1 Hard-Wired Dependency
Motivation
During unit testing, a tester often wants to redefine implementation details of a supplier class in order to gain more control over the client class (the CUT), the supplier class, and other, indirect supplier classes during test setup and execution. For testing purposes a tester may want, for example,
to predefine the return value delivered by a method of the supplier class to enforce a specific execution path of a client method,
to define the value of a non-private attribute of the supplier instance to enforce a specific state (of the supplier instance) as required for test setup, or
to break dependencies of the supplier class to other indirect supplier classes of the CUT in order to simplify testing.
Changing or substituting the supplier class for the sake of testing is not always possible without changing the source code of the

1 The navigability of class associations should be specified during the design activity. If the navigability is not specified, however, we assume that it is supported in both directions of the association.

CUT. Changing the CUT for testing purposes, however, should be avoided because it breaks the consistency between the CUT and its final, operational version and therefore reduces the significance of the test results. Another reason why creating different versions of classes for testing purposes is undesirable is its negative (i.e. increasing) effect on the administration effort and the danger that a wrong class version is integrated into the final product.
Hard-wired dependencies are the reason why details of the supplier classes can't be redefined without changing the CUT and therefore cause the greatest restrictions on testing a CUT in isolation.

Definition
Hard-wired dependency: A dependency from a client class to a supplier class is hard-wired
1) if it is impossible to redefine 1 any implementation detail 2 of the supplier class and
2) if it is impossible to define 3 the value of any non-private attribute of the supplier class that the client class references,
without changing the implementation of the client class (or any of its superclasses) or the supplier class.

Identifying hard-wired dependencies
To identify hard-wired dependencies, search for dependencies where at least one of the following four facts holds:
1 The supplier class is a (direct or indirect) superclass of the client class. In this case, it is impossible to substitute the superclass without changing the source code of the client class 4.
2 The client class creates an instance of the supplier class directly.

1 To redefine an implementation detail means to override a method used by the client class or to substitute the supplier class.
2 An implementation detail is a method, a static initializer or instance initializer (see Appendix B.4.1 and Appendix B.4.2).
3 Same as to set an arbitrary value.
4 Alternatively, the source code of one of the intermediate superclasses could be changed as well to break the dependency to the indirect superclass.

In Java, a client class creates an instance of the supplier class directly by calling one of its constructors like new SupplierClass(). In this case, it is impossible to substitute the supplier class without changing the source code of the client class.
A client class may also create an instance of a supplier class by calling a pseudo-constructor, i.e. a static operation which returns a new instance of the supplier class. Examples of pseudo-constructors are the method getInstance() of the Singleton pattern [Gamm94] and the method erzeuge() of the persistency framework described in [Six03]. As a heuristic to identify pseudo-constructors of a class A we search for static methods with a return value type equal to A or equal to an interface implemented by A.
Note: A common practice is not to model the specific way of object creation during the design activity. This practice ignores that object creation contributes significantly to the number of hard-wired dependencies within a system.
3 All methods of the supplier class accessed by instances of the client class can't be overridden and all attributes of the supplier class accessed by the client class or its instances may never have their values set.
In the context of Java this means that all methods of the supplier class called by the client class (or its instances) are static or final and all attributes accessed by the client class are final.
Notes: Whether a method which is not called by the client class is static or final is not relevant. Whether an attribute which is not accessed by the client class is final is not relevant either.
Final methods and attributes are not part of the UML. However, if final declarations are used during implementation, then they should be treated as a design issue and modelled in the class diagram, e.g. with the help of UML stereotypes.
During design, method calls are seldom modeled in detail, e.g. using interaction diagrams. Still it is possible to identify hard-wired dependencies in some cases, if e.g. the supplier class contains only static methods or if

the client class is intended to access only a specific subset of methods (of the supplier class) which are all static.
4 It is impossible to create a subclass of the supplier class.
In Java, a class can't be subclassed if it is declared to be final. Note: Final classes are not part of the UML. However, if final classes are used during implementation, then this should be treated as a design issue. If a client class depends on a supplier class which can't be subclassed, there is no way to use a subclass of the supplier class to substitute it.

7.2.2 Semi-Hard-Wired Dependency
A semi-hard-wired dependency is a weaker form of a hard-wired dependency and causes fewer restrictions on testing a class in isolation, i.e. the tester is able to (re)define some (but not all) implementation details and attribute values of the supplier class for the sake of testing.

Definition
Semi-hard-wired dependency: A dependency from a client class to a supplier class is semi-hard-wired if
1) some (but not all) implementation details of the supplier class can't be redefined or
2) some (but not all) values of non-private attributes of the supplier class that the client class references can't be defined,
without changing the implementation of the client class (or any of its superclasses) or the supplier class.

Identifying semi-hard-wired dependencies
To identify semi-hard-wired dependencies, search for dependencies where both of the following facts hold:
1 The dependency is not hard-wired.
2 At least one method of the supplier class accessed by the client class (or its instances) can be overridden or at least one attribute of the supplier class accessed by the client class (or its instances) may have its value set.
In Java, this means that at least one method of the supplier class called by the client class (or its instances) is

not declared static or final or at least one attribute accessed by the client class is not declared final.

7.2.3 Type Dependency
A type dependency does not cause any restrictions on testing a class in isolation.

Definition
Type dependency: A dependency from a client class to a supplier class is called a type dependency if it is possible
1) to redefine all implementation details of the supplier class and
2) to define all values of non-private attributes of the supplier class that the client class references,
without changing the implementation of the client class (or any of its superclasses) or the supplier class. In other words: the client class does not depend on the implementation of the supplier class but only on the type specified by it.

Identifying type dependencies
To identify type dependencies, search for dependencies where both of the following facts hold:
1 The dependency is not hard-wired.
2 All methods of the supplier class accessed by the client class (or its instances) can be overridden and all attributes of the supplier class accessed by the client class (or its instances) may have their values set.
In Java, this means that none of the methods of the supplier class called by the client class (or its instances) are static or final and that none of the attributes accessed by the client class are final.
While a hard-wired dependency is often introduced by accident, a type dependency is usually the result of active system design. For example, the system design should make it impossible to unintentionally create a hard-wired dependency from a class outside a package to a class within the package (see Section 7.5).
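The three categories can be summarized in a small Java sketch (invented class names; a simplified illustration of the identification rules above, not code from the case studies): the first client instantiates its supplier directly, the second receives it from outside but also calls a static method, and the third depends only on an interface type.

class Supplier {
    static void log(String msg) { /* ... */ }  // static method: cannot be overridden
    void compute() { /* ... */ }               // instance method: can be overridden
}

interface SupplierType { void compute(); }

class HardWiredClient {
    private final Supplier supplier = new Supplier();  // direct instantiation
    void run() { supplier.compute(); }                 // supplier cannot be substituted
}

class SemiHardWiredClient {
    void run(Supplier supplier) {
        Supplier.log("run");       // static call: not redefinable by a stub
        supplier.compute();        // overridable part of the dependency
    }
}

class TypeDependentClient {
    void run(SupplierType supplier) {
        supplier.compute();        // any class implementing the type will do
    }
}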

7.3 Design Dependencies and Testing
Dependencies between a CUT and its supplier classes have many effects on the time and effort needed for testing as well as on the complexity of the test tasks to solve (see Chapter 2). This section discusses how hard-wired dependencies, semi-hard-wired dependencies, and type dependencies differ in their effect on testing.

7.3.1 Effects of Hard-Wired Dependencies
Hard-wired dependencies have a significant effect on unit testing.

Unit testing and dependencies
Unit testing means to focus on a small part of a software system. During unit testing it is much easier to isolate faults compared to system testing (which is one of the main motivations to perform unit testing). Dependencies of the CUT on supplier classes make testing more difficult:
A supplier class may not be ready for testing or difficult to control.
Test set-up involves more classes and is therefore more difficult.
The time needed for testing increases, e.g. in case of dependencies to classes involved in user interaction, access to databases, or Internet protocols.
Fault isolation is more difficult.
The advantages of a test in isolation are lost to some extent.

Use of stubs and mock objects
A common solution to these problems is to substitute the supplier classes by stubs or mock objects [Bind99]. Mock objects have the additional advantage that they help to detect failures earlier [Link01]. In order to be able to substitute the supplier class by a stub or mock object 1) the stub or mock object has to implement the type of the supplier class and 2) the CUT has to use a reference to the stub or mock object instead of a reference to an instance of the supplier class.

Hard-wired dependencies and unit testing
A hard-wired dependency from the CUT to a supplier class makes it impossible to test the CUT independently from implementation details of the supplier class. This has the following effects:
It may be impossible to use a stub or mock instead of the supplier class or

84 68 Chapter 7: Design (if it is possible to use a stub) the stub can t redefine any implementation details of the supplier class that are used by the client class. Example 6 The CUT creates an instance of a supplier class itself. The tester is therefore not able to use a stub instead of the created supplier class instance without changing the CUT s implementation. Example 7 The instance of the supplier class is delivered to the instance of the CUT as a parameter object. Therefore it is possible to substitute the supplier instance by a stub object of the same type. Unfortunately, the CUT calls static operations of the supplier class which can t be redefined by the stub class. The more dependencies from a CUT to its supplier classes are hard-wired, the weaker is our ability to test the class in isolation or to substitute dependencies to test-critical classes. Effects of inheritance Inheritance, as one source of hard-wired dependencies, is a special case with respect to testability: inheritance may have a positive effect on testing, if the client class (the CUT) is able to inherit test cases from the superclass. On the other side, there are negative effects of inheritance on testing as well: It is impossible to stub the superclass (without major changes to the implementation of the client class). It may be difficult to understand the interaction between the CUT and its superclass(es) as well as to isolate faults because of complex interactions between the superclass and its subclasses. Complex interactions result e.g. from method calls going up and down an inheritance hierarchy several times (Yoyo-Effect) [Taen89].
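Examples 6 and 7 can be sketched in Java as follows (the class names are invented for illustration). The first variant combines both problems: the CUT constructs its supplier itself and calls a static method, so no stub can be substituted without changing the CUT. The second variant shows the corresponding type dependency, where a stub implementing the supplier type can be injected:

// Hypothetical sketch (names invented). Hard-wired variant: the CUT creates the
// supplier itself and additionally calls a static method, so a test cannot
// substitute the supplier by a stub.
class MailerHardWired {
    void notifyCustomer(String text) {
        SmtpGateway gateway = new SmtpGateway();   // construction inside the CUT
        SmtpGateway.configureDefaults();           // static call, cannot be redefined by a stub
        gateway.send(text);
    }
}

class SmtpGateway {
    static void configureDefaults() { /* ... */ }
    void send(String text) { /* network access */ }
}

// Refactored variant with a type dependency: the supplier is passed in and used
// only through instance methods of an interface, so a stub can be injected.
interface Gateway { void send(String text); }

class Mailer {
    private final Gateway gateway;
    Mailer(Gateway gateway) { this.gateway = gateway; }
    void notifyCustomer(String text) { gateway.send(text); }
}

class GatewayStub implements Gateway {             // used during unit testing
    final java.util.List<String> sent = new java.util.ArrayList<>();
    public void send(String text) { sent.add(text); }
}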

85 Chapter 7: Design Effects of Semi-Hard-Wired Dependency A semi-hard-wired dependency from the CUT to a supplier class reduces the ability of a stub to redefine implementation details of the supplier class. In this way, the effect of a semi-hard-wired dependency is similar to the effect of a hard-wired dependency on testing, but to a smaller extent Effects of Type Dependency A type dependency causes no restrictions on testing the client class in isolation. The type of the supplier class can be specified either by 1 an interface, 2 an abstract class, or 3 a concrete class. In the first case, a stub or mock object (intended to substitute the supplier class) may belong to any class implementing the type specified by the interface which means highest flexibility. In the second and third case, a stub or mock class (intended to substitute the supplier class) must be a subclass of the (concrete or abstract) supplier class. Therefore, the stub or mock class can t inherit at the same time from e.g. a test framework class if this was intended (and if multiple inheritance is not available within the programming language used like Java). Interfaces and changeability Note: The frequent use of interfaces to define the types of supplier classes reduces design changeability: when an interfaces changes, all classes implementing the interface have to be changed as well. Alternatively, abstract classes can be used to specify types. If an abstract class is changed, e.g. by introducing a new method, the abstract class may implement a default behavior of this new method, making it unnecessary to change all classes inheriting from the abstract class. 7.4 Control Design Dependencies It is neither possible nor meaningful, to avoid all dependencies within a system. Therefore it is necessary to control the number and type of the dependencies. In the following subsections we describe design guidelines which help to avoid dependencies with a large negative impact on testing.

86 70 Chapter 7: Design General Design Principles A number of general design principles address dependencies: Reduce coupling and increase cohesion. Avoid extensive use of inheritance. Avoid cyclic dependencies. Use a layered design. Following these guidelines makes it easier e.g. to define work packages which can be assigned to different people and to understand, reuse, and test the classes involved. However, these design guidelines are mainly motivated by implementation and maintenance issues but not by testing issues. They do not take into account specific requirements coming from testing tasks and are not sufficient to guarantee testability. For example: An indirect dependency is much more relevant to testing than to implementation tasks because its effect on test setup, test design, and fault isolation (see Section 2.4) is not reduced by encapsulation and information hiding. Conventional coupling, however, is only concerned about direct dependencies. The effect of a hard-to-test or test-sensitive supplier class on the implementation of a client class is limited: the main task of the software engineer (who is coding the functionality of the client class) does not involve frequent setup, execution, or test of the supplier classes which means that negative effects of the supplier classes do not become obvious enough. The ability to substitute a supplier class is an important requirement during unit testing but seldom during implementation. In the following sections we present new design guidelines that focus on dependencies and testability issues Limit Indirect Dependencies From an implementation point of view, a client class should not know the classes the supplier class uses to deliver its functionality 1. If it does, however, the functionality is not impaired (i.e. there is no immediate negative effect on the implementation) while the effect on testing may be severe (see Section 2.4) but not obvious at the time of implementation. 1 Principles like encapsulation and information hiding support this.

87 Chapter 7: Design 71 Guideline: Limit the average number of indirect dependencies of all classes to be tested. Realization: The number of indirect dependencies can be reduced in the following ways: Re-assign functionality to classes in a way that reduces the number of indirect dependencies. Use interfaces to break chains of static syntactic dependencies (this will be discussed in more detail in Section 8.3.6), which is especially recommendable for services visible at package boundaries. Use reflection mechanisms. (However, this has the disadvantage of reducing the ability to perform meaningful static analysis of the code.) Note: For techniques to break compile-time dependencies in the context of C++ see [Lako96] Test-Critical Dependencies Shouldn t be Hard-Wired Within a test-critical dependency it should be possible to substitute the supplier class. If the test-critical dependency is hardwired, however, this substitution is impossible. Guideline: Test-critical dependencies shouldn t be hard-wired. Realization: In order to avoid hard-wired dependencies between test-critical client classes and hard-to-test or test-sensitive supplier classes, a hard-to-test or test-sensitive class should have the following characteristics: Its type is specified by an interface or abstract class. It does not contain static methods. A separate factory class is used to create its instances. It does not implement the singleton pattern. Instead, the access mechanism to the single instance is separated from the class providing the functionality. Note: This can be achieved e.g. by using a system level (or package-level) object 1 which provides access to the single instance. Its visibility scope is as small as possible (and as large as necessary) to avoid unintended dependencies.
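A minimal Java sketch of these characteristics could look as follows (all names are invented): the test-sensitive supplier is typed by an interface, and the access to its single instance is kept outside the class providing the functionality, so a test can install a stub before exercising the client classes:

// Hypothetical sketch of a hard-to-test supplier that follows the guideline.
interface CustomerStore {                          // type specified by an interface
    Customer load(int id);
}

class DbCustomerStore implements CustomerStore {   // test-sensitive: needs a database
    public Customer load(int id) { /* database access omitted */ return new Customer(); }
}

// The access to the single instance is separated from the class providing the
// functionality; a test can install a stub before exercising the client classes.
class StoreRegistry {
    private static CustomerStore instance = new DbCustomerStore();
    static CustomerStore get() { return instance; }
    static void set(CustomerStore store) { instance = store; }   // used by test setup
}

class Customer { }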

88 72 Chapter 7: Design Involve Hard-To-Test and Test-Sensitive Classes on Demand If a client object (an instance of a CUT) shall use a service of a hard-to-test or test-sensitive supplier class it is necessary to create an instance of this supplier class first and then to establish a link to this instance. There are different strategies to do so: 1 Create the supplier instance at system startup and establish a direct link to it directly after the construction of the client object. 2 Create the supplier instance at the time when it is actually needed by the client. Following the first strategy delays the overall test progress needlessly because all test runs involve an instance of the hard-totest or test-sensitive supplier class, even if this is not actually required by each test case. Following the second strategy instead helps to reduce the number of test cases involving an instance of the hard-to-test or testsensitive class. The second strategy also facilitates test setup, if not all operations of the CUT require access to the operations of the hard-to-test or test-sensitive class. Guideline: Create instances of hard-to-test or test-sensitive classes on demand. Realization: Establish a link from the client class (the CUT) to a factory object [Gamm94], or a proxy [Gamm94] which allows the client class to get access to an instance of the hard-to-test or test-sensitive class on demand. (Additional advantages of using a factory class will be discussed in Section ) Ensure Substitutability Make sure, that it is possible to substitute the supplier classes of the CUT during testing. Using a stub instead of a supplier class is impossible e.g. if the CUT (or another supplier class of the CUT) makes hidden assumptions about the implementation or type of the supplier class. 1 The general approach of a system level functionality to get access to instances is used in component based approaches where it is called a directory service.
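The second strategy might be sketched like this (names invented): the client holds a link to a factory only and asks for the hard-to-test supplier at the moment a method actually needs it; test cases that never reach this method therefore do not involve the supplier at all:

// Hypothetical sketch: the hard-to-test supplier is created on demand only.
interface ReportPrinter { void print(String report); }

interface PrinterFactory { ReportPrinter create(); }

class Billing {
    private final PrinterFactory printerFactory;   // only the factory is linked up front

    Billing(PrinterFactory printerFactory) { this.printerFactory = printerFactory; }

    int grossAmount(int net, int taxPercent) {     // testable without any printer instance
        return net + net * taxPercent / 100;
    }

    void printInvoice(String invoice) {            // the supplier is instantiated only here
        printerFactory.create().print(invoice);
    }
}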

89 Chapter 7: Design 73 Example 8 The CUT A knows instances of type IB and creates instances of class C to process them. Class C assumes, that the supplier class is actually of type B and tries to cast instances of type IB to type B. When trying to use a stub instead of class B, the CUT breaks, which is not obvious from the source code of the CUT itself. A «interface» IB C B Figure 11 Example for problems with substitutability Guideline: Enable the substitution of supplier classes by stubs or mocks 1. Realization: Avoid cast operations and operators like instanceof if they introduce restrictions on the allowed types of parameter objects which are not obvious from the method signatures. Document assumptions about supplier classes, which are not obvious from the source code alone Isolate Access to Technical Infrastructure Access to the technical infrastructure should be transparent in application layers in order to make application development and maintenance easier. However, a full transparency is not provided by frameworks or APIs managing the access in all cases. 1 This guideline is weaker than the Liskov Substitution Principle because it does not require that the semantics of the substituting class are fully equivalent to the substituted class [Lisk88].

90 74 Chapter 7: Design Example 9 The persistency framework described in [Six03] releases the developer of the business logic from the burden to deal with persistency and database issues to a large extent. However, starting and committing transactions remain in the responsibility of the application developer. If it is impossible to achieve full transparency, however, it should be possible to ignore the access to technical infrastructure within test cases concerning the business logic. Guideline: Isolate direct access to technical infrastructure within a class in separate methods. Realization: Encapsulate the access of a client class to technical infrastructure within dedicated methods of this class. For testing purposes it is then possible to derive a subclass of the CUT and to overwrite all methods that access the infrastructure Put Special Attention on Object Construction Object construction (based on constructor methods) leads to hard-wired dependencies within a system and should be treated carefully. Guideline: Separate object construction from object use, especially in the case of entity classes, hard-to-test classes, test-sensitive classes (see Section 7.4.3), and classes encapsulating important system resources. Realization: One solution is to use a factory class [Gamm94] which is dedicated to create class instances. Using it has additional advantages in the context of testing: The application is able to keep track of all instantiated objects. This enables to monitor resource consumption and aids debugging during testing and maintenance based on logging features. It facilitates to instantiate and use stubs during testing. Note: Put special attention on the mechanism to access a factory class in order to avoid global dependencies.
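A dedicated factory class of this kind might be sketched as follows (names invented): it separates construction from use, keeps track of all instantiated objects to support monitoring and debugging during testing, and offers a seam where stubs can be created instead of real instances:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a dedicated factory for an entity class.
interface Account { void deposit(int amount); }

class AccountImpl implements Account {
    public void deposit(int amount) { /* ... */ }
}

class AccountFactory {
    private final List<Account> created = new ArrayList<>();        // all instantiated objects

    Account createAccount() {
        Account account = newInstance();
        created.add(account);                                       // track resource consumption
        return account;
    }

    protected Account newInstance() { return new AccountImpl(); }   // overridden in tests to produce stubs

    int numberOfCreatedAccounts() { return created.size(); }        // useful while debugging tests
}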

91 Chapter 7: Design Make Entity Classes Self-Contained Testing an entity class is more difficult, if the class requires other entity classes to provide its service. Example 10 Within a business information system, the entity classes and their associations are made persistent. Special purpose classes called association classes are used to represent n-to-m associations. If an instance of an entity class needs to access instances of another entity class linked to it (as part of a n-to-m association), an instance of the association class has to query the database and retrieve the information about existing persistent links. The test of an entity class involved in an n-to-m association therefore involves the database. Testing the entity class in isolation requires to stub the association class or database which causes additional effort. Guideline: Within entity classes, minimize the number of methods relying on methods of other entity classes. Realization: Put operations involving more than one entity class into a separate class (e.g. a control class). 7.5 Techniques to Control Design Dependencies Adhering to the guidelines described in the previous section will not be sufficient to prevent all test problems caused by dependencies. Techniques like the early creation of test artifacts, design reviews and application of design metrics additionally help to identify potential test problems Early Creation of Test Artifacts The early creation of test cases helps to identify test problems caused by design dependencies before the implementation activity starts. Even more valuable is the creation of a prototype

92 76 Chapter 7: Design or initial increment which implements the most important design decisions and allows to execute test cases Design Reviews Design reviews are a common technique to improve the testability of a software design. The use of checklists concerning testability issues improves the efficiency of the reviews [Jung99]. One source for checklist items are e.g. the guidelines described in the previous section. Another approach to improve the effectivity of reviews in terms of the detection of test problems is to ask the reviewers to perform real test tasks (like defining test cases) as part of the review [Lait00] Design Metrics Design metrics (like those described in Chapter 8) can be used to evaluate a software design. If the software design is available in a machine-readable format, the design metrics can be collected automatically. The use of design metrics is however limited in practice, if the design consists of unconnected island diagrams, if it is incomplete or not detailed enough.

93 77 Chapter 8 Implementation Considering testability during design does not obviate the need to consider testability during implementation because the later activity adds new details and dependencies to the system which are potential sources of test problems. This chapter introduces a new set of metrics informally and describes how these metrics are used to identify dependencies critical for testing. Note: A more thorough definition and discussion of the metrics at this point would reduce the cohesiveness of this chapter s thread and is therefore postponed to Chapter Introduction Again we start with an overview of the implementation activity, important implementation concepts, and a brief discussion of how testability relates to the implementation activity Overview of Implementation Activity The starting point of the implementation activity are the results from the design. The granularity of the design usually makes implementation a straightforward task. The classes modeled during design are implemented as source code using a programming language, then compiled (and linked together) into one or more executables. Some process models treat unit testing as a part of the implementation activity which means that unit testing is performed by the programmers and not by a separate test team. However, we treat unit testing, together with integration and system testing, as a separate activity.

94 78 Chapter 8: Implementation Note: We will discuss some maintenance issues related to testability in this chapter, because they are similar to the issues arising in the context of the implementation activity Implementation and Testability Taking care of dependencies during design does not guarantee sufficient testability during later development activities because: A design often consists of unconnected diagrams which does not allow to perform an automated system-level analysis. In this case, the system-level analysis has to be performed during the implementation activity. The design does not provide enough details to identify all testing problems. New dependencies are encountered during the implementation when the level of detail increases compared to the design activity. For example, new parameters or new details within method bodies may lead to new dependencies going beyond those modelled during design. The impact of local dependencies on global testability is not always obvious at the time they are introduced. In this way, developers may remain unaware of the consequences and may introduce additional, test-critical dependencies accidently during the implementation activity like cyclic dependencies [Maso99]. Design erosion: customers may ask for new functionality near the end of the development project or during the maintenance activity. Realizing this functionality in a clean way often requires major design changes which is usually impractical. Without such design changes, however, the new functionality does not fit smoothly into the architecture as it has been designed in the beginning, leading to cross-cutting code, design erosion, and problematic dependencies. Therefore it is meaningful to control the effect of dependencies on testing during the implementation activity, too. 8.2 Implementation Dependencies The term implementation dependency is a shortcut for class dependencies at the implementation level. The types of dependencies, which have been distinguished during design, are found during implementation as well. Since implementation artifacts contain more details than design artifacts, they also contain more potential sources of dependencies.

95 Chapter 8: Implementation 79 Static dependencies We refine our definition of a static dependency (from Section 7.2) as following: A static dependency between a client class A and a supplier class B is a syntactic relationship which is caused by at least one program statement of A containing a reference to B. We limit our analysis of dependencies to syntactic dependencies (as the opposite of semantic dependencies). The reasons are: Identifying semantic dependencies automatically is difficult and a field of ongoing research on its own. As a design goal, syntactic dependencies should be used to reflect the semantic dependencies between the classes of a system as far as possible. Otherwise, the system is more difficult to understand and the opportunities for running an automated analysis are reduced. Automatic identification of semantic dependencies is computation time intensive which does not fit well with the already computation time intensive 1 calculation of metrics used in this work. Sources of static dependencies In the context of Java, a static dependency is caused by different types of statements: 1 generalization statements, 2 type declarations, and 3 access statements Generalization statement Type declaration A generalization statement declares an inheritance or implements relationship between two classes (with the subclass or the class implementing an interface being the client class). A type declaration within a client class contains a reference to a supplier class in order to specify the type of an attribute, a local variable, or an (multidimensional) array, a method parameter or return value, an exception thrown by a method, an exception catched by a catch clause within a method, the result of a cast operation, or an operand of the instanceof 2 operator. 1 System-level metrics are calculated (n+1)-times where n is the number of dependencies within the system. 2 The instanceof operator is used in Java to evaluate a given instance, whether it belongs to a given type.
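The statement types listed above can be illustrated with a small, invented fragment; each commented construct causes a static dependency from class Client to one of its supplier classes:

// Hypothetical fragment; every commented construct introduces a static
// dependency from Client to another class.
class Client extends BaseClient implements Runnable {   // generalization statements
    private EventLog log;                                // attribute type declaration

    public Report process(InputRecord[] records)         // parameter, array and return types
            throws ParseFailure {                        // exception thrown by the method
        Object first = records[0];
        if (first instanceof InputRecord) {              // operand of the instanceof operator
            InputRecord r = (InputRecord) first;         // local variable type and cast result
        }
        try {
            return new Report();
        } catch (RuntimeException e) {                   // exception caught by a catch clause
            throw new ParseFailure();
        }
    }

    public void run() { }
}

class BaseClient { }
class EventLog { }
class Report { }
class InputRecord { }
class ParseFailure extends Exception { }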

96 80 Chapter 8: Implementation Access statement An access statement within the client class describes the access to a member of the supplier class, like reading or writing an attribute of the supplier class, reading or writing a static attribute of the supplier class, calling a method of the supplier class, calling a static method of the supplier class, calling a method defined by an interface, or calling an overridden method of the supplier class (i.e. the superclass). Implementation dependencies and testing The effects of class dependencies identified during implementation are the same as those identified during design (see Section 7.3). 8.3 Control Dependency Structure This section describes necessary steps to keep the overall dependency structure under control - without taking into account the category of the dependencies right now Identify Static Dependencies To identify static class dependencies, a practical approach is to use automated static analysis of the program source code Evaluate Dependency Structure Beside the bare overall number of static dependencies, it is the structure of classes and class dependencies which is relevant to testing. In order to evaluate this structure, we represent it as a so-called dependency graph and define metrics to characterize it. Dependency Graph A dependency graph is basically 1 a directed graph which represents the dependency structure of a software system. Classes 1 Later on, this simple view will be extended to enable reasoning about the type of dependencies.

97 Chapter 8: Implementation 81 are the nodes of the graph and dependencies are represented by arcs. Metric ACD We have defined a system-level metric called Average Class Dependency (ACD) to evaluate the dependency structure of software systems. Metric ACD is defined as the average number of classes a class within the system directly and indirectly depends on. Example 11 Table 2 shows for each class within the dependency graph of Figure 12 the number of direct and indirect class dependencies. The average class dependency for this dependency graph is 29/9 = 3.22. This means that each class depends on more than 3 other classes on average to provide its functionality. Figure 12 Dependency graph

98 82 Chapter 8: Implementation class number of class dependencies direct direct and indirect A 2 8 B 2 6 C 2 6 D 2 5 E 0 0 F 3 3 G 1 1 H 0 0 I 0 0 Table 2 Number of dependencies Interpreting ACD values Like for most other metrics, there is no universally valid threshold level that exactly discriminates good from bad systems. However, the following hints help to define a project specific threshold value: Collect and use metric data and test problem reports from similar projects to define a project specific threshold value. If historical data aren t available, compare the ACD to the overall number of classes. If a class depends on e.g. more than the half of all other classes within the system on average this does not indicate a good dependency structure. John Lakos proposes to use a dependency graph with the same number of nodes (i.e. classes) but shaped like a balanced binary tree [Lako96] as a reference for comparison. (The choice to use a balanced binary tree as a reference is, however, arbitrary.) If the time required to compile and build the system is a relevant issue, then use the ACD value to predict it and check, whether the maximum time for building the system is exceeded. If the ACD exceeds the threshold value, the overall dependency structure is questionable and more effort should be invested to improve it (following the steps in Section 8.3.4). If the ACD does not exceed the threshold value but is close to it, it is still advisable to check whether simple refactorings are able to improve the dependency structure significantly. If the ACD is sufficiently low, the steps described in Section may be skipped.
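The computation in Example 11 can be reproduced with a few lines of code. The following sketch (an illustration only, not the analysis tool described in Part III) encodes the graph of Figure 12 as an adjacency list (its dependencies can be read off Table 3 in Example 15), determines for each class the set of classes it reaches directly and indirectly, and prints the resulting ACD value of 3.22:

import java.util.*;

// Minimal sketch reproducing the ACD computation of Example 11.
public class AcdExample {

    // dependency graph of Figure 12 as an adjacency list
    static final Map<String, List<String>> DEPS = Map.of(
        "A", List.of("B", "C"),
        "B", List.of("D", "E"),
        "C", List.of("D", "F"),
        "D", List.of("E", "F"),
        "E", List.of(),
        "F", List.of("G", "H", "I"),
        "G", List.of("H"),
        "H", List.of(),
        "I", List.of());

    // all classes a class depends on, directly and indirectly
    static Set<String> suppliers(String cls, Map<String, List<String>> deps) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> work = new ArrayDeque<>(deps.getOrDefault(cls, List.of()));
        while (!work.isEmpty()) {
            String next = work.pop();
            if (seen.add(next)) work.addAll(deps.getOrDefault(next, List.of()));
        }
        seen.remove(cls);                         // a class is not counted as its own supplier
        return seen;
    }

    static double acd(Map<String, List<String>> deps) {
        int total = 0;
        for (String cls : deps.keySet()) total += suppliers(cls, deps).size();
        return (double) total / deps.size();
    }

    public static void main(String[] args) {
        System.out.printf("ACD = %.2f%n", acd(DEPS));   // prints ACD = 3.22, i.e. 29/9
    }
}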

99 Chapter 8: Implementation Evaluate Dependency Cycles Dependency cycle A dependency graph may contain cycles 1 called dependency cycles. A dependency cycle is a subgraph 2 (of the dependency graph) with all its constituting classes having a direct or indirect dependency on each other. Example 12 Figure 13 shows an example dependency graph. The shaded area highlights the classes B, D, E, F, and G which are involved in a dependency cycle. A B C D E F G H I Figure 13 Example of a dependency cycle Dependency cycles have a strong negative impact on testing and should be avoided [Lako96]. Since they often have a big impact on the ACD values as well, it is a good strategy to break dependency cycles before starting to evaluate the overall dependency structure (Section 8.3.2) and individual dependencies (Section 8.3.4). Notes: The recommended sequence of the steps carried out during the control of static dependencies deviates from the ordering 1 The relationships causing the dependency cycle at the member level may be acyclic [Wint98]. 2 A cyclic subgraph is also called strong connected component in graph theory.

100 84 Chapter 8: Implementation of the corresponding sections of this work which has been chosen because of didactic reasons. Dependency cycles can be identified based on a graph representation of the system (see Chapter 10). Metrics NCDC and NFD help to evaluate the effect of dependency cycles on testing. Metric NCDC One strategy to deal with a dependency cycle during testing is the following: do not break the cycle and test all classes belonging to it at once. This means, that the classes involved in the cycle are not unit-tested but tested as part of an integration test. Metric NCDC helps to evaluate this test restriction. Metric NCDC is defined as the number of classes within a system, which are involved in a dependency cycle. For example, the value of metric NCDC for the dependency structure of Figure 13 is 5. A high value of metric NCDC means, that a large number of classes can t be tested in isolation because they are involved in a dependency cycle. Metric NFD Another strategy to deal with a dependency cycle is to break it before testing. In order to break a cycle we want to identify its weakest link or a dependency which has been introduced unintentionally but nevertheless contributes to the cycle. A good starting point to identify such dependencies is to search for feedback dependencies. Feedback dependency set A feedback dependency set is a minimal set of dependencies which, when removed, makes a dependency graph acyclic. Example 13 A feedback dependency set for the dependency graph in Figure 13 consists of the dependency from class G to class B. Note: For a complex dependency graph it may not be feasible in practice to identify a minimal feedback dependency set. In this case, a small feedback dependency set is sufficient for our purpose.
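Metric NCDC can be determined from the same reachability information used for ACD: a class is involved in a dependency cycle exactly if it can be reached again from one of its own direct suppliers (this mirrors operation isInvolvedInCycle() of the dependency graph model in Chapter 10). A small illustrative sketch for the graph of Figure 13, whose dependencies can be read off Table 4:

import java.util.*;

// Minimal sketch: NCDC = number of classes involved in a dependency cycle.
public class NcdcExample {

    // dependency graph of Figure 13 as an adjacency list
    static final Map<String, List<String>> DEPS = Map.of(
        "A", List.of("B", "C"),
        "B", List.of("D", "E"),
        "C", List.of("E"),
        "D", List.of("F", "G"),
        "E", List.of("F", "I"),
        "F", List.of("G", "I"),
        "G", List.of("B", "H", "I"),
        "H", List.of(),
        "I", List.of());

    // a class lies on a cycle iff it is reachable again from its own direct suppliers
    static boolean inCycle(String cls) {
        Set<String> reached = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(DEPS.get(cls));
        while (!work.isEmpty()) {
            String next = work.pop();
            if (reached.add(next)) work.addAll(DEPS.get(next));
        }
        return reached.contains(cls);
    }

    public static void main(String[] args) {
        long ncdc = DEPS.keySet().stream().filter(NcdcExample::inCycle).count();
        System.out.println("NCDC = " + ncdc);   // prints NCDC = 5 (classes B, D, E, F, G)
    }
}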

101 Chapter 8: Implementation 85 Feedback dependency Metric NFD Each dependency in a feedback dependency set is called a feedback dependency. Metric NFD is defined as the cardinality of the feedback dependency set for the system. Example 14 The feedback dependency set for the dependency graph in Figure 13 consists of one dependency. The value of metric NFD for this dependency graph is therefore 1. A high value of metric NFD means that much effort is required to break the dependency cycles during testing with help of stubs or to refactor the feedback dependencies before testing Identify Test-Critical Dependencies Dependencies with a large impact on testing should be identified and, if appropriate, refactored or removed. Our general approach to identify these test-critical dependencies is based on sensitivity analysis of the dependency structure. Sensitivity Analysis We introduce sensitivity analysis as the evaluation of how sensitive a particular system characteristic is to the existence (or type) of an individual dependency (assuming that the dependency structure has an effect on this characteristic). Note: A precondition to obtain good results from sensitivity analysis is, that the dependency structure is not degenerated 1. (If the system structure is degenerated however, then there is not only a problem with testability.) To evaluate the impact of a particular dependency on a particular system characteristic we use reduction metrics. 1 A degenerated dependency structure means, that each class depends on the majority of all other classes within the system.

102 86 Chapter 8: Implementation Reduction metric for a dependency A reduction metric rm(d) describes the degree to which the value of a system-level metric M is reduced 1 if dependency d is removed from the system. To calculate a reduction metric rm(d), we compare the values of metric M for two versions of the system which differ only in the existence (or type) of the dependency d. In the following we introduce the reduction metrics racd, rncdc, and rnfd. Metric racd Metric racd(d) is calculated as the percentage to which the value of the metric ACD is reduced, if the dependency d is removed from the system. Example 15 Table 3 shows for each dependency the value of metric ACD when the dependency is removed from the system (Figure 14, same as Figure 12) as well as the corresponding values of the racd metric. The dependencies are sorted by their racd values. A B C D E F G I H Figure 14 Dependency graph 1 For most metrics the metric value decreases when a dependency is removed.

103 Chapter 8: Implementation 87
dependency d   ACD if d is removed from the system   racd(d) [%]
D→F   2.33   27.6
F→I   2.67   17.2
F→G   2.67   17.2
B→D   2.67   17.2
C→D   3.00   6.9
D→E   3.00   6.9
G→H   3.11   3.4
A→B   3.11   3.4
A→C   3.11   3.4
F→H   3.22   0.0
B→E   3.22   0.0
C→F   3.22   0.0
Table 3 Dependencies and values of racd
A high value of metric racd for a particular dependency means that the dependency has a large effect on metric ACD, i.e. on the overall number of direct and indirect class dependencies. Metric rnfd Metric rnfd is a reduction metric based on metric NFD which is useful to identify dependencies with a large impact on the overall number of feedback dependencies within the system and which are therefore potential candidates for refactoring. Note: Each feedback dependency has an rnfd-value greater than zero, but not each dependency with an rnfd-value greater than zero is necessarily a feedback dependency (because only a subset of the dependencies within a dependency cycle are feedback dependencies). Metric rncdc Metric rncdc is a reduction metric based on metric NCDC. It helps to identify dependencies with a large impact on the overall number of classes involved in dependency cycles and is therefore well suited to identify dependencies which are potential candidates for refactoring.

104 88 Chapter 8: Implementation Example 16 Table 4 shows the value of the reduction metric rncdc for the dependencies of Figure 15 (same as Figure 13). The dependency between classes G and B has the highest rncdc value. If this dependency can be removed, then the number of classes within a dependency cycle is reduced by 100%. Figure 15 Dependency graph
dependency d   NCDC, if d is removed from the system   rncdc(d) [%]
A→B   5   0
A→C   5   0
B→D   4   20
B→E   4   20
C→E   5   0
D→F   5   0
D→G   5   0
E→F   4   20
E→I   5   0
F→G   3   40
F→I   5   0
G→B   0   100
G→H   5   0
G→I   5   0
Table 4 Dependencies and values of rncdc
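Computing a reduction metric is conceptually a brute-force loop: remove one dependency at a time, recompute the underlying system-level metric, and report the relative reduction. (This is also why system-level metrics are calculated (n+1) times for n dependencies.) The following illustrative sketch shows this sensitivity-analysis loop; the base metric is passed in as a function, so the same loop yields racd, rncdc, or rnfd. Invoked as ReductionMetrics.printReduction(AcdExample.DEPS, AcdExample::acd), with the names taken from the ACD sketch after Example 11, it reproduces the racd column of Table 3:

import java.util.*;
import java.util.function.ToDoubleFunction;

// Illustrative sketch of the sensitivity-analysis loop behind the reduction metrics.
class ReductionMetrics {
    static void printReduction(Map<String, List<String>> deps,
                               ToDoubleFunction<Map<String, List<String>>> baseMetric) {
        double base = baseMetric.applyAsDouble(deps);
        for (String client : deps.keySet()) {
            for (String supplier : deps.get(client)) {
                Map<String, List<String>> reduced = new HashMap<>(deps);
                List<String> remaining = new ArrayList<>(deps.get(client));
                remaining.remove(supplier);                     // remove dependency d from the system
                reduced.put(client, remaining);
                double value = baseMetric.applyAsDouble(reduced);
                double reduction = base == 0 ? 0 : 100.0 * (base - value) / base;
                System.out.printf("%s -> %s : %.1f%%%n", client, supplier, reduction);
            }
        }
    }
}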

105 Chapter 8: Implementation 89 Discussion According to Fenton [Fent96], internal structural software characteristics can t be used to predict external software characteristics 1. However, internal structural software characteristics may be important indicators of potentially problematic areas (i.e. hot spots) concerning external software characteristics. This basically applies to our metrics as well: metrics ACD and racd e.g. measure structural software characteristics whereas testability is mainly 2 an external software characteristic. Still, metric racd was found to be a good indicator of testability hot spots within our case studies (see Part IV) Identify Candidates for Refactoring In the previous section we have described how to identify testcritical dependencies with a large impact on the dependency structure. The next step is to check each test-critical dependency manually and to answer the following questions: 1 Is the dependency really needed? Sometimes, a dependency can be removed without any impact on system functionality and need for replacement. In this case, no further checking is necessary and the dependency can be removed. 2 Does the metric value indicate a test problem? If the metric values does not indicate any relevant design or test problem, the evaluation of the dependency finishes at this point. 3 Does the dependency comply with the system design? If the dependency does not comply with the design (assuming the design is well-done), then it should be refactored and no further checking is necessary. 4 Is the dependency the actual source of the test problem? The metrics aren t always able to identify exactly the dependency which is problematic. Sometimes, the real cause of the problem lies in adjacent or close-by dependencies. Therefore it is necessary to identify the dependency which is 1 External attributes of a product like maintainability can be measured only with respect to how the product relates to its environment. 2 Some aspects of testability like the ease to recompile the system after program changes have a close relationship to structural characteristics.

106 90 Chapter 8: Implementation the actual origin of the test problem by studying the classes involved. If the dependency is indeed the actual source of a test problem, then a) check if an alternative solution is available, which complies with the initial design and b) check the design. If another dependency can be identified as the actual source of a test problem then proceed with evaluating this dependency. Note: Of course, more than one dependency can be the source of a given test problem. 5 Is an alternative solution available? The effect of a possible refactoring must not have a major negative impact on some other software quality characteristic. 6 How much effort is needed to refactor the dependency and to implement an alternative solution? The refactoring effort depends on the number of dependencies and classes which have to be changed and the complexity of the changes: The number of required changes increases, if the dependency is not a local phenomenon but a symptom of an overall bad design. If the supplier class needs to be changed, then other client classes of this supplier class may have to be refactored as well which increases the refactoring effort. The complexity of the refactoring tasks depends on whether they can be automated (e.g. using text processing functionality like search-and-replace) or whether they require a deep understanding of the system implementation and human judgement during each refactoring step Perform Refactorings This section describes ways to remove dependencies and to break dependency cycles. Note: The refactorings in this section are not novel but presented here for the reader s convenience. Removing a dependency We assume a static syntactic dependency between a (concrete or abstract) client class A and a (concrete) supplier class B which is required by A to provide its functionality.

107 Chapter 8: Implementation 91 To break this static syntactic dependency, introduce an interface or abstract class which defines the type of the supplier class and establish a link from the client instance to the supplier instance at runtime. There are different alternative solutions to establish the link: 1 Make another class responsible to establish the link between the client and supplier instance. Add a parameter of the supplier type to a method of the client class (a constructor, or a method that requires access to a server instance, or a dedicated setup method) which allows other objects to set the link. If there are many server classes this approach would result in too many parameters. A variation of this approach is therefore to use a context object [Rain01] (which encapsulates and provides access to a number of server instances) as a parameter. 2 Make another class responsible to establish a link from the client to an instance of a factory class [Gamm94] which allows the client to get access to an object implementing the supplier type. If the supplier class is a singleton [Gamm94], the factory class always returns a reference to the only instance of the singleton. 3 The client class (or a third-party class) determines the supplier class at run-time e.g. by reading a configuration file or by searching a particular directory of the file system, loads this class dynamically into the system and creates an instance from it. Example 17 Figure 16a shows a dependency graph with an ACD equal 2.5, consisting of six classes and five dependencies. We assume, that the dependency between class A and class B is test-critical and is to be removed. Substituting this dependency by a dependency to an interface IB which is implemented by B (Figure 16b) breaks the long chain of syntactic class dependencies and reduces the ACD from 2.5 to To establish the required link between an instance of A and an instance of IB (i.e. B) we follow solution 1) from above and introduce class C to take this responsibility, resulting finally in an ACD value of 1.86 (Figure 16b).
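Example 17, using solution 1 from the list above, could look roughly as follows in Java (the names are invented): class A depends only on the interface IB, and class C takes the responsibility to establish the link, so a test can wire a stub into A instead of an instance of B:

// Hypothetical sketch of the refactoring in Example 17 (solution 1).
interface IB {                       // new interface breaks the static dependency from A to B
    String fetch(int id);
}

class B implements IB {              // original supplier, possibly hard to test
    public String fetch(int id) { return "record " + id; }
}

class A {                            // client now depends on the type IB only
    private final IB supplier;
    A(IB supplier) { this.supplier = supplier; }
    String describe(int id) { return "got: " + supplier.fetch(id); }
}

class C {                            // responsible for establishing the link at runtime
    A createConfiguredA() { return new A(new B()); }
}

// In a unit test, A can be wired to a stub instead:
//   A cut = new A(id -> "stub record");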

108 92 Chapter 8: Implementation (a) (b) B (c) C B A A A B «interface» IB «interface» IB ACD = 2.5 ACD = 1.67 ACD = 1.86 Figure 16 Refactoring test-critical dependencies General information on how to perform refactorings can be found in [Fowl00]. Further information on how to break static syntactic dependencies in the context of the programming language C++ can be found in [Lako96]. Note: good and executable test cases (if they already exist) are an invaluable help to ensure a correct system functionality after the refactoring. Refactoring dependency cycles If a (feedback) dependency causing a dependency cycle is not required for system functionality, then the dependency can just be removed. If the dependency, however, is required for system functionality, then check whether it is possible to extract interdependent functionality into another (new) class, distinct from the original client and supplier class. Demotion Extracting interdependent functionality into a common supplier class is called demotion [Lako96]. In case of two classes within the dependency cycle, demotion works as follows: the two classes A and B (Figure 17a) are split into four classes where the classes A and B contain the interdependent functionality (Figure 17b). The two lower-level classes can then be combined into one class C to avoid the cyclic dependency (Figure 17c).

109 Chapter 8: Implementation 93 Figure 17 Demotion to break cyclic dependencies: (a) original cyclic dependency, (b) intermediate factored dependencies, (c) final acyclic dependencies. Escalation Extracting interdependent functionality into a higher-level client class is called escalation [Lako96]. The required steps (Figure 18) are analogous to those during demotion with the intermediate classes A and B containing the interdependent functionality. Figure 18 Escalation to break cyclic dependencies: (a) original cyclic dependency, (b) intermediate factored dependencies, (c) final acyclic dependencies. Other solutions to break a dependency cycle between classes include:
Observer pattern: the observer pattern [Gamm94] (which does not break cyclic dependencies between class instances),
Command pattern: the command pattern [Gamm94] (which is able to break cyclic dependencies at the instance level as well),
Polling: a polling mechanism, which means that a supplier instance tests periodically whether a client instance requires some service.
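Demotion, as described above, can be sketched as follows (an invented example): the functionality the two classes originally needed from each other is moved into a common supplier, so both former cycle members now depend only on the new class:

// Hypothetical sketch of demotion: the shared functionality that Invoice and
// Payment originally needed from each other is moved into a common supplier.
class CurrencyRules {                  // class C: holds the demoted, formerly interdependent functionality
    int round(int amountInCent) { return (amountInCent / 10) * 10; }
}

class Invoice {                        // depends on CurrencyRules only, no longer on Payment
    private final CurrencyRules rules;
    int total;
    Invoice(CurrencyRules rules) { this.rules = rules; }
    int roundedTotal() { return rules.round(total); }
}

class Payment {                        // depends on CurrencyRules only, no longer on Invoice
    private final CurrencyRules rules;
    int amount;
    Payment(CurrencyRules rules) { this.rules = rules; }
    int roundedAmount() { return rules.round(amount); }
}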

110 94 Chapter 8: Implementation 8.4 Control Hard-Wired Dependencies Hard-wired dependencies have an effect on the ability to test classes in isolation (see Section 7.3.1). This section describes how to control this effect of hard-wired dependencies Identify Hard-Wired Dependencies To identify hard-wired dependencies use the criteria described in Section Evaluate Hard-Wired Dependencies To evaluate the hard-wired dependencies within a system we use the same metric-based approach as described in Section 8.3, but exclude all dependencies which are not hard-wired from the metric analysis. This is done by simply removing all nonhard-wired dependencies from the dependency graph (not from the source code) before starting the metric calculation. A metric, which applies to a dependency graph consisting of only hard-wired dependencies, is denoted by adding a postfix h to the metric name. Metric ACDh To evaluate the effect of hard-wired dependencies on the ease to isolate the classes under test during unit testing we use metric ACDh. Metric ACDh is a system level metric (like metric ACD) and is calculated as the average number of classes a class depends on, based on direct and indirect hard-wired dependencies. A high value of metric ACDh means, that the classes within the system have many direct and indirect supplier classes which can t be substituted (without changing the source code) because of hard-wired dependencies Identify Candidates for Refactoring To identify dependencies with a large effect on the average number of indirect hard-wired dependencies we use metric racdh.

111 Chapter 8: Implementation 95 Metric racdh Metric racdh is a reduction metric based on metric ACDh. A high value of metric racdh(d) means, that refactoring the dependency d substantially improves the ability to test the classes of the system in isolation Perform Refactorings Refactoring a hard-wired dependency means to remove it, to substitute it, or to change its category (e.g. from a hard-wired dependency to a type dependency). This section describes several refactoring alternatives. Note: Most of the following refactoring alternatives have already been described elsewhere. However, they are presented here for the reader s convenience. Removing hard-wired dependencies The solutions described in Section apply to hard-wired dependencies which are a subset of the overall population of dependencies within a system. Converting hard-wired dependencies into type dependencies The specific solution to convert a hard-wired dependency into a type dependency depends on its underlying cause. In the following list we describe different causes of hard-wired dependencies and examples of possible solutions: 1 The client class is a subclass of the supplier class. Possible solution: The client class A implements an interface IA which inherits from an interface IB implemented by the supplier class B. Calls to methods defined within IB are forwarded by the client instance to an instance of the supplier type (an instance of class B or some other class implementing IB). 2 The client instance creates the supplier instance directly by calling a (pseudo) constructor. Possible solutions are (compare with Section 8.3.6): Add a parameter of the supplier type to a method of the client class which allows another object to set the link. Make another class responsible to establish a link from the client instance to an instance of a factory class which

112 96 Chapter 8: Implementation allows the client instance to get access to an object implementing the supplier type. The client class (or a third-party class) determines the supplier class and creates its instance at run-time. 3 The client instance accesses a static method of the supplier class. Possible solutions: If each client instance has access to at least one supplier instance then convert the static method into an instance method (which class-internally may still call a static method of the supplier class). Introduce a new class with only a single instance (see discussion on singletons in Section 7.4.3) and instance methods instead of class methods. 4 The client instance accesses a static attribute of the supplier class. Possible solution: Add non-static set and get methods to the supplier class which class-internally access the static attribute. 5 The client class accesses a member of the supplier class which is of a primitive type (like int) and which is declared to be static and final (i.e. the client class accesses a constant defined in the supplier class). Possible solution: Provide the constant as the return value of a dedicated instance method. 8.5 Control Semi-Hard-Wired Dependencies Semi-hard-wired dependencies have a less negative impact on testing but are easier to refactor in general. This section describes how to control the effect of semi-hard-wired dependencies on testing Identify Semi-Hard-Wired Dependencies To identify semi-hard-wired dependencies use the criteria described in Section

113 Chapter 8: Implementation Evaluate Semi-Hard-Wired Dependencies For a semi-hard-wired dependency it is interesting to know how far it is from being completely hard-wired: if a dependency is semi-hard-wired only because the client class e.g. calls a small number of static methods of the supplier class, we expect a higher probability that there are no actual reasons for the dependency to be semi-hard-wired and that it is easy to change the category of the dependency and refactor it into a type dependency. We introduce metric DSTMh to evaluate the degree to which a dependency is semi-hard-wired. Metric DSTMh Metric DSTMh(d) is defined as the percentage of statements which cause a dependency d to be semi-hard-wired or hard-wired. In other words: metric DSTMh compares the number of statements causing a dependency to be (semi-)hard-wired to the overall number of statements causing the dependency. Statements which cause a dependency to be (semi-)hard-wired are statements concerning the access to static or final members of the supplier class. Example 18 The source code of class A in Figure 19 contains three distinct statements containing references to an attribute and to methods of class B. One of them (the call of the static method) causes the class dependency to be semi-hard-wired. Note: Multiple accesses from one method of the client class to the same member of the supplier class in the same category 1 are counted only once. The value of the metric DSTMh for the dependency between A and B is therefore 1/3 = 33%. 1 For example, read and write access to an attribute belong to different categories.

114 98 Chapter 8: Implementation
public class A {
    private B myb;
    ...
    public void m() {
        int i = myb.r;   // read access to attribute
        myb.n();         // method call
        ...
        myb.n();         // method call (same as above)
        ...
        B.p();           // static method call
    }
}

public class B {
    public int r;
    public void n() {...}
    public static void p() {...}
}
Figure 19 Example source code
A low value of metric DSTMh for a semi-hard-wired dependency indicates a high probability that the program statements causing it to be semi-hard-wired were not intended and that it is easy to refactor the dependency into a type dependency. Note: Metric DSTMh can be applied to hard-wired dependencies as well. In case the dependency is hard-wired because of an inheritance relationship, the value of DSTMh is smaller than 100%. Nevertheless, we do not assume that an inheritance relationship is introduced unintentionally, nor that it is easy to refactor. Therefore we use metric DSTMh mainly to study semi-hard-wired dependencies. Perform Refactorings The refactorings to transform semi-hard-wired dependencies into type dependencies are equal to items 3 to 5 in Section

115 Chapter 8: Implementation Application of Metrics to Design It is possible to apply some of the metrics described in this chapter to design artifacts as well because there is a direct mapping from a UML class diagram (Figure 20a) to a dependency graph (Figure 20b). (a) «interface» IA B C (b) IA B C A B D E A B D E Figure 20 Dependency graph and class diagram However, some issues limit the applicability of the metrics during design: If the design consist of unconnected island diagrams (which is a common situation), this hinders creating a dependency graph for the entire system and calculating systemlevel metrics. The design usually does not describe object construction which is one important source for hard-wired dependencies and therefore limits the applicability of related metrics like racdh. The design does not describe the implementation details necessary to calculate metrics like DSTMh.

116 100 Chapter 8: Implementation

117 101 Chapter 9 Test During the development activities preceding the test activity it is only possible to predict testability. This chapter gives some hints on how to actually evaluate testability during the test activity. 9.1 Introduction During the test activity we can evaluate how easy it is to achieve the test goals. Information on the degree of testability which has been actually achieved is required as a feedback information to improve ongoing and future testability efforts. In the following section we give some hints on how to evaluate testability during the test activity. These hints round up the description of our approach to improve testability by controlling dependencies. 9.2 Evaluate Testability The testability of a software system can be evaluated based on actual test problems and test process metrics Test Problem Reports A test problem report describes an instance of a testability issue which occurred while performing a testing task. The report may describe quantitative problems (e.g. test execution time of 15 seconds is to slow in the context of... ) or qualitative problems (e.g. the test task XY can t be achieved ). Test problems can be treated as another category of software defects and e.g. in this way recorded and administrated using an existing defect tracking system.

118 102 Chapter 9: Test Test problem reports are a basis to define test process metrics related to testability evaluation. Content of a Test Problem Report A test problem report may contain the following information: severity of the test problem (e.g. based on its scope) repair priority type of the problem (e.g. lack of controllability) problem description information for root cause analysis (including the activity in which the problem could have been detected) Test Process Metrics Test process metrics can be used to evaluate the testability of a software system. Examples for such metrics are: the average number of test cases that have to be changed after a program change, the time required to run a complete regression test, the time required for the overall testing during a development project, the percentage of fully automated test cases, the total number of test problem reports (of a given severity), the actual number of open (i.e. unresolved) test problem reports. the average time test problem reports remain open. Unfortunately, data about test effort are seldom collected systematically during industrial development projects.

119 103 Part III Metrics and Tool Support This part provides formal definitions of our metrics (described informally in Chapter 8) based on a graph representation of object-oriented software and presents a tool which supports analyzing source code as well as calculating the metrics.

120 104

121 105 Chapter 10 Dependency Graph The metrics described in this work are calculated based on a representation of a system s dependency structure called dependency graph. This chapter introduces an abstract model of dependency graphs, which is used in Chapter 11 as a basis to define our metrics and explains how to construct a dependency graph Introduction A number of different formalisms have been described in the literature to represent dependencies within object oriented software including [Baud02], [Bria01], [Kung93], [Labi00], [McGr96c], [Roth99] and [Trao00]. The selection of a particular formalism depends on the requirements of the analysis intended to be performed. In the context of this work, the formalism shall allow to represent class-to-class dependencies and their underlying sources at the statement level. The later is needed to distinguish the different categories of dependencies. Note: An initial but later cancelled requirement was the ability to map the graph representation onto a relational database in order to avoid complex database structures. This requirement asked for a simple formalism and ruled out the use of an abstract syntax tree as a representation of the dependencies within a system. A simple formalism to represent dependencies is difficult to find, because it has to capture dependencies between classes, dependencies between class members and classes, as well as dependencies between class members.

122 106 Chapter 10: Dependency Graph Most notations represent class dependencies only, except the one defined by Traon et al. [Trao00]: this notation is based on a graph, containing both class nodes and member nodes at the same level of abstraction to represent all three kinds of dependencies. We have defined a notation called dependency graph model which allows to represent a dependency graph in terms of classes, class members, class dependencies, and sources of class dependencies Overview of Dependency Graph Model An instance of a dependency graph model (Figure 21) represents classes and class members as nodes of the graph (i.e. as instances of class Class or Member) whereas class dependencies are represented as edges (i.e. as instances of class ClassDependency). Between two classes there is at most one class dependency in each direction. The dependency graph model is able to represent the different sources of class dependencies within a Java program, which exist at the statement level: A statement declaring a generalization relationship (i.e. an inheritance or implements relationship) is represented by an instance of class Generalization. A type declaration statement (like the declaration of a parameter type) is represented by an instance of class TypeDeclaration. Note: A type declaration defines a dependency between a class member and a class. A statement describing the access to a class member (like a method call) is represented by an instance of class Member- Access. Note: The access to a class member introduces a dependency between two class members. The underlying sources of a class dependency are represented as a subedge of the edge representing the class dependency. The entire system under analysis is represented by an instance of class System.

123 Chapter 10: Dependency Graph 107 System 1 1 * Class client * * ClassDependency 1 supplier * /supertype 1 /subtype 1 1 Type * * Generalization * TypeDeclaration * Origin * * Member 1 clientmember 1 suppliermember * * MemberAccess 1 * Figure 21 Reflexive class dependencies Dependency graph model Reflexive class dependencies are not included in the dependency graph model because we are only interested in class dependencies between different classes. Additionally, this simplifies the algorithms used for the analysis of the dependency graph and accelerates metric calculation. Note: a reflexive dependency of a given class may cause dependency cycles between its subclasses. This effect is not investigated in this work. Notwithstanding, the dependency graph model allows to represent access between members of the same class like intra-class method calls. In this case, the member access is not linked to a class dependency (which is the reason, why the lower bound of the multiplicity of the aggregation between class MemberAccess and ClassDependency is zero at the ClassDependency-end). An example of a dependency graph will be given in Chapter 10.5.

124 108 Chapter 10: Dependency Graph 10.3 Classes of Dependency Graph Model This section describes the classes and operations of the dependency graph model (Figure 21) which will be used in Chapter 11 to define our metrics formally. Note to the reader: Most operation names of the dependency graph model are self- explaining. Therefore it is possible to skip this chapter now and to use it as a reference while reading the metric definitions in Chapter 11 instead. The semantics of some operations is described using the Object Constraint Language (OCL) of the UML [UML01] Class Class Class Class represents a concrete class, abstract class, or interface and contains the following attribute and operations: Attribute qualifiedname Description a unique string consisting of the package name and the class name Operation isconcreteclass() isabstractclass() isinterface() isfinal() directsupertypes() allsupertypes() allancestors() directsuppliers() directsuppliersofset() reachableclasses() allsuppliers() isinvolvedincycle() Description returns true, if the class is a concrete class, else false returns true, if the class is an abstract class, else false returns true, if the class is an interface, else false returns true, if the class is declared to be final, else false see below see below see below see below see below see below see below see below

directSupertypes()
Operation directSupertypes() returns the set of all direct superclasses and implemented interfaces of the class:

context Class::directSupertypes() : Set(Class)
post: result = self.generalization[supertype]

allSupertypes()
Operation allSupertypes() returns the set of all direct and indirect superclasses as well as all interfaces that are implemented by the class directly or indirectly:

context Class::allSupertypes() : Set(Class)
post: result = self.directSupertypes()->union(
          self.directSupertypes().allSupertypes()->asSet())

allAncestors()
Operation allAncestors() returns the set of 1) all direct and indirect superclasses of a concrete or abstract class or 2) all interfaces from which a given interface inherits:

context Class::allAncestors() : Set(Class)
post: if not self.isInterface()
          result = self.allSupertypes()->select(not isInterface())
      else
          result = self.allSupertypes()

directSuppliers()
Operation directSuppliers() returns the set of all direct supplier classes of the class:

context Class::directSuppliers() : Set(Class)
post: result = self.classDependency[supplier]

directSuppliersOfSet()
Operation directSuppliersOfSet() returns the set of all direct supplier classes of a set of classes:

context Class::directSuppliersOfSet(s: Set(Class)) : Set(Class)
post: result = s->collect(c : Class | c.directSuppliers())->asSet()

Notes: This operation is a helper operation used to specify other operations. It would be appropriate to specify it as a class operation; unfortunately, OCL does not support the specification of class operations.

reachableClasses()
Operation reachableClasses() returns the set of all supplier classes the input set of (client) classes depends on:

context Class::reachableClasses(rc: Set(Class)) : Set(Class)
post: if rc->includesAll(directSuppliersOfSet(rc)) then
          result = rc
      else
          result = reachableClasses(rc->union(directSuppliersOfSet(rc)))

Note: The result includes the input set of classes. (This is important when using this operation.) This operation is a helper operation used to specify other operations like allSuppliers() and isInvolvedInCycle(). It would be appropriate to specify this operation as a class operation; unfortunately, as mentioned before, OCL does not support the specification of class operations.

allSuppliers()
Operation allSuppliers() returns the set of all direct and indirect supplier classes of the class:

context Class::allSuppliers() : Set(Class)
post: result = reachableClasses(Set{self})->excluding(self)

isInvolvedInCycle()
Operation isInvolvedInCycle() returns true if the class is involved in a dependency cycle:

context Class::isInvolvedInCycle() : Boolean
post: if directSuppliers()->size() = 0
          result = false
      else
          result = reachableClasses(reachableClasses(Set{self})->
              excluding(self))->includes(self)

Alternatively, the Tarjan algorithm [Tarj72] can be used to identify dependency cycles.

Invariant
If a class is final, all of its members are final:

context Class
inv: self.isFinal() implies self.member->forAll(isFinal())
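The transitive-closure operations above translate directly into a simple worklist traversal. The following Java sketch is an assumption-level illustration over a plain adjacency map (class name mapped to the set of names of its direct suppliers); it is not the OCL-specified model itself.

import java.util.*;

// Worklist traversal over a map "class -> direct supplier classes" (assumed representation).
final class ReachabilitySketch {

    // Corresponds to reachableClasses(): the result includes the start set.
    static Set<String> reachable(Map<String, Set<String>> directSuppliers, Set<String> start) {
        Set<String> result = new HashSet<>(start);
        Deque<String> todo = new ArrayDeque<>(start);
        while (!todo.isEmpty()) {
            for (String supplier : directSuppliers.getOrDefault(todo.pop(), Set.of())) {
                if (result.add(supplier)) todo.push(supplier);
            }
        }
        return result;
    }

    // Corresponds to allSuppliers(): all direct and indirect suppliers, excluding the class itself.
    static Set<String> allSuppliers(Map<String, Set<String>> directSuppliers, String c) {
        Set<String> result = reachable(directSuppliers, Set.of(c));
        result.remove(c);
        return result;
    }

    // Corresponds to isInvolvedInCycle(): c is in a cycle iff it is reachable from its own suppliers.
    static boolean isInvolvedInCycle(Map<String, Set<String>> directSuppliers, String c) {
        return reachable(directSuppliers, directSuppliers.getOrDefault(c, Set.of())).contains(c);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> g = Map.of("A", Set.of("B"), "B", Set.of("C"), "C", Set.of("A"));
        System.out.println(allSuppliers(g, "A"));      // [B, C] (order may vary)
        System.out.println(isInvolvedInCycle(g, "A")); // true
    }
}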

10.3.2 Class Member

Class Member represents a class member (i.e. an attribute or a method) and contains the following operations:

- isAttribute(): returns true if the class member is an attribute, else false
- isMethod(): returns true if the class member is a method (except constructors), else false
- isConstructor(): returns true if the class member is a constructor, else false
- isPseudoConstructor(): see below
- isStatic(): returns true if the class member is static, else false
- isFinal(): returns true if the class member is final, else false

isPseudoConstructor()
Operation isPseudoConstructor() returns true if the class member is static and if its return type is equal to the containing class or an interface implemented by the class (see Section 7.2.1).

context Member::isPseudoConstructor() : Boolean
post: result = self.isStatic() and
      self.typeDeclaration->select(td | td.isReturnTypeDeclaration() and
          (td.class = self.class or
           self.class.allSupertypes()->select(isInterface())->contains(td.class)))->size() = 1

10.3.3 Class ClassDependency

Class ClassDependency represents a class dependency and contains the following operations:

- isHardWired(): see below
- isSemiHardWired(): see below

isHardWired()
Operation isHardWired() returns true if the class dependency is hard-wired. A class dependency is hard-wired (see Section 7.2.1) if at least one of the following facts holds:

1 The client class has a (direct or indirect) generalization relationship to the supplier class.
2 The client creates an instance of the supplier class itself.
3 Each member access is either a static method call or an access to a final class member.
4 The supplier class is final.

context ClassDependency::isHardWired() : Boolean
post: result = self.client.allAncestors()->contains(self.supplier) or
      self.memberAccess->forAll(causesHardwiring()) or
      self.supplier.isFinal()

Note: See the description of operation causesHardwiring() of class MemberAccess below, which accounts for cases 2 and 3 from above.

isSemiHardWired()
Operation isSemiHardWired() returns true if the class dependency is semi-hard-wired. A class dependency is semi-hard-wired (see Section 7.2.2) if the client class is not hard-wired to the supplier class, and at least one method of the supplier class accessed by the client class (or its instances) is declared to be static or final, or at least one attribute of the supplier class accessed by the client class (or its instances) is declared to be final.

context ClassDependency::isSemiHardWired() : Boolean
post: result = not self.isHardWired() and
      self.memberAccess->exists(causesHardwiring())

Invariant
There are no reflexive class dependencies:

context ClassDependency
inv: self.client <> self.supplier

10.3.4 Class Generalization

Class Generalization represents an inheritance or implements relationship and contains the following operations:

- isInheritance(): returns true if the generalization relationship is based on inheritance, else false (see also Appendix B.1.1 and Appendix B.1.2)
- isImplements(): returns true if the generalization relationship is based on an implements relationship, else false (see also Appendix B.1.3)

Invariant
Within a generalization dependency, the subtype is the client class and the supertype is the supplier class:

context Generalization
inv: self./subtype = self.classDependency.client and
     self./supertype = self.classDependency.supplier

10.3.5 Class TypeDeclaration

Class TypeDeclaration represents a type declaration as the underlying cause of a class dependency and contains the following operations:

- isParameterTypeDeclaration(): returns true if the type specifies a method parameter, else false (see also Appendix B.2.1)
- isReturnTypeDeclaration(): returns true if the type specifies a return value, else false (see also Appendix B.2.2)
- isThrowClauseTypeDeclaration(): returns true if the type specifies a throw clause, else false (see also Appendix B.2.3)
- isCatchClauseTypeDeclaration(): returns true if the type specifies a catch clause, else false (see also Appendix B.2.4)
- isAttributeTypeDeclaration(): returns true if the type specifies an attribute, else false (see also Appendix B.2.5)
- isLocalVariableTypeDeclaration(): returns true if the type specifies a local variable, else false (see also Appendix B.2.6)
- isArrayInitializationTypeDeclaration(): returns true if the type is used to specify an array initializer, else false (see also Appendix B.2.7)
- isCastOperatorTypeDeclaration(): returns true if the type is used within a cast operation, else false (see also Appendix B.2.8)
- isInstanceOfOperatorTypeDeclaration(): returns true if the type is used on the right side of an instanceof operator, else false (see also Appendix B.2.9)

Invariant
The class containing the class member in which the type declaration occurs is the client class; the class specifying the type referenced in the type declaration is the supplier class:

context TypeDeclaration
inv: self.origin.class = self.classDependency.client and
     self.type = self.classDependency.supplier

10.3.6 Class MemberAccess

Class MemberAccess represents an access to a class member as a cause of a class dependency and contains the following operations:

- isAttributeUse(): returns true if an attribute is read, else false (see also Appendix B.3.1)
- isAttributeDef(): returns true if an attribute is written, else false (see also Appendix B.3.2)
- isStaticAttributeUse(): returns true if a static attribute is read, else false (see also Appendix B.3.3)
- isStaticAttributeDef(): returns true if a static attribute is written, else false (see also Appendix B.3.4)
- isMethodCall(): returns true if a method of a concrete or abstract class is called, else false (see also Appendix B.3.5)
- isStaticMethodCall(): returns true if a static method is called (except constructors), else false (see also Appendix B.3.6)
- isConstructorCall(): returns true if a constructor is called, else false (see also Appendix B.3.7)
- isCallToInterface(): returns true if a method defined by an interface is called, else false (see also Appendix B.3.8)
- isPseudoConstructorCall(): see below
- isSuperCall(): returns true if a constructor of the superclass or an overridden or shadowed method is called using the super keyword, else false (see also Appendix B.3.9 and Appendix B.3.10)
- causesHardwiring(): see below

isPseudoConstructorCall()
Operation isPseudoConstructorCall() returns true if the called class member is a pseudo-constructor (see Section 7.2.1).

context MemberAccess::isPseudoConstructorCall() : Boolean
post: result = self.supplierMember.isPseudoConstructor()

causesHardwiring()
Operation causesHardwiring() returns true if the member access causes a hard-wired or semi-hard-wired dependency at the class level (see Section 7.2.1).

context MemberAccess::causesHardwiring() : Boolean
post: result = self.isStaticMethodCall() or self.isSuperCall() or
      self.isConstructorCall() or self.isPseudoConstructorCall() or
      self.supplierMember.isFinal()

Note: The reference of operation causesHardwiring() to operation isSuperCall() takes into account that a call from a method of a subclass to an overridden method of the superclass (using the super construct) is bound to this particular method at compile time, which would cause the dependency to be hard-wired or semi-hard-wired. This effect is, however, masked by the inheritance relationship between the subclass and the superclass, which leads to a hard-wired dependency anyway.

Invariant
If the member access causes a dependency between two distinct classes, the class containing the method which contains the access statement is the client class, and the class containing the accessed member is the supplier class:

context MemberAccess
inv: self.classDependency->notEmpty() implies
     self.clientMember.class = self.classDependency.client and
     self.supplierMember.class = self.classDependency.supplier
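Read together, the operations of ClassDependency and MemberAccess amount to a small classification rule. The following Java sketch illustrates one possible reading of that rule; the record and flag names are hypothetical, and the guard requiring at least one member access for case 3 is an interpretation of the informal definition.

import java.util.List;

// Illustrative classification of a class dependency (record and flag names assumed).
final class HardWiringSketch {

    record Access(boolean staticMethodCall, boolean superCall, boolean constructorCall,
                  boolean pseudoConstructorCall, boolean accessedMemberFinal) {
        boolean causesHardWiring() {
            return staticMethodCall || superCall || constructorCall
                    || pseudoConstructorCall || accessedMemberFinal;
        }
    }

    record Dependency(boolean generalizationToSupplier, boolean supplierFinal, List<Access> accesses) {
        boolean isHardWired() {
            // Interpretation: case 3 only applies if there is at least one member access.
            return generalizationToSupplier
                    || supplierFinal
                    || (!accesses.isEmpty() && accesses.stream().allMatch(Access::causesHardWiring));
        }
        boolean isSemiHardWired() {
            return !isHardWired() && accesses.stream().anyMatch(Access::causesHardWiring);
        }
    }

    public static void main(String[] args) {
        Access staticCall = new Access(true, false, false, false, false);
        Access plainCall = new Access(false, false, false, false, false);
        Dependency hard = new Dependency(false, false, List.of(staticCall));
        Dependency semi = new Dependency(false, false, List.of(staticCall, plainCall));
        System.out.println(hard.isHardWired() + " " + semi.isSemiHardWired()); // true true
    }
}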

10.3.7 Class System

Class System represents the set of all classes analyzed and contains the following operations:

- removeClass(): see below
- removeClasses(): see below
- removeDependency(): see below
- removeDependencies(): see below
- containsCycle(): see below
- feedbackDependencies(): see below

Note: Operations needed to create a dependency graph are not considered here because they are not relevant for the definition of our metrics.

removeClass()
Operation removeClass() returns a deep copy (a deep copy of an instance includes all instances directly and indirectly linked to it) of the system instance, except that the class instance provided as a parameter object as well as all instances of Member, ClassDependency, Generalization, TypeDeclaration, and MemberAccess linked to it are missing in the result.

context System::removeClass(c : Class) : System
pre: self.class->includes(c)
post: result.class->size() = self.class->size() - 1

Note: The operation does not remove the class from the system (but only from the copy returned). This approach has been chosen because it is not allowed to use operations with side effects within OCL expressions.

removeClasses()
Operation removeClasses() returns a deep copy of the system instance, except that the input set of class instances as well as all instances of Member, ClassDependency, Generalization, TypeDeclaration, and MemberAccess linked to them are missing in the result.

context System::removeClasses(s : Set(Class)) : System
pre: self.class->includesAll(s)
post: result.class->size() = self.class->size() - s->size()

Note: The operation does not remove any class from the system (but only from the copy returned).

removeDependency()
Operation removeDependency() returns a deep copy of the system instance, except that the parameter instance and all instances of Generalization, TypeDeclaration, and MemberAccess aggregated by it are missing in the result.

context System::removeDependency(d : ClassDependency) : System
pre: self.classDependency->includes(d)
post: result.classDependency->size() = self.classDependency->size() - 1

Note: The operation does not remove a dependency from the system (but only from the copy returned).

removeDependencies()
Operation removeDependencies() returns a deep copy of the system instance, except that the input set of class dependency instances and all instances of Generalization, TypeDeclaration, and MemberAccess aggregated by them are missing in the result.

context System::removeDependencies(s : Set(ClassDependency)) : System
pre: self.classDependency->includesAll(s)
post: result.classDependency->size() = self.classDependency->size() - s->size()

Note: The operation does not remove any dependency from the system (but only from the copy returned).

containsCycle()
Operation containsCycle() returns true if the system contains at least one dependency cycle.

context System::containsCycle() : Boolean
post: result = self.class->exists(isInvolvedInCycle())

feedbackDependencies()
Operation feedbackDependencies() returns a set of feedback dependencies which, when removed, make the graph acyclic.

context System::feedbackDependencies() : Set(ClassDependency)
post: if self.containsCycle() = false
          result = Set{}
      else
          self.removeDependencies(result).containsCycle() = false

This specification is not sufficient; we use the algorithm given below to identify feedback dependencies. The problem of finding the smallest possible feedback dependency set is NP-complete [Skie97]. To identify a small feedback dependency set we use the following algorithm:

1 Identify dependency cycles applying the Tarjan algorithm [Tarj72].
2 Remove all nodes not involved in dependency cycles from the graph.
3 Apply the greedy algorithm described in [Eade93] and [Skie97] to identify the feedback dependencies. As the greedy function we use the difference of the number of incoming and outgoing dependencies of a node [Skie97].

A code sketch of this greedy heuristic is given after Figure 22.

Note: Dependencies caused by inheritance and implementation relationships are difficult to stub and refactor. We follow [Bria01] and others and do not include them as potential feedback dependencies.

The complexity of the Tarjan algorithm is O(n+e), the complexity of the greedy algorithm is O(e), where e is the number of dependencies and n is the number of classes.

10.4 Complete Dependency Graph Model

Figure 22 shows the dependency graph model including all operations.

[Figure 22: Complete dependency graph model. The class diagram repeats the structure of Figure 21 and additionally lists all operations of the model classes: Class (isConcreteClass(), isAbstractClass(), isInnerClass(), isInterface(), isFinal(), directSupertypes(), allSupertypes(), allAncestors(), directSuppliers(), directSuppliersOfSet(), reachableClasses(), allSuppliers(), isInvolvedInCycle()), Member (isAttribute(), isMethod(), isConstructor(), isPseudoConstructor(), isStatic(), isFinal()), ClassDependency (isHardWired(), isSemiHardWired()), Generalization (isInheritance(), isImplements()), TypeDeclaration (isParameterTypeDeclaration(), isReturnTypeDeclaration(), isThrowClauseTypeDeclaration(), isCatchClauseTypeDeclaration(), isAttributeTypeDeclaration(), isLocalVariableTypeDeclaration(), isArrayInitializationTypeDeclaration(), isCastOperatorTypeDeclaration(), isInstanceOfOperatorTypeDeclaration()), MemberAccess (isAttributeUse(), isAttributeDef(), isStaticAttributeUse(), isStaticAttributeDef(), isMethodCall(), isStaticMethodCall(), isConstructorCall(), isCallToInterface(), isPseudoConstructorCall(), isSuperCall(), causesHardwiring()), and System (removeClass(), removeClasses(), removeDependency(), removeDependencies(), containsCycle(), feedbackDependencies()).]
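To connect the model operations with the algorithm of Section 10.3.7, the following Java sketch shows a strongly simplified variant of the greedy idea from [Eade93] and [Skie97]: order the classes once by the difference of outgoing and incoming dependencies and treat every dependency that points against this order as a feedback dependency. The real algorithm works on the cycle-involved subgraph only, removes sources and sinks iteratively, and recomputes the degrees; the sketch is only meant to make the principle concrete.

import java.util.*;

// Strongly simplified greedy selection of feedback dependencies (illustrative only).
final class FeedbackSketch {

    // deps: client class -> supplier classes (inheritance edges excluded beforehand)
    static Set<String> feedbackDependencies(Map<String, Set<String>> deps) {
        Set<String> nodes = new HashSet<>(deps.keySet());
        deps.values().forEach(nodes::addAll);

        // Greedy score: outgoing minus incoming dependencies per class.
        Map<String, Integer> score = new HashMap<>();
        for (String n : nodes) score.put(n, deps.getOrDefault(n, Set.of()).size());
        for (Set<String> suppliers : deps.values())
            for (String s : suppliers) score.merge(s, -1, Integer::sum);

        // Order the classes once by this score; the real algorithm re-evaluates it iteratively.
        List<String> order = new ArrayList<>(nodes);
        order.sort(Comparator.comparing((String n) -> score.get(n)).reversed());
        Map<String, Integer> position = new HashMap<>();
        for (int i = 0; i < order.size(); i++) position.put(order.get(i), i);

        // Every dependency pointing "backwards" in the ordering is taken as a feedback dependency.
        Set<String> feedback = new LinkedHashSet<>();
        for (Map.Entry<String, Set<String>> e : deps.entrySet())
            for (String supplier : e.getValue())
                if (position.get(supplier) < position.get(e.getKey()))
                    feedback.add(e.getKey() + " -> " + supplier);
        return feedback;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = Map.of("A", Set.of("B"), "B", Set.of("C"), "C", Set.of("A"));
        System.out.println(feedbackDependencies(deps)); // at least one of the three cycle edges
    }
}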

10.5 Constructing a Dependency Graph

To construct a dependency graph, perform the following steps:

1 Create an instance of class Class for each application-defined concrete class, abstract class, or interface encountered within the system, no matter whether it is involved in a dependency or not. Neglect classes predefined in Java like the class String and wrapper classes (wrapper classes are used in Java to handle primitive types as objects) like Integer and Boolean. For each class: create an instance of class Member for each member (i.e. attribute or method) of this class and link it to the containing instance of class Class.
2 If there is at least one program statement causing a class dependency between a client class and a supplier class, create an instance of class ClassDependency and link it to the instances representing the client and supplier class involved.
3 Create an instance of class TypeDeclaration and add it to the dependency graph for each type declaration encountered if there exists no other instance of class TypeDeclaration 1) related to the same instance of class Class (specifying the type), 2) related to the same instance of class Member (specifying the member in which the type declaration occurs), and 3) of the same category (see Section 10.3.5). Additionally, link the created instance to the instances representing the class and the class member involved. Type declarations based on Java primitive types (like int or boolean), the String class, or wrapper classes like Integer or Boolean are neglected.
4 Create an instance of class MemberAccess and add it to the dependency graph for each member access encountered if there exists no other instance of class MemberAccess 1) related to the same instances of class Member (i.e. the same client and supplier members) and 2) of the same kind (see Section 10.3.6). Additionally, link the created instance to the instances representing the client and supplier members involved. Note: We have chosen to represent multiple accesses of the same kind from a given client method to a given supplier member by only one instance of MemberAccess because this multiple access could easily be refactored (i.e. "outsourced") into a separate method, leading to a single access. Accesses to members of the Java String class or wrapper classes like Integer or Boolean are neglected.
5 Create an instance of class Generalization and add it to the dependency graph for each inheritance and implements relationship encountered. Additionally, link the created instance to the instances representing the two classes involved.

Java specifics
The Java programming language contains some specifics which are dealt with in the following way:

- Instance initializers and static initializers without method name: Instance initializers and static initializers behave like methods but do not have a method name. Solution: assign a default method name to them and treat them like normal methods.
- Instance initializers and constructors: A class may contain one or more instance initializers which are executed once during object construction. Solution: create an instance of class MemberAccess for each possible combination of a constructor (the client method) and an instance initializer (the supplier method) and link them appropriately.
- Inner and anonymous classes: A class may contain one or more inner classes or anonymous classes which may depend on other classes. Solution: treat all dependencies of the inner and anonymous classes as if they were dependencies of the containing class. This also applies to inheritance and implements relationships.

A complete list of dependencies within Java programs and how they are mapped onto the dependency graph is given in Appendix B.

Example 19
Figure 24 shows an object diagram of a dependency graph constructed from the source code given in Figure 23.

public class A {
    private B myb;
    ...
    public void m() {
        int i = myb.r;
        myb.n();
        ...
        myb.n();
        ...
    }
}

public class B {
    public int r;
    public void n() { ... }
}

Figure 23: Example source code

[Figure 24: Dependency graph for the example source code, shown as an object diagram. At the class level it contains the objects A: Class and B: Class connected by a ClassDependency instance; at the member level it contains myb: Member and m: Member on the client side and r: Member and n: Member on the supplier side. At the member-to-class level a TypeDeclaration instance represents the declaration of attribute myb, and two MemberAccess instances represent the attribute use of r and the method call of n; these source instances are attached to the class dependency.]

Chapter 11
Metrics

Whenever the number of classes within a system grows very large, metrics provide a way to keep an overview of the system and its characteristics. In this chapter we define and discuss new metrics to identify dependencies which are critical for the dependency structure and for testing tasks. (Most of these metrics have already been described informally in Chapter 8.)

11.1 Introduction

The definition of our metrics is based on the OCL [UML01] and on the dependency graph as described in Chapter 10. This section provides an overview of basic measurement concepts which will be used to characterize our metrics.

11.1.1 Basic Measurement Concepts

Entity
An entity is the object of our measurement and belongs to the real world. Entities in the software domain are e.g. software artifacts, processes, or software engineers.

Attribute
An attribute (in the context of software measurement) is a characteristic of an entity we want to measure or predict. Understanding and defining the attribute is necessary for any meaningful measurement.

Metric
A metric (or measure; within measurement theory, the term measure is used instead of metric: a metric denotes a specific type of measure) is the degree to which a software artifact or process possesses a given attribute [IEEE90]. A metric is a mapping from the empirical world to the formal, relational world and is defined in terms of a mapping rule.

There is an n-to-m relationship between attributes and metrics: one attribute of an entity can have more than one metric associated with it, and one metric can be used to measure more than one attribute.

Direct and indirect metrics
A direct metric for a characteristic of an entity involves no other attribute or entity. An indirect metric is measured only indirectly, based on other (direct or indirect) metrics [Fent96].

Unit
A unit is the basis of the scale used to measure an attribute. Note: Most of our metrics do not have a specific unit.

Scale type
Common scale types are ordinal scales, interval scales, ratio scales, and absolute scales. The scale type determines the admissible transformations applicable to the scale, e.g. the interval scale permits affine transformations (M' = aM + b) while the absolute scale does not permit any transformation.

Characterization and prediction
Metrics are used for characterization and prediction. Characterization means to assess an existing entity by numerically characterizing one or more of its characteristics. Prediction means to predict some characteristic of a (future) entity, involving a mathematical model.

We use the following schema to describe the basic properties of a given metric:
- name: the name of the metric
- entity: the object of the metric
- attribute: the attribute of the entity which is measured
- category: an indication whether it is a direct or indirect metric
- scale type: the scale type of the metric
If the metric has a unit, this unit is included in the schema as well.

11.1.2 Overview of Metrics

The metrics introduced in this chapter are defined at the class level, system level, and class-dependency level. The main focus is on the reduction metrics for dependencies, while the class metrics and system metrics are used as a basis to define them. Figure 25 indicates the relationships between basic and derived metrics by arrows (with the derived metric at the arrow's head).

[Figure 25: Overview of metrics, grouped into class metrics, system metrics, and dependency metrics (CD, CDh, CDsh, ACD, ACDh, ACDsh, NCDC, NFD, DSTM, DSTMh, rACD, rACDh, rACDsh, rNCDC, rNFD, rACDin, rACDout); arrows indicate which derived metrics are based on which basic metrics.]

11.2 Class Metrics

This section describes the metrics CD, CDh, and CDsh, which measure coupling characteristics at the class level and which are later used to define system-level metrics.

11.2.1 Metric CD

Metric CD measures the degree to which a given class depends on other classes, directly as well as indirectly.

Properties of metric CD
- name: class dependency
- entity: class
- attribute: coupling
- category: direct metric
- scale type: absolute scale

Definition
For a class c, metric CD(c) is defined as the number of direct and indirect supplier classes of c.

context Class::CD() : Integer
post: result = reachableClasses(Set{self})->excluding(self)->size()
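As an illustration, CD can be computed with the same worklist traversal used for reachableClasses(); the sketch below also shows the averaging step that the system metric ACD (Section 11.3) performs on top of CD. The adjacency-map representation is an assumption made for the example only.

import java.util.*;

// CD as a reachability count, and the ACD average on top of it (adjacency map assumed).
final class CdAcdSketch {

    static Set<String> allSuppliers(Map<String, Set<String>> deps, String c) {
        Set<String> seen = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>(deps.getOrDefault(c, Set.of()));
        while (!todo.isEmpty()) {
            String s = todo.pop();
            if (seen.add(s)) todo.addAll(deps.getOrDefault(s, Set.of()));
        }
        seen.remove(c); // CD counts other classes only, even if c lies on a cycle
        return seen;
    }

    static int cd(Map<String, Set<String>> deps, String c) {
        return allSuppliers(deps, c).size();
    }

    static double acd(Map<String, Set<String>> deps, Set<String> classes) {
        return classes.isEmpty() ? 0.0
                : classes.stream().mapToInt(c -> cd(deps, c)).average().orElse(0.0);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = Map.of("A", Set.of("B"), "B", Set.of("C"));
        System.out.println(cd(deps, "A"));                    // 2
        System.out.println(acd(deps, Set.of("A", "B", "C"))); // (2 + 1 + 0) / 3 = 1.0
    }
}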

142 126 Chapter 11: Metrics Motivation and Purpose The reasons to select this metric are: it is used to define metric ACD (see Section ). it is simple to calculate. This avoids complex and time-intensive computation of (reduction) metrics based on it. it is simple to interpret. It can be applied to early software analysis and design diagrams, where information about class members (usually required by coupling metrics) is not available or incomplete. The metric can be used: 1 to characterize the coupling of a given class to other classes within the system and 2 to predict the time required for (re-)compiling all supplier classes before testing the class. Note: We also expect a good correlation between the values of this metric and the difficulty to design test cases or to isolate faults. This is an open issue for further research. Assumptions Basic assumptions of the metric CD are: 1 Indirect dependencies do matter. 2 The size and complexity of the classes does not matter. 3 The effect of indirect dependencies on the investigated attribute does not depend on the level of indirection. 4 Dependencies on concrete classes, abstract classes, and interfaces do have the same effect on the investigated attribute. 5 The category of the dependency does not matter. Discussion Ad assumption 1: Metric CD takes into account indirect dependencies while the majority of all other coupling metrics does not 1. Since indirect dependencies do have relevant effects on testing (as described in Section 2.4), it is important to consider them. The assumptions 2, 3 and 4 from above are a strong simplification. However, more complex metrics are more difficult to inter- 1 Out of 30 coupling metrics described in [Bria96], only metric RFC and RFC consider indirect dependencies.

143 Chapter 11: Metrics 127 pret and require a more thorough understanding of the relationship between program structure and test tasks: Ad assumption 3: It would be possible to define metrics that weight the contribution of indirect dependencies by the level of indirection if the effect of the level of indirection on the attribute under investigation can be quantified sufficiently. A draw-back of weighting is, that the metric values are more difficult to interpret and compare. Ad assumption 4: The type of the supplier class is relevant for some test tasks (e.g. for testing a class in isolation), for some it is not (e.g. for instantiating the supplier instances during test setup). However, not enough information is available on the effect of the type of the supplier class on the test tasks to use an approach based on weigthing. Related Metrics A related metric is metric CBO (coupling between object classes) [Chid94] which differs from metric CD in the following ways: Metric CBO counts all dependencies irrespective of their direction while metric CD only considers outgoing dependencies. Metric CBO only counts method calls and attributes access while metric CD considers all kinds of dependencies between classes including inheritance relationships Metrics CDh and CDsh Metric CDh and metric CDsh are different from metric CD by taking into account the category of the dependencies: Metric CDh only considers hard-wired dependencies, while metric CDsh considers semi-hard-wired as well as hard-wired dependencies. Properties of metric CDh name class dependency caused by hard-wired dependencies entity class attribute ability to test class in isolation category direct metric scale type absolute scale Properties of metric CDsh name class dependency caused by semi-hard-wired and hardwired dependencies entity class

144 128 Chapter 11: Metrics attribute category scale type ability to test class in isolation direct metric absolute scale Definition For a class c, metric CDh(c) is defined as the number of classes c depends on because of hard-wired dependencies: context Class def let directhwsuppliers(c: Class) : Set(Class) = c.classdependency[supplier]->select(ishardwired())-> collect(supplier)->asset() let directhwsuppliersofset(s: Set(Class)) : Set(Class) = s->collect(c : Class c.directhwsuppliers())->asset() let reachableclasseshw(rc: Set(Class)) : Set(Class) = if rc->includesall(directhwsuppliersofset(rc)) then rc else reachableclasseshw(rc->union(directhwsuppliersofset(rc))) context Class::CDh() : Integer post: result = self.reachableclasseshw(set{self})->excluding(self)->size() For a class c, metric CDsh(c) is defined as the number of classes c depends on because of semi-hard-wired as well as hard-wired dependencies: context Class def let directshwsuppliers(c: Class) : Set(Class) = c.classdependency[supplier]->select(ishardwired() or issemihardwired())->collect(supplier)->asset() let directshwsuppliersofset(s: Set(Class)) : Set(Class) = s->collect(c : Class c.directshwsuppliers())->asset() let reachableclassesshw(rc: Set(Class)) : Set(Class) = if rc->includesall(directshwsuppliersofset(rc)) then rc else reachableclassesshw(rc->union(directshwsuppliersofset(rc))) context Class::CDsh() : Integer post: result = self.reachableclassesshw(set{self})->excluding(self)-> size() Motivation and Purpose The metrics can be used for different purposes: 1 To characterize the degree to which a class is hard-wired (or semi-hard-wired) to other classes. 2 To predict the ability to test a class in isolation.
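The difference between CDh and CDsh is only which categories of dependencies are followed during the traversal. The following Java sketch assumes that every dependency has already been classified (type, semi-hard-wired, or hard-wired) and is otherwise illustrative only.

import java.util.*;

// CDh and CDsh as closures over pre-classified dependencies (categories assumed).
final class CdhCdshSketch {

    enum Category { TYPE, SEMI_HARD_WIRED, HARD_WIRED }
    record Edge(String client, String supplier, Category category) {}

    static int closureSize(List<Edge> edges, String c, Set<Category> followed) {
        Map<String, Set<String>> g = new HashMap<>();
        for (Edge e : edges)
            if (followed.contains(e.category()))
                g.computeIfAbsent(e.client(), k -> new HashSet<>()).add(e.supplier());
        Set<String> seen = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>(g.getOrDefault(c, Set.of()));
        while (!todo.isEmpty()) {
            String s = todo.pop();
            if (seen.add(s)) todo.addAll(g.getOrDefault(s, Set.of()));
        }
        seen.remove(c);
        return seen.size();
    }

    static int cdh(List<Edge> edges, String c) {
        return closureSize(edges, c, EnumSet.of(Category.HARD_WIRED));
    }

    static int cdsh(List<Edge> edges, String c) {
        return closureSize(edges, c, EnumSet.of(Category.HARD_WIRED, Category.SEMI_HARD_WIRED));
    }

    public static void main(String[] args) {
        List<Edge> edges = List.of(
                new Edge("A", "B", Category.HARD_WIRED),
                new Edge("B", "C", Category.SEMI_HARD_WIRED),
                new Edge("A", "D", Category.TYPE));
        System.out.println(cdh(edges, "A"));  // 1 (only B)
        System.out.println(cdsh(edges, "A")); // 2 (B and C)
    }
}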

145 Chapter 11: Metrics 129 Metrics CDh and CDsh are the basis to define related system metrics (see Section 11.3). Assumptions Basic assumptions of the metrics CDh and CDsh are: 1 Each supplier class shall be stubbed. 2 Indirect dependencies do matter. 3 The size and complexity of the classes does not matter. 4 The effect of indirect dependencies on the investigated attribute does not depend on the level of indirection. 5 The category of the dependency does matter. Discussion Metrics CDh and CDsh are linked to the test task of testing a class in isolation. This distinguishes these metrics from existing coupling metrics which are general purpose metrics System Metrics This section describes the metrics ACD, ACDh, ACDsh, NCDC, and NFD which address the effect of indirect dependencies, hard-wired dependencies, and dependency cycles on the system. The metrics can be used to characterize the dependency structure of a system, but their main purpose within the context of this work is to provide a basis for the definition of related reduction metrics Metric ACD Metric ACD measures the degree to which classes of a system depend directly and indirectly on other classes within the system. Properties of metric ACD name average class dependency entity system attribute system coupling category indirect metric (based on metric CD and number of classes within the system) scale type ratio scale

146 130 Chapter 11: Metrics Definition For a system s, metric ACD(s) is defined as the average of metric CD for all classes of s: context System::ACD() : Real post: if self.class->size() = 0 result = 0 else result = self.class->iterate(c : Class; sum : Integer = 0 sum = sum + c.cd()) / self.class->size() Motivation and Purpose This metric can be used: 1 to characterize the average coupling between the classes of a system and 2 to predict the average time required to (re-)compile those parts of the system which are effected by a change. Like for metric CD, we expect a good correlation with the difficulty to design test cases and to isolate faults, but this needs further research. Other metrics (like the coupling metrics described in Appendix C) can be used to evaluate the structure of a system, too. The reasons why we define and use metric ACD are: Direct and indirect dependencies are relevant for testing. Metric ACD is sensitive to both of them. Metric ACD is simple to calculate. (This reduces the time required to calculate the metrics based on metric ACD.) it is simple to interpret. It can be applied to early software artifacts like analysis and design diagrams when information at class member level is not available or incomplete (most existing coupling metrics require such information). Assumptions The assumptions for metric ACD are the same as for metric CD (Section ). Discussion Since metric ACD is based on metric CD the discussion of metric CD applies as well: Metric ACD is based on a very abstract view of the system because each class is represented as a node within a graph, neglecting differences in size, complexity, or class type. This metric therefore provides only a rough view of

147 Chapter 11: Metrics 131 the system structure and is not well suited to compare different systems (because it is not normalized). Interpretation of metric values To interpret a value of metric ACD, compare it with the number of classes within the system and consider the average size and complexity per class. Related Metric John Lakos [Lako96] defined the metric Average Component Dependency to predict the average effort to recompile a class. It differs from our metric ACD by adding 1 to the value of metric CD for each class. This shall account for the fact, that not only the supplier classes have to be compiled but the client classes as well Metrics ACDh and ACDsh Metrics ACDh and ACDsh are used to measure the average ability to test the classes of a system in isolation. Properties of metric ACDh name average class dependency caused by hard-wired dependencies entity system attribute ability to test classes in isolation category indirect metric (based on metric CDh and the number of classes within the system) scale type ratio scale Properties of metric ACDsh name average class dependency caused by hard-wired and semihard-wired dependencies entity system attribute ability to test classes in isolation category indirect metric (based on metric CDsh and the number of classes within the system) scale type ratio scale

Definitions
For a system s, metric ACDh(s) is defined as the average number of classes a class of s depends on due to (direct and indirect) hard-wired dependencies.

context System::ACDh() : Real
post: if self.class->size() = 0
          result = 0
      else
          result = self.class->iterate(c : Class; sum : Integer = 0 |
              sum = sum + c.CDh()) / self.class->size()

For a system s, metric ACDsh(s) is defined as the average number of classes a class of s depends on due to (direct and indirect) hard-wired and semi-hard-wired dependencies.

context System::ACDsh() : Real
post: if self.class->size() = 0
          result = 0
      else
          result = self.class->iterate(c : Class; sum : Integer = 0 |
              sum = sum + c.CDsh()) / self.class->size()

Purpose
The metrics can be used
1 to characterize the degree to which classes of a system are hard-wired (or semi-hard-wired) to supplier classes, and
2 to predict the ability to test the classes of the system in isolation.

Assumptions
Same as for metric ACD, except for assumption 5: the category of the dependency does matter.

Discussion
A high value of ACDh or ACDsh compared to the value of metric ACD indicates that the ability to test the system's classes in isolation is limited.

149 Chapter 11: Metrics Metric NCDC Metric NCDC measures the size of the dependency cycles within a system. Properties of metric NCDC name number of classes within dependency cycles entity system attribute size of dependency cycles category direct metric scale type absolute scale Definition For a system s, metric NCDC(s) is the total number of classes of s involved in dependency cycles: context System::NCDC() : Integer post: result = self.class->select(isinvolvedincycle())->size() Motivation and Purpose This metric can be used: 1 To characterize the size of the dependency cycles within a system. 2 To predict the effort caused by testing all classes involved in the dependency cycles at once (if the cycles aren t broken with help of stubs) during integration testing. Assumptions Basic assumptions of the metric NCDC are: 1 The size and complexity of the classes within the dependency cycles does not matter. 2 Concrete classes, abstract classes, and interfaces within the dependency cycles have the same effect on testing. Discussion To the best knowledge of the author, no metrics have been published so far to characterize dependency cycles within a system. Metric NCDC is (like metric ACD) based on a very abstract view of the system and provides only a rough view of potential test problems caused by dependency cycles.
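A simple way to approximate NCDC without implementing the Tarjan algorithm is to test, for every class, whether it can reach itself again through at least one dependency; this is quadratic rather than linear, but easy to follow. The sketch below uses a plain adjacency map and is illustrative only.

import java.util.*;

// NCDC approximated by a self-reachability test per class (quadratic, illustrative only).
final class NcdcSketch {

    static Set<String> reachable(Map<String, Set<String>> deps, Collection<String> start) {
        Set<String> seen = new HashSet<>(start);
        Deque<String> todo = new ArrayDeque<>(start);
        while (!todo.isEmpty())
            for (String s : deps.getOrDefault(todo.pop(), Set.of()))
                if (seen.add(s)) todo.push(s);
        return seen;
    }

    // A class is involved in a dependency cycle iff it can reach itself via its direct suppliers.
    static long ncdc(Map<String, Set<String>> deps, Set<String> classes) {
        return classes.stream()
                .filter(c -> reachable(deps, deps.getOrDefault(c, Set.of())).contains(c))
                .count();
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = Map.of("A", Set.of("B"), "B", Set.of("A"), "C", Set.of("A"));
        System.out.println(ncdc(deps, Set.of("A", "B", "C"))); // 2
    }
}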

150 134 Chapter 11: Metrics Metric NFD Metric NFD measures the number of dependencies causing dependency cycles within the system. Properties of metric NFD name number of feedback dependencies entity system attribute number of dependencies causing dependency cycles category direct metric scale type absolute scale Definition For a system s, metric NFD(s) is defined as the cardinality of the feedback dependency set for s: context System::NFD() : Integer post: result = self.feedbackdependencies()->size() Motivation and Purpose This metric can be used: 1 to characterize the number of dependencies causing dependency cycles, and 2 to predict the effort necessary to remove all dependency cycles by removing selected dependencies. Assumptions Basic assumptions of the metric NFD are: 1 The size and complexity of the classes involved in the feedback dependencies does not matter. 2 The difficulty to remove a dependency to a concrete class, an abstract class, or an interface is the same. 3 it is not intended to remove inheritance or implements relationships. 4 The strength of the feedback dependencies does not matter. Assumption 3 is considered by the heuristic algorithm used to identify feedback dependency sets (see Section ). Discussion Feedback dependencies are identified based on a heuristic algorithm and may not be identical with the dependencies which actually introduced the dependency cycles. However, the size of

151 Chapter 11: Metrics 135 the feedback dependency set is a good indicator of the effort to remove the dependency cycles. Related Metrics Metric NSBC (Number of Stubs to Break Cycles) in [Jung02b] is similar to metric NFD, but measures the number of classes which have to be removed in order to break all dependency cycles Dependency Metrics Hard-wired dependencies may have been introduced by accident. Metrics DSTM and DSTMh help to identify such dependencies Metric DSTM Metric DSTM measures the strength of a given dependency. Properties of metric DSTM name number of statements causing the dependency entity dependency attribute strength of dependency category direct metric scale type absolute scale Definition For a dependency d, metric DSTM(d) is defined as the number of distinct statements causing d. context ClassDependency def let isinheritancedependency() : Boolean = self.client.allsupertypes()->contains(self.supplier) context ClassDependency::DSTM() : Integer post: if self.isinheritancedependency() result = self.typedeclaration->size() + self.memberaccess->size() + 1 else result = self.typedeclaration->size() + self.memberaccess->size()
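As a small illustration of the counting rule, the following Java sketch computes DSTM from the recorded sources of a dependency; the Source record is a stand-in for the TypeDeclaration and MemberAccess instances of the model and is an assumption made for this example.

import java.util.List;

// DSTM as a count of distinct statements behind one class dependency (Source is a stand-in).
final class DstmSketch {

    record Source(String kind) {} // e.g. "TypeDeclaration" or "MemberAccess"

    // An inheritance or implements relationship counts as one additional statement.
    static int dstm(List<Source> sources, boolean inheritanceDependency) {
        return sources.size() + (inheritanceDependency ? 1 : 0);
    }

    public static void main(String[] args) {
        List<Source> sources = List.of(new Source("TypeDeclaration"), new Source("MemberAccess"));
        System.out.println(dstm(sources, false)); // 2
        System.out.println(dstm(sources, true));  // 3
    }
}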

152 136 Chapter 11: Metrics Purpose The metric is used to characterize the strength of the dependency. Assumptions Basic assumptions of metric DSTM are: 1 Multiple access of the same category from one method of a given client class to the same class member of a supplier class is counted only once (see Section 10.5). 2 Each type of access contributes the same to the strength of the dependency. Discussion Ad assumption 2: Weighting different access types differently would be an alternative. However, further research is required to quantify e.g. the effect of different access types on the average ease to refactor a dependency as a basis for defining actual weights Metric DSTMh Metric DSTMh measures the proportion of statements which cause a dependency to be semi-hard-wired. Properties of metric DSTMh name percentage of statements causing a dependency to be semi-hard-wired entity dependency attribute proportion of statements which cause a dependency to be semi-hard-wired category indirect metric unit [%] scale type ratio scale

Definition
For a dependency d, metric DSTMh(d) is defined as the proportion of distinct statements causing d to be semi-hard-wired.

context ClassDependency
def: let isInheritanceDependency() : Boolean =
         self.client.allAncestors()->contains(self.supplier)
     let numberHwStatementsWithoutInheritance() : Integer =
         self.memberAccess->select(causesHardwiring())->size()

context ClassDependency::DSTMh() : Real
post: if self.DSTM() = 0
          result = 0
      else if self.isInheritanceDependency()
          result = (numberHwStatementsWithoutInheritance() + 1) / self.DSTM()
      else
          result = numberHwStatementsWithoutInheritance() / self.DSTM()

Note: An inheritance relationship contributes 1 to the value of metric DSTMh like any other statement causing a class dependency to be semi-hard-wired.

Purpose
The metric can be used for different purposes:
1 To characterize the degree to which the dependency is semi-hard-wired.
2 To predict the difficulty of refactoring a semi-hard-wired dependency into a type dependency.
Metric DSTMh can be used to characterize hard-wired dependencies as well, but this is not the main intention (see Section 8.5.2).

Assumptions
The basic assumptions of metric DSTMh are the same as for metric DSTM.

Discussion
The percentage of statements causing a dependency to be semi-hard-wired is not always sufficient to get a complete picture:

Example 20
Dependency A is caused by 20 statements, 5 of them making the dependency semi-hard-wired. Dependency B is caused by 4 statements, only one of them making it semi-hard-wired. In both cases the value of DSTMh equals 0.25, but dependency B may be easier to refactor into a type dependency because only one statement needs to be tackled.

A combination of the absolute number and the relative number of statements causing a dependency to be semi-hard-wired therefore provides more information to characterize the degree to which it is semi-hard-wired. However, the smallest values of DSTMh can be found within dependencies with a large value of DSTM: the value of metric DSTMh is small if the number of statements causing the dependency to be semi-hard-wired (the fraction's numerator within the formula) is small and DSTM (the denominator) is large. Therefore, we prefer the definition of metric DSTMh as a relative number.

11.5 Reduction Metrics

Reduction metrics for dependencies are a new type of metrics which solve one shortcoming of existing coupling metrics: coupling metrics are class-based. If the coupling value is too high, the developer is left alone with the task of identifying the set of dependencies with the most negative impact on the system structure.

11.5.1 Introduction

A reduction metric rM(d) for dependencies describes the degree to which the value of a metric M is reduced if a dependency d is removed from the system. Note: in general, metric values decrease when a dependency is removed. A high value of a reduction metric for a given dependency means that the software characteristic is sensitive to the existence of this dependency.

Related Metric Types
Change metrics: Demeyer, Ducasse, and Nierstrasz use metrics called change metrics to find refactorings [Deme00]. These metrics are calculated as the difference between system metric values of two different system versions and are not linked to specific classes or class dependencies.

11.5.2 Metric rACD

Metric rACD measures the sensitivity of the number of indirect dependencies to the existence of a given dependency. (When removing any dependency within a system, the overall number of direct dependencies is always reduced by the same amount, i.e. by 1; therefore, metric rACD only measures sensitivity concerning indirect dependencies.)

Properties of metric rACD
- name: reduction metric based on metric ACD
- entity: dependency
- attribute: effect on indirect dependencies
- category: indirect metric
- scale type: rational scale

Definition
For a dependency d, metric rACD(d) is defined as a reduction metric based on metric ACD.

context ClassDependency::rACD() : Real
post: if self.system.ACD() = 0 then
          result = 0
      else
          result = 1 - self.system.removeDependency(self).ACD() / self.system.ACD()

Motivation and Purpose
Metric rACD helps to identify dependencies with a strong impact on the dependency structure.

Assumptions
The assumptions of metric rACD are:
1 The size and complexity of the classes does not matter.
2 Indirect dependencies do matter.
3 The effect of indirect dependencies on the investigated attribute does not depend on the level of indirection.
4 Dependencies on concrete classes, abstract classes, and interfaces have the same effect on the investigated attribute.
5 The category of the dependency does not matter.
6 The dependencies can be removed without substitution.

Discussion
Metric rACD only provides a rough evaluation of a dependency because it is based on simplifying assumptions. A more thorough evaluation of the effect of a given dependency on test tasks would require, e.g., considering the size and complexity of all classes during the calculation of the ACD values.

Ability to realize improvements
Possible improvements of system metrics, as indicated by the reduction metrics, can only be fully realized if the dependency can be removed without substitution (assumption 6).

Interpreting values of rACD
A high value of metric rACD is not bad per se: a well-designed system will always contain outstanding dependencies which, e.g., connect subsystems and which therefore have a high rACD value. A high value of rACD only means that it is worth investigating the related dependency more closely because of possibly large improvements to the dependency structure and testability if the dependency can and should be refactored.

Absolute versus relative metric definition
Note: whether metric ACD is defined as an absolute number (as in this work) or relative to the number of classes n within the system does not have an effect on metric rACD, because n cancels in the formula of the reduction metric:

rACD(d) = 1 - CD(D\{d}) / CD(D) = 1 - (CD(D\{d}) / n) / (CD(D) / n)

While the size of the system does not matter for reduction metrics, a possible reduction of metric ACD by a particular value (e.g. 5%) may be more relevant in a larger system than in a smaller system.

Comparison with coupling metrics
Coupling metrics which take into account indirect coupling aren't sensitive to dependency cycles (see Section 3.3.1) or to classes that play a pivotal role like hubs (see example below).

Example 21
Within the dependency graph shown in Figure 26, classes D and F play the role of a hub, i.e. a class which, when removed from the system, breaks the dependency graph into independent subgraphs. We compare metric rACD with metric CBOi, which is derived from metric CBO (see Appendix C) by taking into account indirect dependencies: while metric rACD clearly indicates that the dependency between D and F is critical, metric CBOi is not able to highlight the critical classes.

[Figure 26: Dependency graph with classes A to I, in which classes D and F act as hubs connected by the dependency D -> F.]

[Table 5: Dependencies and values of rACD for the graph in Figure 26 (columns: dependency d, CBOi of the client class, rACD(d) in %); the dependency D -> F stands out with an rACD value of 55.2%, while the other dependencies have much smaller values.]
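The definition of rACD can be mirrored almost literally in code: compute ACD, remove the dependency from a copy of the graph, recompute ACD, and take the relative reduction. The following Java sketch does exactly that over an adjacency map; it is an illustration of the formula, not the implementation used in the case studies.

import java.util.*;

// rACD(d) = 1 - ACD(graph without d) / ACD(graph), over an adjacency map (illustrative only).
final class RacdSketch {

    static Set<String> allSuppliers(Map<String, Set<String>> deps, String c) {
        Set<String> seen = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>(deps.getOrDefault(c, Set.of()));
        while (!todo.isEmpty()) {
            String s = todo.pop();
            if (seen.add(s)) todo.addAll(deps.getOrDefault(s, Set.of()));
        }
        seen.remove(c);
        return seen;
    }

    static double acd(Map<String, Set<String>> deps, Set<String> classes) {
        return classes.isEmpty() ? 0.0
                : classes.stream().mapToInt(c -> allSuppliers(deps, c).size()).average().orElse(0.0);
    }

    static double racd(Map<String, Set<String>> deps, Set<String> classes, String client, String supplier) {
        double before = acd(deps, classes);
        if (before == 0.0) return 0.0;
        Map<String, Set<String>> reduced = new HashMap<>();
        deps.forEach((k, v) -> reduced.put(k, new HashSet<>(v)));
        Set<String> out = reduced.get(client);
        if (out != null) out.remove(supplier);          // remove exactly the one dependency d
        return 1.0 - acd(reduced, classes) / before;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = Map.of("A", Set.of("B"), "B", Set.of("C"), "C", Set.of("D"));
        // Removing B -> C cuts all indirect reachability behind it.
        System.out.println(racd(deps, Set.of("A", "B", "C", "D"), "B", "C")); // about 0.67
    }
}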

11.5.3 Metrics rACDh and rACDsh

Metrics rACDh and rACDsh are used to identify dependencies with a large effect on the average number of indirect hard-wired dependencies.

Properties of metric rACDh
- name: reduction metric based on metric ACDh
- entity: dependency
- attribute: effect on indirect hard-wired dependencies
- category: indirect metric
- scale type: rational scale

Properties of metric rACDsh
- name: reduction metric based on metric ACDsh
- entity: dependency
- attribute: effect on indirect semi-hard-wired and hard-wired dependencies
- category: indirect metric
- scale type: rational scale

Definition
For a dependency d, metric rACDh(d) is defined as a reduction metric based on metric ACDh.

context ClassDependency::rACDh() : Real
pre: self.isHardWired()
post: if self.system.ACDh() = 0 then
          result = 0
      else
          result = 1 - self.system.removeDependency(self).ACDh() / self.system.ACDh()

For a dependency d, metric rACDsh(d) is defined as a reduction metric based on metric ACDsh.

context ClassDependency::rACDsh() : Real
pre: self.isHardWired() or self.isSemiHardWired()
post: if self.system.ACDsh() = 0 then
          result = 0
      else
          result = 1 - self.system.removeDependency(self).ACDsh() / self.system.ACDsh()

159 Chapter 11: Metrics 143 Motivation and Purpose Metrics racdh and racdsh help to identify dependencies that are critical for the overall structure w.r.t. indirect hard-wired (or semi-hard-wired) dependencies. Assumptions Same as for metric racd, except assumption 5: The category of the dependency does matter. Discussion While a high value of metric racd is not per se an indicator of a bad dependency, this is different for the metrics racdh and racdsh: at least the category of the dependency should be reviewed to check wether a refactoring is possible in order to break chains of (semi-)hard-wired dependencies and to improve the ability to test the classes in isolation rncdc Metric rncdc measures the effect of a given dependency on the overall number of classes involved in dependency cycles. Properties of Metric rncdc name reduction metric based on metric NCDC entity dependency attribute effect on number of classes involved in dependency cycles category indirect metric scale type rational scale Definition For a dependency d, metric rncdc(d) is defined as a reduction metric based on metric NCDC. context ClassDependency::rNCDC() : Real post: if self.system.ncdc() = 0 then result = 0 else result = 1 - self.system.removedependency(self).ncdc() / self.system.ncdc() Motivation and Purpose Metric rncdc helps to identify dependencies with a large impact on the number of classes involved in dependency cycles.

160 144 Chapter 11: Metrics Assumptions Basic assumptions of the metric are: 1 The size and complexity of the classes involved in the dependency cycles does not matter. 2 The category of the dependency does not matter. 3 The type of the supplier class (i.e. whether it is a concrete class, abstract class, or interface) does not matter. Discussion A high value of metric rncdc indicates that the concerned dependency contributes significantly to the number of classes involved in dependency cycles and therefore has a negative impact on testing while removing the dependency has a positive effect Metric rnfd Metric rnfd measures the effect of a dependency on the number of feedback dependencies within the system. Properties of metric rnfd name reduction metric based on metric NFD entity dependency attribute effect on number of feedback dependencies category indirect metric scale type rational scale Definition For a dependency d, metric rnfd(d) is defined as a reduction metric based on metric NFD. context ClassDependency::rNFD() : Real post: if self.system.nfd() = 0 then result = 0 else result = 1 - self.system.removedependency(self).nfd() / self.system.nfd() Negative values of rnfd Note: If an optimal algorithm is used for identifying the feedback dependency set, removing a dependency always reduces the overall number of feedback dependencies, i.e. each dependency has a positive value of metric rnfd. Using a faster heuristic algorithm (as in our case) to identify the feedback dependency set has two consequences:

161 Chapter 11: Metrics 145 Removing a dependency may lead in some cases to a different, larger set of feedback dependencies which results in a negative value of metric rnfd. (In our case studies, this effect occurred for 0.1% to 1% of all dependencies.) Metric values may not be fully reproducible in case that the dependency graph is traversed in a different order or if a different heuristic algorithm is used. In practice, however, this is not relevant since the aim is not to compare different systems (using probably different analysis tools) but to identify critical dependencies within one and the same system version using one particular analysis tool. Motivation and Purpose Metric rnfd helps to identify dependencies which, when removed, reduce the number of feedback dependencies. Assumptions Basic assumptions of the metric are: 1 The size and complexity of the classes involved in the feedback dependencies does not matter. 2 The difficulty to remove a dependency to a concrete class, an abstract class, or an interface is the same. 3 The strength of the feedback dependencies does not matter. Discussion Each feedback dependency has a rnfd-value greater zero. However, a dependency with a rnfd-value greater zero is not necessarily a feedback dependency because not all of the dependencies with the ability to break dependency cycles are selected as feedback dependencies. Example 22 The dependency from class A to class B in Figure 27 is not a feedback dependency, still it has the potential to reduce the number of feedback dependencies by 50%.

162 146 Chapter 11: Metrics A B E C D F Figure 27 Dependency graph dependency d is feedback dependency rnfd(d) [%] rncdc(d) [%] A B A C A D B E C F D F E A yes E F F A yes Table 6 Dependencies and values of rnfd and rncdc A dependency which has an effect on the number of feedback dependencies does not necessarily have an effect on the number of classes within dependency cycles, too, and vice versa. For example, the dependency from class E to class A (Table 6) has a rnfd-value of 50% and a rncdc-value of 0%, the dependency from class A to class C instead has a rnfd-value of 0% and a rncdc-value of 16.7% Metrics racdin and racdout Metrics racdin and racdout have not been described in Chapter 8. They are used to identify dependencies with a large effect on the dependency structure which can t be found with metric racd because of redundant dependencies (see discussion below). Properties of metric racdin name reduction metric for incoming dependencies of a class entity class

163 Chapter 11: Metrics 147 attribute category scale type effect on number of indirect class dependencies indirect metric rational scale Properties of metric racdout name reduction metric for outgoing dependencies of a class entity class attribute effect on number of indirect class dependencies category indirect metric scale type rational scale Definition For a class c, metric racdin(c) is defined as the reduction of metric ACD (in percent), if all incoming dependencies of c are removed. context Class::rACDin() : Real post: if self.system.acd() = 0 then result = 0 else result = 1 - self.system.removedependencies( self.classdependency[client]).acd() / self.system.acd() For a class c, metric racdout(c) is defined analogously as the reduction of metric ACD (in percent) if all outgoing dependencies of c are removed. context Class::rACDout() : Real post: if self.system.acd() = 0 then result = 0 else result = 1 - self.system.removedependencies( self.classdependency[supplier]).acd() / self.system.acd() Motivation and Purpose Redundant dependencies Extremely dense dependency structure Whether a dependency structure is sensitive to the existence of an individual dependency or not depends on the existence of redundant dependencies. Possible extremes are extremely dense dependency structures and dependency doubles. Within an extremely dense dependency structure (Figure 28), each class depends on a large number of other classes directly. In this case, removing one dependency has almost no effect on the overall dependency structure and metric ACD. Metric racd is therefore unable to identify any meaningful candidates for

refactoring. Extremely dense dependency structures are pathological anyway and must be avoided.

Figure 28 Example of an extremely dense dependency structure

Dependency doubles
By dependency doubles we mean two (or more) dependencies close to each other. Figure 29 shows an example dependency structure where two substructures are connected by only two dependencies (marked bold). If only one of these two dependencies is removed, the effect on the overall dependency structure (and e.g. on metric racd) is much smaller than if both are removed.

Figure 29 Example of dependency doubles

In order to compensate for some effects of redundant dependencies, we define reduction metrics for sets of dependencies. One criterion to identify meaningful sets of dependencies is to take all incoming or outgoing dependencies of a particular class.

Example 23
Table 7 shows the values of metric ACD if the incoming or outgoing dependencies of a particular class are removed from the system (Figure 30), as well as the corresponding values of the racdin and racdout metrics. Note the effect of the redundant dependencies from class C to class F and from class D to class F on the maximum value of metric racd in Table 3 (Section 8.3.4), which is 27.6%. If all incoming dependencies of class F are removed, the ACD-value decreases by 55.2% instead.

Figure 30 Dependency graph (classes A to I)

class c   ACD if all incoming    ACD if all outgoing    racdin(c) [%]   racdout(c) [%]
          dep. are removed       dep. are removed
A
B
C
D
E
F
G
H
I

Table 7 Dependencies and values of racdin and racdout

Assumptions
The assumptions of the metrics racdin and racdout are the same as for metric ACD.

Discussion
Metrics racdin and racdout are able to highlight cases where a set of dependencies (instead of an individual dependency) has a large effect on the number of indirect dependencies. The metrics are not able to overcome the effects of redundant dependencies in all cases; however, the metric values are easy to interpret.
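The definition above can be turned into code almost literally. The following Java sketch is illustrative only: the graph type and method names are invented for this example, and the average number of directly and indirectly reachable supplier classes is used as a simplified stand-in for the exact ACD definition of Chapter 8.

import java.util.*;

/** Minimal dependency graph sketch: class name -> set of directly used classes. */
class DependencyGraph {
    private final Map<String, Set<String>> out = new HashMap<>();

    void addClass(String c) { out.computeIfAbsent(c, k -> new HashSet<>()); }

    void addDependency(String client, String supplier) {
        addClass(client); addClass(supplier);
        out.get(client).add(supplier);
    }

    /** Copy of the graph without the incoming (or outgoing) dependencies of cls. */
    DependencyGraph without(String cls, boolean incoming) {
        DependencyGraph g = new DependencyGraph();
        for (String client : out.keySet()) {
            g.addClass(client);
            if (!incoming && client.equals(cls)) continue;     // drop outgoing dependencies of cls
            for (String supplier : out.get(client))
                if (!(incoming && supplier.equals(cls)))       // drop incoming dependencies of cls
                    g.addDependency(client, supplier);
        }
        return g;
    }

    /** Simplified stand-in for ACD: average number of direct and indirect suppliers per class. */
    double acd() {
        if (out.isEmpty()) return 0;
        int sum = 0;
        for (String c : out.keySet()) {
            Set<String> seen = new HashSet<>();
            Deque<String> todo = new ArrayDeque<>(out.get(c));
            while (!todo.isEmpty()) {
                String s = todo.pop();
                if (seen.add(s)) todo.addAll(out.getOrDefault(s, Set.of()));
            }
            sum += seen.size();
        }
        return (double) sum / out.size();
    }

    /** racdin(c): relative ACD reduction if all incoming dependencies of c are removed. */
    double racdin(String c)  { double a = acd(); return a == 0 ? 0 : 1 - without(c, true).acd()  / a; }

    /** racdout(c): relative ACD reduction if all outgoing dependencies of c are removed. */
    double racdout(String c) { double a = acd(); return a == 0 ? 0 : 1 - without(c, false).acd() / a; }
}

The same pattern - remove a set of dependencies, recompute the system-level metric, compare - underlies all reduction metrics in this chapter.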


Chapter 12
Tool Support

In order to calculate the metrics defined in the previous chapter and to support the investigation of test-critical dependencies, we have implemented a prototype tool called Design2Test. This chapter describes the functionality of Design2Test and its architecture.

12.1 Overview of Design2Test

Design2Test has been implemented by Edgar Merl as a plug-in for the commercially available IDE Together 1 [Merl03]. The tool is based on the metric calculation module ImproveT, which has been implemented by the author. Design2Test uses Together to parse the source code and to display the metric results.

1 Together is a trademark of Borland Inc.

12.2 Functionality of Design2Test

The functionality of Design2Test includes the analysis of dependencies, the display of information about dependencies, the calculation and display of metric values, the display of dependency graphs, the browsing to source code, and the export of metric data and dependency graphs.

Analysis of Dependencies
Design2Test uses an API of Together to collect information about dependencies within a software system implemented in

Java and allows the user to exclude particular classes or packages from the analysis.

Display of Information about Dependencies
Design2Test provides a list of all class dependencies found within a system, including information about the category (e.g. hard-wired) of each dependency. The tool supports developers in understanding and refactoring a given dependency by providing information on the program statements causing it (Figure 31). From the list of class dependencies the user is able to browse to the source code. This helps to understand and refactor the program statements responsible for a particular class dependency.

Figure 31 Screenshot of Design2Test (lower half of window)

Metric Calculation
Design2Test calculates a set of metrics which are selected by the user. The list of available metrics includes metrics for dependencies and metrics for classes; a sketch of the per-dependency information involved is given below.
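The per-dependency information listed above can be pictured with a small illustrative type; the names below are invented for this sketch and are not taken from the actual Design2Test data model.

import java.util.List;

/** Dependency categories as used throughout this thesis. */
enum DependencyCategory { HARD_WIRED, SEMI_HARD_WIRED, TYPE_DEPENDENCY }

/** Hypothetical record of one entry in the dependency list (illustrative only). */
record ClassDependencyInfo(
        String clientClass,
        String supplierClass,
        DependencyCategory category,
        List<String> causingStatements) {}   // source positions of the statements causing the dependency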

Design2Test also allows the user to perform a 'what-if' analysis: when the user temporarily removes a set of dependencies from the dependency graph, the tool calculates a set of basic metrics for the resulting graph. In this way it is possible to evaluate the effect of this set of dependencies on the overall system.

Metric Display
The metric results are displayed as a table containing absolute or relative metric values. Metric values that exceed a particular threshold are highlighted in a different color. The table entries are sortable by metric value or by the names of the classes involved in the dependency.

Graphical View of Dependency Graph
Design2Test provides a graphical view of a dependency graph. If a node is selected within the graph view, all direct and indirect client and supplier nodes of this particular node are highlighted in a different color. This facilitates the understanding of transitive dependencies. Another color scheme offered by the tool helps to study dependency cycles. Up to now, the graph view is only useful for smaller dependency graphs, because the algorithm used to lay out the graph nodes automatically is not sophisticated enough and because scrolling is not supported. However, the user is able to lay out the nodes manually.

Export of Metric Data and Dependency Graph
Filters allow the metric data to be exported as an ASCII file and a dependency graph to be exported as a graph file in XML format. (Right now, only a stand-alone version of ImproveT is able to import a graph file.)

12.3 Architecture of Design2Test

This section describes the architecture of Design2Test from a process view and an architectural view.

12.3.1 Process View

Figure 32 shows the steps that Design2Test takes during metric calculation:

1 Enable definition of analysis options about which metrics to calculate and which classes or packages to neglect.
2 Parse Java source files (based on Together).
3 Build a dependency graph.
4 Calculate metric values for each selected metric.
5 Display results.

Figure 32 Process View on Design2Test (analysis options, parsing, graph construction, and metric calculation, followed by displaying, browsing, and exporting the results)

12.3.2 Architectural View

Design2Test is implemented as a plug-in for Together, which means that Design2Test is started from within Together. Design2Test uses Together via an API to parse Java source code files and to collect information about class dependencies (Figure 33).

Figure 33 Component view of Design2Test (Together with its GUI, IDE API, and parser; the Design2Test plug-in with its visitors, dependency analyzer, and GUI widgets; and ImproveT with its dependency graph, metrics, and graph and metric export components)

ImproveT is implemented in Java, calculates the metrics based on a dependency graph, and allows the dependency graph to be exported as a graph file. Design2Test uses ImproveT to create an instance of a dependency graph and to add nodes and dependencies to it. Additionally, Design2Test uses APIs of Together to display the metric results and to enable browsing from the metric results and the displayed dependency details to the source code and class diagram editors.
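The division of labour between the plug-in and ImproveT can be summarized in a short sketch. The interface and type names below are hypothetical stand-ins for the ImproveT API, chosen for illustration only; the sketch merely shows the flow of building a graph from parsed dependencies and performing the what-if analysis described in Section 12.2.

import java.util.*;

/** Hypothetical stand-in for the ImproveT graph interface (illustrative names only). */
interface MetricGraph {
    void addNode(String className);
    void addDependency(String client, String supplier);
    void removeDependency(String client, String supplier);
    double acd();   // system-level metric as defined in Chapter 8
}

/** One class dependency as delivered by the IDE's parser. */
record ParsedDependency(String client, String supplier) {}

class PluginFlow {
    /** Build the dependency graph from the parser output (step 3 of the process view). */
    static void buildGraph(MetricGraph graph, List<ParsedDependency> parsed) {
        for (ParsedDependency d : parsed) {
            graph.addNode(d.client());
            graph.addNode(d.supplier());
            graph.addDependency(d.client(), d.supplier());
        }
    }

    /** What-if analysis: temporarily remove a set of dependencies, measure ACD, then restore. */
    static double acdWithout(MetricGraph graph, List<ParsedDependency> removed) {
        for (ParsedDependency d : removed) graph.removeDependency(d.client(), d.supplier());
        double acd = graph.acd();
        for (ParsedDependency d : removed) graph.addDependency(d.client(), d.supplier());
        return acd;
    }
}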


Part IV
Case Studies

This part describes the results of three case studies which have been performed to validate our metrics and the parts of our approach which concern implementation artifacts.


Chapter 13
Outline of Case Studies

We have performed three case studies to validate our metrics and those parts of our approach concerning implementation artifacts. This chapter describes the investigated systems and issues.

13.1 Introduction

General difficulties to validate testability approaches
External quality characteristics like testability can't be measured directly. Instead, testability can only be measured indirectly, for example in terms of the test effort. Measuring the test effort requires actually defining test cases and executing them. The effort to do so is prohibitive for any system or component of reasonable size. This makes it generally difficult to validate approaches in the testability engineering field.

"[...] experiments [investigating the actual test effort] are difficult to control and even more difficult to have approved [...]." [McGr96]

"Some attributes are simply not easy to measure no matter what we do. Understandability, modifiability, portability, and testability all require a thorough inspection of the code of the software product, and even then it is not obvious what the inspectors should be looking for, nor what algorithms to apply once the data have been found." [Glas92]

Using case studies
We use case studies to validate our metrics and those parts of our approach which are related to the implementation activity. The case studies include the calculation of metrics and the creation of executable test cases for a subset of the classes. Validating all parts of our approach to testability deployment, however, would have required building a sufficiently large system from scratch, which was impossible because of resource limitations.

13.2 Investigated Systems

The systems investigated in the case studies are called A, B, and C. All three systems are based on the same set of requirements in the domain of conventional management information systems and have been created by groups of seven to eight graduate students during a software engineering laboratory project at the FernUniversität Hagen over a period of six months. The systems have been implemented using Java and consist on average of 217 classes, 1,497 dependencies between these classes, and 29,190 NCLOC 1 (Table 8).

metric                      A        B        C        average   % of total
number of classes
  concrete classes
  abstract classes
  interfaces
  total
number of dependencies      1,853    1,282    1,356    1,497     -
NCLOC                       32,002   32,180   23,388   29,190    -

Table 8 Size of systems under investigation

Table 9 shows the values of the system-level metrics ACD, NCDC, and NFD for each system. Within the investigated systems, a class depends on 30% of all other classes (metric ACD) on average. The values of metric NCDC indicate that about 42% (!) of the classes within systems A to C are involved in dependency cycles. These large values may result to some extent from the fact that avoiding dependency cycles was not an explicit design goal during the laboratory project.

metric                           A      B      C      average
ACD     absolute
        in % of classes
NCDC    absolute
        in % of classes
NFD     absolute
        in % of dependencies

Table 9 Testability metric values

1 non-comment lines of code

13.3 Investigated Issues

The following issues concerning dependencies and related metrics are investigated based on the case studies:

1 Distribution of reduction metric values:
Are the reduction metric values of different dependencies distinct enough in size to highlight outliers?

2 Correlation between reduction metrics:
Is the correlation between the values of the reduction metrics low? (If not, we could reduce the number of metrics to be measured without losing substantial information.)

3 Correlation between metric racd and existing coupling metrics:
Is the correlation between metric racd and existing coupling metrics low? (If not, metric racd does not provide any additional information beyond coupling metrics.)

4 Metric racd and effect on system structure:
Do systems exist where a limited number of dependencies has a more than proportional impact on the system structure in terms of metric ACD? The answer to this question is important for our ability to identify critical dependencies.

5 Metric racd and design errors:
To what degree are dependencies with a high racd value the result of poor design (i.e. introduced unintentionally, not necessary for functionality, or of a wrong type)? How easy are they to refactor?

6 Metric racd and effect on testing:
To what extent do dependencies with a high racd value have an effect on testing?

7 Metric racd and strength of dependencies:
The strength of a dependency is relevant for the effort needed to refactor it. How strong are dependencies with a high racd value?

8 Amount and cause of hard-wired dependencies:
How many hard-wired dependencies exist in the investigated systems and what is their origin?

9 Semi-hard-wired dependencies and refactoring:

Are semi-hard-wired dependencies with a low value of DSTMh easier to refactor into a type dependency than those with a high value of DSTMh?

10 Feedback dependencies:
Do feedback dependencies have a high racd value? If so, this would be a reason to remove feedback dependencies before searching for other test-critical dependencies.

11 Metric racdin and effect on system structure:
Are reduction metrics for classes, like metric racdin, a useful complement to reduction metrics for dependencies?

The main focus of the case studies is on metric racd and on issues concerning hard-wired dependencies.

Chapter 14
Results

This chapter describes the results of our case studies for each investigated issue.

14.1 Distribution of Reduction Metric Values

Are reduction metrics able to highlight outliers, i.e. are the metric values of different dependencies distinct enough in size?

Results
For each reduction metric, the percentage of dependencies with a metric value greater than zero varies from about 4% to 25% (Table 10). On average 1, 34% of the dependencies have at least one reduction metric value greater than zero (see the bottom row in Table 10).

criterion                                            percentage of dependencies
                                                     A       B       C       weighted average
racd > 0
racdh > 0
racdsh > 0
rnfd > 0
rncdc > 0
at least one reduction metric greater than zero

Table 10 Reduction metric values greater than zero

1 The average is weighted based on the number of dependencies within the system.

The values of the reduction metrics greater than zero vary over a wide range. Figures 34 to 38 show the values of the metrics racd, racdh, racdsh, rnfd, and rncdc in decreasing order for the first 185 dependencies (i.e. 10 percent of the dependencies) of system A. For systems B and C the distributions are similar for all metrics.

Figure 34 Distribution of racd-values (system A)

Figure 35 Distribution of racdh-values (system A)

Figure 36 Distribution of racdsh-values (system A)

Figure 37 Distribution of rnfd-values (system A)
Figure 38 Distribution of rncdc-values (system A)

The figures above show that the reduction metrics are indeed discriminative and have the potential to highlight outliers which should be evaluated more closely.

14.2 Correlation between Reduction Metrics

In case of a high correlation between the reduction metrics, it would not make sense to calculate all of them. To analyze the correlation between the reduction metrics racd, rnfd, and rncdc we use Spearman's rank correlation coefficient 1 for non-normal data.

Interpreting correlation coefficients
According to [Bühl02], the value of a correlation coefficient r is interpreted as follows:
r < 0.2: very low correlation
0.2 ≤ r < 0.5: low correlation
0.5 ≤ r < 0.7: medium correlation
0.7 ≤ r < 0.9: high correlation
0.9 ≤ r: very high correlation

Results
The correlation between the values of the reduction metrics racd, rnfd, and rncdc is medium in one case; in the remaining eight cases the correlation is low to very low (Table 11). Therefore it is impossible to reduce the number of reduction metrics to be measured without losing substantial information.

pair of reduction metrics      correlation coefficient
                               A       B       C
racd - rnfd
racd - rncdc
rnfd - rncdc

Table 11 Correlation between reduction metrics

1 To evaluate the correlation of non-normal data, [Bühl02] recommends using Spearman's rank correlation coefficient or Kendall's tau.
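For reference, Spearman's rank correlation coefficient is Pearson's correlation computed on the ranks of the two value series, with ties receiving the average of the ranks they occupy. A minimal, library-free Java sketch:

import java.util.*;

class Spearman {
    /** Ranks with ties resolved as average ranks (1-based). */
    static double[] ranks(double[] v) {
        Integer[] idx = new Integer[v.length];
        for (int i = 0; i < v.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(i -> v[i]));
        double[] r = new double[v.length];
        int i = 0;
        while (i < v.length) {
            int j = i;
            while (j + 1 < v.length && v[idx[j + 1]] == v[idx[i]]) j++;
            double avgRank = (i + j) / 2.0 + 1;            // average of 1-based positions i..j
            for (int k = i; k <= j; k++) r[idx[k]] = avgRank;
            i = j + 1;
        }
        return r;
    }

    /** Pearson correlation of two series of equal length. */
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
            syy += (y[i] - my) * (y[i] - my);
        }
        return sxy / Math.sqrt(sxx * syy);
    }

    /** Spearman's rho: Pearson correlation of the rank-transformed series. */
    static double rho(double[] x, double[] y) {
        return pearson(ranks(x), ranks(y));
    }
}

Applied, for example, to the racd and rnfd values of all dependencies of one system, rho() would yield coefficients of the kind reported in Table 11.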

14.3 Correlation between racd and Coupling Metrics

A high correlation between metric racd and existing coupling metrics like CBO, DAC, FO, MIC, RFC, and VOD would allow metric racd to be substituted by such a coupling metric.

Existing Coupling Metrics
Existing coupling metrics include:
CBO: Coupling Between Objects
DAC: Data Abstraction Coupling
FO: FanOut
MIC: Method Invocation Coupling
RFC: Response for Class
VOD: Violations of Demeter's Law
These metrics are explained in more detail in Appendix C.

Metric racd is a metric on the dependency level, while coupling metrics are class metrics. A direct comparison is therefore not possible. Instead, we analyze the correlation between a) the value of the racd metric for the dependencies and b) the coupling metric values of the client and supplier classes involved, again based on Spearman's rank correlation coefficient.

Results
For all investigated systems, there is only a very low to low correlation (Table 12). Therefore it is impossible to predict test-critical dependencies from the coupling values of the classes involved.

coupling                    correlation with racd
                            A       B       C
CBO    client class
       supplier class
DAC    client class
       supplier class
FO     client class
       supplier class
MIC    client class
       supplier class
RFC    client class
       supplier class
VOD    client class
       supplier class

Table 12 Correlation between racd and coupling metrics

Non-linear relationship between racd and CBO
The lack of a substantial correlation between racd and the coupling metric CBO means that there is no linear relationship between the metrics. However, we have observed a non-linear relationship between both metrics. Figure 39 is a boxplot of the coupling values for the client class (dark gray) and supplier class (light gray) of each dependency for system A.

Boxplots
The box in a boxplot shows the median value as a line and the first and third quartiles of a distribution as the lower and upper parts of the box. The whiskers shown above and below the boxes represent the largest and smallest observed data values that are less than 1.5 box lengths from the end of the box. Outlying and extreme values are shown as open circles or stars.

The dependencies within system A are grouped into categories based on the values of the racd metric, as shown in Table 13.

category   number of dependencies   [%]    values of racd(d)
0          1,...                           racd = 0
1                                          0 < racd ≤ ...
2                                          ... < racd ≤ ...
3                                          ... < racd ≤ ...
4                                          ... < racd

Table 13 Categories of dependencies (system A)

The boxplot for system A (Figure 39) exhibits a tendency that for the most test-critical dependencies (on the right) the coupling value of the client class (dark gray) is equal to or smaller than the coupling value of the supplier class (light gray).

Figure 39 Metric racd and CBO (system A)

A similar tendency can be observed for system B in Figure 40 (with a similar grouping of dependencies). Within system C, the difference between the coupling values of the client and supplier classes is at least smaller for highly test-critical dependencies than for other dependencies.

Figure 40 Metric racd and CBO (system B)

From these examples we can state that a dependency from a class with a low coupling value to a class with a high coupling value often has a greater effect on the system structure in terms of the ACD value than other dependencies. However, it is also not possible to use the quotient of the coupling metric values of the supplier and client class as a substitute for metric racd: the linear correlation between the values of metric racd and this quotient is very low (0.06 for system A).

14.4 Metric racd and Effect on System Structure

Does a small set of dependencies have a more than proportional impact on the system structure in terms of the metric ACD? If so, the dependencies within this set should receive closer attention. To investigate the potential effect of a set of dependencies on the system metric ACD, we remove this set of dependencies temporarily from the system and calculate metric ACD.

To define sets of dependencies, we start with the dependency with the highest racd value and add further dependencies incrementally in decreasing order of their racd values.

Results
Table 14 shows the effect of increasingly large sets of dependencies on metric ACD: the 0.5% of the dependencies with the highest racd values contribute 27% of the ACD value of the investigated systems on average, i.e. the contribution of these dependencies is 54 times higher than a proportional contribution (of 0.5%) on average! 10% of the dependencies contribute 56% of the ACD value on average.

dependencies excluded          reduction of ACD [%]
                               A      B      C      average
1 dependency
0.5 % of the dependencies                           27
...
10 % of the dependencies                            56

Table 14 Reduction of ACD

Figure 41 shows the increase in the ACD reduction with an increasing number of dependencies removed from system A; a sketch of the underlying procedure is given below.
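The procedure behind Table 14 and Figure 41 amounts to a simple greedy loop: sort the dependencies by decreasing racd value, remove them cumulatively, and record the relative ACD reduction after each step. The Java sketch below is illustrative; the function that computes ACD for the remaining dependency set is passed in (e.g. delegating to the metric tool).

import java.util.*;
import java.util.function.ToDoubleFunction;

class AcdReductionCurve {
    record Dep(String client, String supplier, double racd) {}

    /**
     * Removes dependencies cumulatively in decreasing order of their racd values
     * and records the relative ACD reduction after each removal.
     */
    static double[] curve(List<Dep> deps, ToDoubleFunction<Set<Dep>> acdOfRemaining) {
        List<Dep> sorted = new ArrayList<>(deps);
        sorted.sort(Comparator.comparingDouble((Dep d) -> d.racd()).reversed());

        Set<Dep> remaining = new LinkedHashSet<>(sorted);
        double acd0 = acdOfRemaining.applyAsDouble(remaining);

        double[] reduction = new double[sorted.size()];
        for (int i = 0; i < sorted.size(); i++) {
            remaining.remove(sorted.get(i));
            double acd = acdOfRemaining.applyAsDouble(remaining);
            reduction[i] = acd0 == 0 ? 0 : 1.0 - acd / acd0;   // relative reduction, cf. Table 14
        }
        return reduction;
    }
}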

Figure 41 Reduction of ACD (system A)

According to our results, a small percentage of dependencies indeed has a very high effect on the overall system structure in terms of metric ACD. As mentioned before, a high value of metric racd does not imply that the dependency is necessarily bad. However, if the dependency can be removed, the effect on the system structure and on testing is large, which justifies a closer look at the dependency.

14.5 Metric racd and Design Errors

How often do high values of metric racd indicate errors in the system design? To answer this question we have manually evaluated 33 of the dependencies with the highest racd values within system A.

Results
We have identified the following design problems within system A:

p1 Violation of layered architecture:

A class belonging to a lower layer (e.g. an entity class) accesses a class belonging to a higher layer (e.g. a control class), which violates the intended layered architecture.

p2 Access to global resources:
An additional level of indirection is used to access instances of singleton classes (i.e. classes implementing the singleton pattern [Gamm94]). For example, access to the class representing the database is provided via the main control class, which increases the dependencies to the control class layer significantly.

p3 Cyclic GUI navigation:
Cyclic navigation dependencies between windows are directly mapped onto cyclic dependencies between interface classes, with all negative consequences.

p4 Entity classes as façades:
Access to the instances of associated entity classes is not gained via the classes administrating the associations (called association classes) but via the entity classes involved in the association. This design decision leads to an additional level of indirection and to cyclic dependencies between entity classes and association classes.

p5 Superfluous dependency:
The dependency is not necessary and can be removed without further refactoring. The client class is not actually used within the entire system or the supplier class does not provide any functionality.

p6 Two top-level classes in one file:
A class which should be an inner class is designed as a top-level class within the same file as its client class, which breaks commonly accepted programming guidelines.

Table 15 shows an overview of the selected dependencies (sorted by decreasing racd value) and how they relate to design problems (for more details please refer to [Merl03]). The meanings of the columns are:
Column dependency d specifies the dependency in terms of the names of the client and supplier classes.
Column within cycle indicates whether the dependency is involved in a dependency cycle.
Column offset indicates whether the dependency d is the direct source of the design problem indicated in column design problem, or if a close-by dependency is actually the

source of the problem. The value of the offset describes how far this dependency is from dependency d. For example, an offset of 1 means that an adjacent 1 dependency is the source of the design problem.
Column category indicates the category of the dependency. The meanings of the entries are as follows: h: hard-wired, s: semi-hard-wired, t: type dependency.

Hub dependency
Column hub dependency indicates whether the dependency d is a hub dependency, i.e. a dependency which represents the only connection between two distinct areas of the dependency graph.

Nr  dependency d (client -> supplier)  racd [%]  within cycle  design problem  offset  category  hub dep.
1   Dozentenvereinbarung -> DVKostenBerechnenK  c  p1 p5  0  h
2   SeminarisK -> SeminarisDatenbank  c  p2  0  s
3   DruckDokumentDK -> SeminarisK  h  b
4   SVAendernErfassenAA -> SeminartypAuswaehlenAA  p1  0  h  b
5   DVKostenBerechnenK -> DozentenvereinbarungLoeschenK  c  p5  0  h
6   SVAendernErfassenAA -> LeitungsauftragAuswaehlenAA  c  p3  0  h
7   LeitungsauftragAendernErfassenAA -> SVAuswaehlenAA  c  p3  0  h
8   SeminarisH -> SachbearbeiterA  c  h
9   SeminarbelegungAA -> SVAuswaehlenAA  h  b
10  ErweitertPersistent -> SeminarisDatenbank  c  p2  0  h
11  SeminartypAendernErfassenAA -> DozentenvereinbarungAuswaehlenAA  h
12  SVKurzS -> Seminarveranstaltung  t
13  MeldungsAnzeigeA -> ExceptionNachrichtA  h
14  SachbearbeiterA -> SachbearbeiterK  p5  0  h
15  SeminarisDatenbank -> CommitOrRollbackWithoutStartTransactionException  h
16  DVKostenBerechnenK -> Zeitpunkt  h

1 An adjacent dependency w.r.t. a dependency d is a dependency (distinct from d) which originates or ends in the client or supplier class of d.

17  TeilnahmeFirmenSVRelation -> TeilnahmeFirmenSVPaar  c  p4  1  h  b
18  SVSeminartypRelation -> SVSeminartypPaar  c  p4  1  h  b
19  SVAnsprechpartnerRelation -> SVAnsprechpartnerPaar  c  p4  1  h  b
20  SeminarisK -> LogProtokollDateiA  c  p1 p2  1  h
21  SeminarisK -> DebugProtokollDateiA  c  p1 p2  1  h
22  AnstellungRelation -> AnstellungPaar  c  p4  1  h  b
23  BelegungRelation -> BelegungTripel  c  p4  1  h  b
24  BuchtRelation -> BuchtPaar  c  p4  1  h  b
25  DozentenvereinbarungRelation -> DozentenvereinbarungAnfragen  c  p1  0  s  b
26  DozentenvereinbarungTripel -> DVSchluessel  c  p6  0  h  b
27  FirmenAnsprechpartnerRelation -> FirmenAnsprechpartnerPaar  c  p4  1  h  b
28  LeitungsauftragRelation -> LeitungsauftragTripel  c  p4  1  h  b
29  RechnungsempfaengerRelation -> RechnungsempfaengerPaar  c  p4  1  h  b
30  DozentenvereinbarungLoeschenK -> IDozentenvereinbarungK  h
31  Seminartyp -> SeminartypOrdner  c  p4  0  s
32  SeminarveranstaltungOrdner -> OeffentlicheSeminarveranstaltung  c  p4  1  h
33  ParameterEingabeModel -> IAnfrageParameter  t

Table 15 Dependencies of system A with the highest racd values

The following observations can be made in Table 15:
70% of the dependencies (23 out of 33) are directly related to design problems:
5 violations of layered architecture (p1)
4 accesses to global resources (p2)
2 cyclic GUI navigations (p3)
11 entity classes as facades (p4)
3 superfluous dependencies (p5)
1 case of two top-level classes in one file (p6)
About half of these dependencies (11 out of 23) are a direct hit in terms of problem identification (i.e. offset = 0). In twelve other cases, an adjacent dependency is the actual source of the problem.
22 dependencies (67%) are involved in dependency cycles.
Most of the dependencies related to design problems (87%) are hard-wired.

About half of the dependencies related to design problems (12 out of 23) are hub dependencies.
One third of direct hits and another third of close-by hits indicate that it is worthwhile (at least in the studied system) to investigate the dependencies with a high value of metric racd.

14.6 Metric racd and Effect on Testing

We have studied the design problems described in the previous section to investigate the degree to which dependencies with a large value of racd affect testing.

Results
The design problems have the following effects on testing:

p1 - Violation of layered architecture:
Classes within the lower layers can't be tested independently from classes within higher layers. Additionally, this design problem is a main driver in increasing the overall number of (direct and) indirect class dependencies (metric ACD).

p2 - Access to global resources:
The main global resource in system A is the database. The access from other classes to the database is mainly hard-wired, which hinders its substitution. In combination with design problem p1 and a large number of hard-wired dependencies this leads to a large number of classes depending on the database indirectly via hard-wired dependencies. This has a significant impact on testing, because a large number of classes can't be tested independently from the database, even if the functionality tested does not depend on it.

p3 - Cyclic GUI navigation:
Cyclic dependencies between the interface classes cause test problems if the interface classes are unit tested.

p4 - Entity classes as facades:
The cyclic relationship between entity classes and association classes has a negative effect on testing. Another effect of designing the entity classes as facades is, however, more severe: the dependency of the entity classes on the association classes, in combination with problem p2, makes it

impossible to test the entity classes without involving the database.
Note: While Table 15 indicates that most design problems concern dependencies which are involved in dependency cycles, not all test problems in the example are the result of dependency cycles.

p5 - Superfluous dependency:
An unneeded dependency may cause test problems if it occurs in combination with other design problems. This is e.g. the case for the dependency with the highest racd value (dependency 1 in Table 15).

p6 - Two top-level classes in one file:
Putting two top-level classes into one source code file is allowed by the Java syntax but breaks common programming conventions and may cause minor test problems: the classes involved can't be compiled separately and can't be managed separately by configuration management, because the one-to-one relationship between source files and classes is lost.

14.7 Metric racd and Strength of Dependencies

The strength of a dependency is relevant for the effort needed to refactor it. How strong are dependencies with a high racd value?

Results
We use metric DSTM to measure the strength of a dependency between two classes. Figures 42 and 43 show for systems A and B that the dependencies with the highest racd values (on the right) are caused only by a small to medium number of distinct 1 source code statements. For system C (with one outlier) this trend is similar. This observation indicates to some extent that the effort to remove or refactor test-critical dependencies is not prohibitive in general.

1 See definition of metric DSTM in Section

Figure 42 Metric racd and strength of dependency (system A)

Figure 43 Metric racd and strength of dependency (system B)

14.8 Amount and Cause of Hard-Wired Dependencies

We investigated the amount and cause of hard-wired dependencies.

Results
On average, 47% of the dependencies are hard-wired and 3% are semi-hard-wired (Table 16). This means that half of the dependencies are either hard-wired or semi-hard-wired within the investigated systems. With respect to the cause of hard-wired dependencies we have observed the following: 76% of the hard-wired dependencies were caused by calls to constructors and pseudo-constructors, 18% by inheritance, and 6% by sole access to static or final class members. (No classes were declared final, which excluded this source of hard-wired dependencies.)
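To make the dominant cause concrete, the following illustrative Java fragment (class names invented, not taken from the investigated systems) contrasts a hard-wired dependency introduced by a constructor call with a pure type dependency on an interface:

// Hard-wired: the client creates the concrete supplier itself, so the supplier
// cannot be replaced by a stub or mock without changing the client.
class ReportPrinter {
    void print(String reportId) {
        Database db = new Database();          // constructor call -> hard-wired dependency
        System.out.println(db.load(reportId));
    }
}

// Type dependency only: the client relies on an interface and receives the
// concrete instance from outside, so a test can pass in a stub implementation.
interface ReportSource { String load(String reportId); }

class TestableReportPrinter {
    private final ReportSource source;
    TestableReportPrinter(ReportSource source) { this.source = source; }

    void print(String reportId) {
        System.out.println(source.load(reportId));
    }
}

class Database implements ReportSource {
    public String load(String reportId) { return "...data for " + reportId + "..."; }
}

In a unit test, TestableReportPrinter can be exercised with a trivial in-memory ReportSource, whereas ReportPrinter always drags the real Database into the test.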

type of dependency              in percent of total
                                A      B      C      average
type dependency
  to class
  to abstract class
  to interface
  sum
hard-wired dependency
  inheritance dependency
  create
  final class
  static/final access only
  sum
semi-hard-wired dependency

Table 16 Number of dependencies per category

47.5% of the classes depend on at least one direct supplier class by a hard-wired dependency (Table 17). This means that a test in complete isolation is impossible for about half of the classes. 75.5% of the classes depend on at least one direct supplier class by a semi-hard-wired or hard-wired dependency. This means that the ability to test in isolation is limited for an additional quarter of the classes.

metric                                                   A      B      C      av.
number of classes involved in
hard-wired dependencies              absolute
                                     in % of total                            47.5
number of classes involved in
semi-hard-wired and hard-wired
dependencies                         absolute
                                     in % of total                            75.5

Table 17 Number of classes involved in (semi-)hard-wired dependencies

A client class involved in a hard-wired dependency depends directly and indirectly on 26.9 classes by hard-wired dependencies

on average, and on 28.1 classes if semi-hard-wired dependencies are considered as well (Table 18).

metric                           A      B      C      average
ACD     absolute
        in % of classes
ACDh    absolute
        in % of classes
ACDsh   absolute
        in % of classes

Table 18 Values of metrics ACD, ACDh, and ACDsh

Overall, the results indicate that hard-wired dependencies play a significant role within the investigated systems, especially those caused by object creation.

14.9 Semi-Hard-Wired Dependencies and Refactoring

Are semi-hard-wired dependencies with a low value of DSTMh easier to refactor into a type dependency than those with a high value of DSTMh? To investigate this issue we have analyzed system A', an improved version of system A with respect to hard-wired dependencies and metric ACD (i.e. several hard-wired dependencies introduced by object creation have been removed by introducing class factories and the value of metric ACD has been reduced). From system A' we have selected two sets of semi-hard-wired dependencies:
Set s_DSTMh consists of 50 semi-hard-wired dependencies with the lowest values of metric DSTMh.
Set s_random consists of 50 semi-hard-wired dependencies which have been chosen randomly from the system.
Note: One dependency chosen randomly for set s_random was a member of set s_DSTMh as well.

Results concerning the cause of semi-hard-wired dependencies
The causes of semi-hard-wired dependencies within both sets could be categorized as follows:

1 Call of a constructor or pseudo-constructor (without any further access causing a semi-hard-wired dependency).
2 Call of a constructor or pseudo-constructor and access to a static member.
3 Access to a static method of a global resource like an association class (implemented as a singleton) or a class providing a reference to a class which implements a protocol manager or encapsulates the database.
4 Access to a static or final member (i.e. all other cases not covered by case 3).

Table 19 gives an overview of the number of semi-hard-wired dependencies per underlying cause:
Within set s_DSTMh, the major cause of a dependency being semi-hard-wired is the call of a constructor or pseudo-constructor of the supplier class (78%).
Access to a static method of a global resource is the largest source of semi-hard-wired dependencies within set s_random (48%), while this source plays no role in set s_DSTMh (0%).
Within both sets, the percentage of semi-hard-wired dependencies caused by access to static or final members is similar (18% and 22%).

cause of semi-hard-wired dependencies                number of dependencies
                                                     s_DSTMh            s_random
                                                     abs.    [%]        abs.    [%]
call of (pseudo-)constructor                                 78
call of (pseudo-)constructor and static access
access to static method of global resource                   0                  48
access to static or final member
total                                                50      100        50      100

Table 19 Cause of semi-hard-wired dependencies

The difference in the causes of semi-hard-wired dependencies between the two sets is significant (i.e. the observed significance level of the Pearson chi-square test is smaller than 0.001) and can be explained as follows:
Client classes accessing a global resource commonly call one static method (like getSingleInstance()) to get a reference to the resource and do not call a large number of

other methods of the supplier class, which results in a large value of DSTMh.
Client classes directly creating instances of supplier classes by calling their constructors or pseudo-constructors usually use the supplier instance and access a number of instance methods. This results in a smaller value of metric DSTMh.
The difference in the underlying causes of the semi-hard-wired dependencies between sets s_DSTMh and s_random is relevant for the difficulty of refactoring.

Results concerning the difficulty of refactoring
In the course of an ongoing master thesis, Alexander Müller evaluated the difficulty of refactoring the dependencies in both sets s_DSTMh and s_random. This evaluation is based on example refactorings of dependencies. The refactorings are categorized according to the difficulty of realizing them as follows:

Easy to refactor:
Refactoring a static method of the supplier class into a non-static method if a supplier instance is always available to the client class.
Removing a constant from the supplier class (which is to be used as a method parameter to indicate different cases) and implementing dedicated methods instead (one for each case).
Avoiding a constant by moving it from the supplier class to the client class.

Medium difficult to refactor:
Refactoring a direct instantiation of the supplier class, e.g. with the help of a class factory (see the sketch below).

Difficult to refactor:
Refactoring the access to reflective instance properties (in Java, access to the static attribute class) which is required by the persistency framework.
Refactoring the access to global resources.
Refactoring the access to static methods which adhere to the initial system design.

Table 20 gives an overview of the results for both sets s_random and s_DSTMh: within s_DSTMh, the majority of the refactorings are easy or medium difficult to realize, while in s_random the majority of the refactorings are difficult to realize.
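The 'medium difficult' case can be sketched as follows; this is an illustrative refactoring under invented names, not one of the refactorings actually performed in the case study.

// Before: the client is (semi-)hard-wired to the concrete supplier via 'new'.
class OrderService {
    void process(String orderId) {
        InvoiceWriter writer = new InvoiceWriter();   // direct instantiation
        writer.write(orderId);
    }
}

// After: creation is delegated to a class factory; the client now relies on the
// supplier's interface type, and a test can install a factory that returns a stub.
interface Invoice { void write(String orderId); }

class InvoiceWriter implements Invoice {
    public void write(String orderId) { /* write the real invoice */ }
}

class InvoiceFactory {
    private static Invoice instanceForTest;           // optional test hook

    static void install(Invoice testInstance) { instanceForTest = testInstance; }

    static Invoice create() {
        return instanceForTest != null ? instanceForTest : new InvoiceWriter();
    }
}

class RefactoredOrderService {
    void process(String orderId) {
        Invoice writer = InvoiceFactory.create();     // no hard-wired 'new' in the client
        writer.write(orderId);
    }
}

The client's remaining dependencies are on the interface and the factory; the concrete InvoiceWriter can be replaced by a stub during unit testing.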

cause of semi-hard-wired dependencies                refactoring effort
                                                     s_DSTMh                      s_random
                                                     easy   medium   difficult    easy   medium   difficult
call of (pseudo-)constructor
call of (pseudo-)constructor and static access
access to global resource                                                                         24
access to static or final member
total (absolute)
total [%]

Table 20 Effort to refactor dependencies

The above results indicate that (at least within the investigated system) metric DSTMh can indeed be used to identify semi-hard-wired dependencies which can be refactored more easily into a type dependency than other semi-hard-wired dependencies.

14.10 Feedback Dependencies

We have investigated whether feedback dependencies have a large effect on the metric ACD.

Results
There are 79 feedback dependencies in system A, 63 in system B, and 34 in system C. The mean of the racd values for feedback dependencies is higher than the mean of the racd values for other dependencies (Table 21). It is therefore meaningful to remove feedback dependencies before removing other dependencies, not only in order to remove the dependency cycles but also to reduce the ACD-value of the system.

mean racd value                A      B      C
non-feedback dependencies
feedback dependencies

Table 21 Feedback dependencies and metric racd

203 Chapter 14: Results Metric racdin and Effect on System Structure Are reduction metrics for classes like metric racdin a useful complement to reduction metrics for dependencies? We have calculated the values of metric racdin and racdout for the classes of system A. Results The highest value of metric racdin is 17.8%, the highest value of metric racdout is 17.2%. Figure 44 shows the distribution of the metric values for metric racdin and all classes of the system. (The distribution for metric racdout is similar). Figure 44 Distribution of racdin-values (system A) The six classes 1 with the highest values of metric racdin, for example, are all involved in design problems described in Section 14.5 and Table 15. Additionally, metric racdin was able to identify sets of dependencies which were not identified directly 1 This classes are SeminarisDatenbank, Dozentenvereinbarung, DVKostenBerechnenK, SeminarisK, SVAuswaehlenAA, and SVAendernErfassenAA.

by metric racd [Merl03]. Based on these observations we confirm that metrics racdin and racdout are useful complements to the reduction metrics for dependencies.

Part V
Conclusion

This part summarizes the main contributions and results of this work. Moreover, it points out some limitations of the thesis work and describes open research issues for future work in this area.


Chapter 15
Summary

Testing is the main technique to evaluate the quality of software systems and to detect errors. Unfortunately, testing consumes considerable amounts of resources. Common approaches to improve the efficiency and effectiveness of the test process are to improve the test criteria, techniques, and tools used, as well as to increase the degree of test automation. Another important, yet not well studied factor contributing to test efficiency and effectiveness is the degree to which the software itself facilitates testing, called testability.

A lack of testability has severe effects on testing, for example:
Within system A it was in general impossible to test classes in isolation. Instead, it was necessary to deal with a large number of supplier classes and the underlying database during unit testing.
In a company known to the author, it was impossible to run unit tests for a system with several million lines of code because of a lack of testability. The system tests without previous unit tests, however, were not effective and efficient enough concerning error detection, resulting in the need to temporarily employ more testers than developers.
The design of the tool developed in the context of this thesis was initially oriented too much towards database and performance issues, which made testing much more difficult and harder to automate. The integration of the tool into Together and the resulting need to start and run Together in order to test the tool made testing and fault isolation more tedious and difficult, too.

The main lesson learned from the examples above is that testability is an important requirement which has to be identified and specified at the beginning of a software development project and which has to be deployed throughout the entire software lifecycle. The possibilities to improve testability after the design activity are limited, mainly because the required refactoring effort exceeds practical limitations. Not taking care of testability

from the beginning can therefore hardly be compensated at later development stages and endangers project success.

Lack of a systematic approach
Unfortunately, a systematic approach to testability, covering all major development activities and including detailed constructive and analytical measures, has not been published so far, and very few software developers treat testability as an explicit requirement and design goal.

Focus of the thesis
The intention of this thesis is to make a contribution towards a systematic approach to testability. The thesis focuses on testability in the context of conventional object-oriented software systems, especially on the effect of software dependencies on testing.

Main results
The main results of this thesis are:
a systematic approach to deploy testability throughout the entire life-cycle, focusing on software dependencies,
the definition of different categories of dependencies based on their effect on unit testing,
a general approach to define metrics for dependencies,
a set of testability metrics related to test tasks,
the tool Design2Test which supports the metrics, and
three case studies on the effects of dependencies on testing.

Systematic approach to testability deployment
Our approach describes for each software development activity the sequence of steps necessary to control the effect of dependencies on testability. These steps include identifying, evaluating, designing, avoiding, and refactoring dependencies (in the context of testability) within use case models, class diagrams, and source code, as well as identifying and categorizing classes relevant to testing. Design guidelines concerning dependencies, techniques and metrics to evaluate dependencies, and a tool for calculating the metrics support our approach.

Categories of dependencies based on their effect on unit testing
We introduced three categories of dependencies which have a different effect on the ability to test classes in isolation: hard-wired dependencies (which hinder testing in isolation), semi-hard-wired dependencies, and type dependencies (without a negative effect on testing in isolation). Distinguishing these

different categories of dependencies allows related design guidelines and metrics to be defined and helps to avoid unit testing problems.

General approach to define metrics for dependencies
We invented a generic approach to define metrics for dependencies, called reduction metrics. A reduction metric measures the degree to which a characteristic of a software system is sensitive to the existence or type of a given dependency, which makes it well suited to identify local dependencies with a global effect on the entire system. The definition of a reduction metric can be based on any system-level metric.

Testability Metrics
We defined a set of basic testability metrics which allows the effect of indirect dependencies and dependency cycles on testing to be evaluated. From this basic metric set we derived additional metrics to evaluate the ability to test the classes in isolation, based on different categories of dependencies. Reduction metrics based on our basic and derived sets of metrics allow individual dependencies with a large effect on the system structure and unit testing to be identified.

Tool Design2Test
Our tool Design2Test analyzes dependencies within Java source code, calculates our metrics, and allows the causes of individual dependencies to be studied within the source code. Design2Test has been integrated into a commercially available IDE. This IDE represents class diagrams as source code. In this way it is possible to calculate some of our metrics for class diagrams as well.

Case Studies
We performed case studies to validate the analytical measures of our approach. The case studies included the metric analysis of three systems, each consisting of about 30,000 non-commented lines of code, and an in-depth analysis of one of these projects. The results of the case studies indicate that our metrics indeed have the discriminative power to highlight critical dependencies, i.e. dependencies with a large effect on the dependency structure of the system. Within the case studies, our metrics were also good indicators of test and design problems. What

finally became once again rather obvious is that systems not designed with testing in mind are very hard to test.

Chapter 16
Future Work

Software testability is a broad topic which grows in complexity with each new development and testing technology. The scope of this thesis is, of course, due to limited resources, rather small. The results of this thesis, however, lead to a new set of open research issues, especially with respect to process and metric issues.

16.1 Testability Engineering Process

We have defined the necessary steps to deploy testability throughout all development activities, focusing on dependencies. To apply this approach within an industrial context it is important to widen the scope and to address other testability factors, like controllability and observability, as well. In addition to describing the steps of a process, it is also necessary to address the following process issues which have not been covered by this thesis:
definition of roles and responsibilities related to testability,
training of the developers and testers in design for testability and in testability evaluation techniques like reviews,
development of guidelines and checklists to be used during training, development, and reviews, and
adoption of software artifact templates to account for testability issues.
Addressing these issues helps to establish a full-scale testability engineering process. Further case studies are required to validate such a testability engineering process.

16.2 Improvement of Testability Metrics

Our main goal w.r.t. metrics is to provide immediate feedback to designers and programmers about the global impact of local dependencies, e.g. while using an integrated software development environment. Our testability metrics are an important step towards this goal. Possible further improvements and developments concerning our metrics include:
Reducing the amount of calculation necessary to determine metric values for different system versions. This enables faster feedback to the developers.
Improving the ability to highlight test-critical dependencies in the presence of redundant dependencies.
Developing metrics which indicate violations of design guidelines related to test-critical dependencies.
Developing metrics which account for polymorphic dependencies between classes.
Enhancing our metrics in terms of considering weights for different categories of classes.
A prerequisite for developing new metrics is to collect additional empirical data to further validate our existing metrics and to study e.g. the effect of dynamic dependencies or specific class categories on testing.

16.3 Outlook

Despite several contributions from other researchers and us, research on software testability is still in its infancy, and only a few companies explicitly take care of designing software systems with testing in mind. However, we believe that the software field will follow the hardware field, where mature technology to deploy testability is available and where testability has been an explicit design goal for several decades. As an example, about 3 to 5 percent of the area within a highly integrated circuit is devoted to testability features [Jung02c]. Therefore it seems realistic when Terence Colligan [Coll97] suggests spending about five percent of the total engineering time within software development projects on creating support for test automation. Standards, conferences, and business roles dedicated to testability are a reality within the hardware domain. We expect something similar to happen in the software field soon, as Sam Guckenheimer from Rational Software put it:

"But I believe that design for testability standards in the software industry are really going to take off in this decade." [Adam01]


Appendix


Appendix A
Glossary

abstract class
A class which can't be instantiated.

class
In the context of this work: a synonym for a concrete class, abstract class, or interface.

class dependency
A syntactic dependency between two classes of an object-oriented software system.

concrete class
A class which can be instantiated.

CUT
The class under test.

dependency cycle
A set of dependencies where each class involved depends directly or indirectly on each other class involved in this set of dependencies.

dependency graph
The notation used in this work to represent the structure of the classes and class dependencies within an object-oriented software system.

Design2Test
The metric tool implemented as part of this thesis.

driver
See test driver.

hard-to-test class
A class is hard to test if it is difficult to test without any additional testability features or if its functionality is excessively complex.

hard-wired dependency
A dependency from a client class to a supplier class is hard-wired 1 if it is impossible to redefine any implementation detail of the supplier class and 2 if it is impossible to define the value of any non-private attribute of the supplier class the client class relies on, without changing the implementation of the client class (or any of its superclasses) or the supplier class.

interface
In the context of Java, an interface is a special kind of class without any method implementations.

mock
A mock contains more functionality than a stub, e.g. it is possible to define the expected behavior of the CUT.

pseudo-constructor
A static operation of a class which returns a new instance of it.

refactoring
"Refactoring is the process of rewriting written material to improve its readability or structure, with the explicit purpose of keeping its meaning or behavior." [Wiki]

semi-hard-wired dependency
A dependency from a client class to a supplier class is semi-hard-wired if 1 some (but not all) implementation details of the supplier class can't be redefined or 2 some (but not all) values of non-private attributes of the supplier class the client class relies on can't be defined, without changing the implementation of the client class (or any of its superclasses) or the supplier class.

stub
A skeletal or special-purpose implementation of a class, used to develop or test a class that calls or is otherwise dependent on it. (Adapted from [IEEE90])

testing
The process of operating a system, component, or class under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system, component, or class. (Adapted from [IEEE90])

test-critical class
A class is test-critical if a test problem within this class may have a serious impact on the test process and development success.

test-critical dependency
A dependency is called test-critical if its existence or type (potentially) has a large effect on the overall test process (compared with other dependencies within the system).

test-sensitive class
A class is test-sensitive if it is not subject to direct testing (i.e. it is never a class under test itself) but if it is involved in the test of other classes and has a negative impact on testing.

test driver
A class used to invoke a CUT and, often, provide test inputs, control and monitor execution, and report test results. (Adapted from [IEEE90])

test oracle
A test oracle is a source of expected results for a test case.

testability
The degree to which a software artifact facilitates testing in a given test context.

type dependency
A dependency from a client class to a supplier class is called a type dependency if it is possible 1 to redefine all implementation details of the supplier class and 2 to define all values of non-private attributes of the supplier class the client class relies on, without changing the implementation of the client class (or any of its superclasses) or the supplier class.

UML
Unified Modelling Language [UML01]


Appendix B
Java - Syntactic Dependencies

This appendix describes sources of syntactic dependencies between the classes of a program written in Java.

B.1 Generalization

B.1.1 Inheritance
Inheritance between two classes is indicated by the Java keyword extends and leads to a class dependency between the subclass (the client) and the superclass (the supplier). If an anonymous class inherits from a superclass, this dependency is attributed to the containing class.

B.1.2 Inheritance between Interfaces
Inheritance between two interfaces is indicated by the Java keyword extends and leads to a class dependency between the inheriting interface (the client) and the interface it inherits from (the supplier).

B.1.3 Implementation of an Interface
An implements relationship from a class to an interface is indicated by the Java keyword implements and leads to a class dependency between the class (the client) and the interface (the supplier).
Special case: An anonymous class is declared to implement an interface type without using the implements keyword - the keyword new is simply followed by an interface type. For an example see Figure App.1, where ActionListener is an interface type.

public class ClassXY {
    ...
    ActionListener a = new ActionListener() { ... };
}

Figure App.1 Anonymous class implementing an interface

If an anonymous class implements an interface, this dependency is attributed to the containing class.

B.2 Type Declarations

B.2.1 Type of Method Parameter
A type declaration concerning a method parameter results in a dependency between the class containing the method (the client) and the class specifying the type (the supplier).

B.2.2 Type of Return Value
A type declaration concerning the return value of a method results in a dependency between the class containing the method (the client) and the class specifying the type (the supplier).

B.2.3 Type Declaration within Throws-Clause
A method may throw an exception of a specific type, which is indicated by the Java keyword throws in the method declaration. This causes a class dependency between the class containing the method (the client) and the class which specifies the type of the exception (the supplier).

B.2.4 Type of Parameter within Catch-Clause
A method may contain a catch-clause. A catch-clause specifies the type of the exceptions that are caught and processed. This causes a class dependency between the class containing the method (the client) and the class which specifies the type of the exception (the supplier).
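For illustration (all class names invented), the following fragment combines the four cases above; the parameter, return, throws, and catch types each introduce a class dependency from ClassA to the respective supplier:

class Order {}
class Receipt {}
class PaymentException extends Exception {
    PaymentException(Throwable cause) { super(cause); }
}

class ClassA {
    // Parameter type (B.2.1), return type (B.2.2), and throws clause (B.2.3)
    // cause dependencies from ClassA to Order, Receipt, and PaymentException.
    Receipt handle(Order order) throws PaymentException {
        try {
            return new Receipt();
        } catch (RuntimeException e) {     // catch-clause type (B.2.4): dependency to RuntimeException
            throw new PaymentException(e);
        }
    }
}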

B.2.5 Type of an Attribute

A class may contain a declaration of an attribute of a specific type. This causes a class dependency between the class declaring the attribute (the client) and the class which specifies the type of the attribute (the supplier).

B.2.6 Type of a Local Variable

The body of a method may contain the declaration of a local variable. This causes a class dependency between the class containing the method (the client) and the class which specifies the type of the local variable (the supplier).

B.2.7 Initialization of Array

The type of an attribute or local variable may be an array. An array may be initialized using the Java keyword new followed by the specification of the type and the dimensions of the array (Figure App.2).

ClassXY[] attribut1 = new ClassXY[5];

Figure App.2 Example for the initialization of an array

This causes a class dependency between the class containing the attribute or local variable (the client) and the class specifying the type used in the array declaration (the supplier).

B.2.8 Cast Operator

A cast can be used to change the type of an object reference to the type used to construct the object or to one of its supertypes. This causes a dependency between the class containing the cast statement (the client) and the class specifying the type used within the cast statement (the supplier).

B.2.9 Instanceof Operator

The Java operator instanceof can be used to check whether an object belongs to a given type or not.

This causes a class dependency between the class using the instanceof operator (the client) and the class specifying the type used within the instanceof operator (the supplier).

B.3 Member Access

B.3.1 Read Access to an Attribute

The implementation of a method may access a non-static attribute of another class to read its value. This causes a class dependency between the class containing the method (the client) and the class that contains the attribute (the supplier).

Notes: Accesses to local variables within a method are not relevant during the analysis of class dependencies. If the attribute is accessed via a class B but actually defined in a class A, then the dependency goes to the class A.

B.3.2 Write Access to an Attribute

The implementation of a method may access a non-static attribute of another class to write its value. This causes a class dependency between the class containing the method (the client) and the class that contains the attribute (the supplier).

Notes: Accesses to local variables within a method are not relevant during the analysis of class dependencies. If the attribute is accessed via a class B but actually defined in a class A, then the dependency goes to the class A.

B.3.3 Read Access to a Static Attribute

The implementation of a method may access a static attribute of another class to read its value. This causes a class dependency between the class containing the method (the client) and the class that contains the attribute (the supplier).
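A compact invented example of the attribute accesses listed in B.3.1 to B.3.3; the classes Sensor and SensorMonitor are hypothetical, and each commented statement creates a dependency from SensorMonitor to Sensor.

class Sensor {
    int value;               // non-static attribute
    static int sensorCount;  // static attribute
}

public class SensorMonitor {
    void update(Sensor s) {
        int v = s.value;             // B.3.1: read access to an attribute
        s.value = v + 1;             // B.3.2: write access to an attribute
        int n = Sensor.sensorCount;  // B.3.3: read access to a static attribute
    }
}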

B.3.4 Write Access to a Static Attribute

The implementation of a method may access a static attribute of another class to write its value. The resulting dependency goes from the class containing the method (the client) to the class that contains the attribute (the supplier).

B.3.5 Method Call

A method of one class may call a non-static method of another concrete or abstract class. This causes a class dependency between the class containing the calling method (the client) and the class containing the called method (the supplier).

Note: If a class A calls a method m() of a class B1, but B1 inherits m() from its superclass B without redefining it, then the resulting dependency goes from A to B.

B.3.6 Static Method Call

A method of one class may call a (non-constructor) static method of another concrete or abstract class. This causes a class dependency between the class containing the calling method (the client) and the class containing the called method (the supplier).

B.3.7 Instantiation using New

A method of one class may call a constructor of another concrete class. This causes a class dependency between the class containing the calling method (the client) and the class containing the constructor (the supplier).

B.3.8 Call to Interface Method

A method of one class may call a method defined within an interface. This causes a class dependency between the class containing the calling method (the client) and the interface containing the called method (the supplier).
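The calling dependencies of B.3.5 to B.3.8 can be sketched with the following invented types; the comments mark which case each statement corresponds to.

interface Logger {
    void log(String message);
}

class Formatter {
    String format(String s) { return s.trim(); }
    static Formatter defaultInstance() { return new Formatter(); }
}

public class ReportWriter {
    void write(Logger logger, String text) {
        Formatter f = Formatter.defaultInstance(); // B.3.6: static method call -> Formatter
        Formatter g = new Formatter();             // B.3.7: instantiation using new -> Formatter
        logger.log(f.format(text));                // B.3.5: method call -> Formatter
                                                   // B.3.8: call to interface method -> Logger
    }
}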

B.3.9 Explicit Call of Superclass Constructor

A constructor of a subclass may explicitly call the constructor of the direct superclass using the Java keyword super. This causes a class dependency between the subclass (the client) and the superclass (the supplier).

B.3.10 Calls to Overridden or Shadowed Methods

A method of a subclass may call overridden or shadowed methods of the superclass using the keyword super. This causes a class dependency between the subclass (the client) and the superclass (the supplier).

B.4 Dependencies Involving Initializers and Inner Classes

B.4.1 Static Initializer

A class may contain one or more static initializers. A static initializer has neither a name nor a parameter and is executed once when the class is initialized. A static initializer is assigned a unique name (like static_init_1) and treated like any other static method. If a static initializer accesses a static member of another class, this causes a class dependency between the class containing the static initializer (the client) and the class containing the accessed member (the supplier).

B.4.2 Instance Initializer

A class may contain one or more instance initializers. An instance initializer has neither a name nor a parameter and is executed during the construction of each object. An instance initializer is assigned a unique name (like instance_init_1) and treated like any other method. On member level there is a dependency between each constructor of a class (the client member) and the method representing the instance initializer(s) (the supplier member). If an instance initializer accesses a member of another class, this causes a dependency between the class containing the instance initializer (the client) and the class containing the accessed member (the supplier).

B.4.3 Inner Class

A class may contain one or more inner classes. If a method of an inner class accesses a member of a class other than the containing class, this dependency is attributed to the containing class (as the client).
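The initializer and inner-class cases of B.4 are illustrated below with invented classes; the comments describe how the dependencies would be attributed.

class Registry {
    static int defaultSize = 16;
}

class Connection {
    void open() {}
}

public class Cache {

    static int capacity;
    Connection connection;

    // B.4.1: static initializer (treated like a static method, e.g. static_init_1);
    // the access to Registry.defaultSize is a dependency Cache -> Registry.
    static {
        capacity = Registry.defaultSize;
    }

    // B.4.2: instance initializer (treated like a method, e.g. instance_init_1);
    // the instantiation is a dependency Cache -> Connection.
    {
        connection = new Connection();
    }

    // B.4.3: inner class; the call connection.open() is a dependency on
    // Connection that is attributed to the containing class Cache.
    class Refresher {
        void refresh() {
            connection.open();
        }
    }
}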


Appendix C  Coupling Metrics

The online help of the IDE Together*, Version 6.0, provides the following descriptions for the coupling metrics it calculates:

CBO - Coupling Between Objects
Represents the number of other classes to which a class is coupled. Counts the number of reference types that are used in attribute declarations, formal parameters, return types, throws declarations and local variables, and types from which attribute and method selections are made. Primitive types, types from the java.lang package and supertypes are not counted.

DAC - Data Abstraction Coupling
Counts the number of reference types used in attribute declarations. Primitive types and supertypes are not counted. In Java, types from the java.lang package are also treated like primitive types and are not counted.

FO - FanOut
Counts the number of reference types that are used in attribute declarations, formal parameters, return types, throws declarations and local variables. Simple types and supertypes are not counted.

MIC - Method Invocation Coupling
This metric measures the (relative) number of other classes to which a certain class sends messages. Definition:

MICnorm = nmic / (N - 1)

where N is the total number of classes defined in the project and nmic is the number of classes to which messages are sent.

* Together is a registered trademark of the Borland Software Corporation.
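As a rough, self-made illustration of what these counters look at (not taken from the Together documentation), consider the following invented class; the comments note which declarations the individual metrics would inspect according to the descriptions above.

class Account {}
class Money {}
class TransferFailedException extends Exception {}

public class TransferService {

    Account source;  // attribute declaration: inspected by CBO, DAC and FO

    // formal parameter, return type and throws declaration: inspected by CBO and FO
    Money transfer(Account target, int amount) throws TransferFailedException {
        Money result = new Money();  // local variable declaration: inspected by CBO and FO
        String note = "ok";          // java.lang type: excluded by all of the metrics above
        return result;
    }
}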

RFC - Response For Class
The size of the response set for the class includes methods in the class's inheritance hierarchy and methods that can be invoked on other objects. A class that provides a larger response set is considered to be more complex and to require more testing effort than one with a smaller overall design complexity. This metric is calculated as 'Number of Local Methods' + 'Number of Remote Methods'.

VOD - Violations of Demeter's Law
The following definitions used to define the metric VOD are from [Mari98].

Definition 1: Method M is a client of method f attached to class C, if inside M message f is sent to an object of class C, or to C. If f is specialized in one or more subclasses, then M is only a client of f attached to the highest class in the hierarchy.

Definition 2: Method M is a client of class C, if M is a client of some method attached to C. If M is a client of class C then C is a supplier to M. In other words, a supplier class to a method is a class whose methods are called in the method.

Definition 3: A class C1 is an acquaintance class of method M attached to class C2, if C1 is a supplier to M and C1 is not one of the following: (1) the same as C2; (2) a class used in the declaration of an argument of M; (3) a class used in the declaration of an instance variable of C2.

Definition 4: A preferred-acquaintance class of method M is either (1) a class of objects created directly in M (direct creation means that the object is created via the operator new), or (2) a class used in the declaration of a global variable used in M.

Definition 5: Class B is called a preferred-supplier to method M (attached to class C) if B is a supplier to M and one of the following conditions holds: (1) B is used in the declaration of an instance variable of C, (2) B is used in the declaration of an argument of M, including C and its superclasses, (3) B is a preferred-acquaintance class of M.

The class form of Demeter's Law has two versions: a strict version and a minimization version.

The strict form of the law states that every supplier class of a method must be a preferred supplier. The minimization form is more permissive and only requires the number of acquaintance classes of each method to be minimized. The definition of this metric is based on the minimization form of the Law of Demeter. Based on the concepts defined there, and remembering that the minimization form of Demeter's Law requires that the number of acquaintance classes be kept low, the VOD metric is defined as follows:

Definition of VOD: Given a class C and A the set of all its acquaintance classes, VOD(C) = |A|.

Informally, VOD is the number of acquaintance classes of a given class.
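Assuming the definitions above, the following invented example shows how an acquaintance class arises and thus contributes to VOD: the chain car.getEngine().start() makes Engine a supplier of prepare() although Engine is neither an argument type of prepare() nor an instance-variable type of Garage, nor created directly in prepare().

class Engine {
    void start() {}
}

class Car {
    private Engine engine = new Engine();
    Engine getEngine() { return engine; }
}

public class Garage {
    // Engine is an acquaintance class of prepare() and would count towards VOD(Garage).
    void prepare(Car car) {
        car.getEngine().start();
    }
}

Replacing the chain by a method on Car (for example car.startEngine()) would remove the acquaintance class and lower VOD for Garage.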


Appendix D  Testability and Other Quality Characteristics

D.1 Overview

Testability is a distinct quality characteristic in a number of software quality models described in the literature, including ISO 9126 [ISO9126-2]. Some of these models are accompanied by a description of the relationships between the different quality characteristics. In a given software design it is not always possible to optimize all quality characteristics at the same time. The crosstable in Figure App.3 (from [Glas92]) shows possible trade-offs between quality characteristics.

[Figure App.3 Trade-offs in achieving quality characteristics: a crosstable over portability, reliability, efficiency, human engineering, testability, understandability, and modifiability, in which an 'x' marks a pair of characteristics that may have to be traded off against each other; the individual cell entries are not reproduced here.]

Another crosstable, as part of a quality model, is given in [Wieg99] (Figure App.4). According to this model the quality characteristics flexibility, maintainability, portability, reliability, and reusability have a positive effect on testability whereas efficiency,

integrity, and usability have a negative one. Testability itself has a positive effect on availability, flexibility, maintainability, reliability, and usability, but a negative effect on efficiency (according to this model).

[Figure App.4 Relationships between selected quality characteristics: a crosstable from [Wieg99] over availability, efficiency, flexibility, integrity, interoperability, maintainability, portability, reliability, reusability, robustness, testability, and usability, in which entries indicate positive or negative effects of one characteristic on another; the individual cell entries are not reproduced here.]

Another quality model, by Perry in [Gill92], describes a positive relationship of testability to correctness, reliability, usability, and maintainability and a negative relationship to efficiency.

D.2 Testability and Maintainability

Testability is defined as a subcharacteristic of maintainability in the ISO model of software quality [ISO9126-2] because regression testing is a main task during software maintenance.

Testability and changeability: Changeability is another subcharacteristic of maintainability according to [ISO9126-2]. Testability improves changeability because it makes it easier to run regression tests after program changes. Hard-to-test applications are difficult to modify

[Fowl01]. This is one motivation for test-first design in Extreme Programming, which relies heavily on frequent refactorings.

Note: Maintainability is not only a characteristic of the software under test but also of the related test cases. If changeability of the software is an important requirement, then one should try to improve the maintainability of the test cases as well (which are often manifested as program code, too).

D.3 Testability and Reliability

A high level of testability improves the accuracy of reliability predictions based on reliability models [Haml93] [Haml96] because these models often use the number of faults found during testing as a basis for the prediction. Voas defined testability as the probability that existing errors will be revealed during testing according to some testing scheme [Voas95d]. This led some authors [Bert96] [Stri95] to the wrong conclusion that a higher level of testability reduces reliability because failures will be observed more often. The important point is: from a testing point of view we want to observe as many failures as possible during operation, while from a user point of view no failures should be observed. What we actually want is some failure indication to a system tester or operator while maintaining system functionality for the user at the same time. If this can be achieved (e.g. by additional test interfaces), then there is no contradiction between high levels of testability and high levels of reliability. Yang et al. use testability together with test coverage to estimate reliability [Yang98].

Testability and fault tolerance: Fault tolerance is a subcharacteristic of reliability and describes the capability of the software to maintain a specified level of performance in cases of software faults or of infringement of its specified interface. Fault tolerance during test execution means that longer sequences of test cases can be executed because fewer unrecoverable failures occur. Fault tolerance therefore improves testability. Fault tolerance not only improves testability, but testability is also relevant to fault tolerance: redundant components in fault-tolerant

design should have high testability because they should be of high quality and therefore be tested thoroughly (see [MIL2165]).

D.4 Testability and Performance

Testability and performance are likely to have negative effects on each other. Performance tuning makes the code more complex and difficult to understand and often reduces the correspondence between code and design documents. This makes test tasks more difficult. Improving testability often means adding additional code (like assertions or code related to defensive programming), which reduces performance during software execution. Sometimes a trade-off may be necessary between testability and performance (see [Lako96]).

D.5 Testability and Reusability

Components that are intended for reuse should have high testability if they are to fulfill high quality requirements (and therefore be tested thoroughly) or if they have to be tested multiple times in different usage contexts. Reuse can have a positive as well as a negative effect on testability: reuse has a positive effect on testability by reducing the number of classes to be tested, but reuse (within the same project) can have a negative effect on testability if it leads to additional coupling, which reduces e.g. the flexibility to schedule test tasks (see [Lako96]).

D.6 Testability and Traceability

Traceability from requirement documents to design and implementation artifacts is necessary for test case definition. Traceability from test cases back to requirement documents is necessary to understand the rationale behind the test cases.

D.7 Testability and Usability

The relationship between testability and usability is only weak. A positive relationship between testability and usability may exist, for example, if:
- additional system output (e.g. related to the state of business objects, memory consumption, or system progress) not only improves the feedback to users about the result of their actions but improves observability in the testing context as well, or
- the state behavior of business objects is simple, which makes it easier to understand the pre- and postconditions of user interactions as well as to design related test cases.

Usability can have a negative effect on testability if
- additional functionality is needed to support usability which has to be (implemented and) tested, or
- user-configurable menus and system parameters make test automation more difficult, or
- usability features like pop-up windows of messaging systems disturb the recording of test scripts while using a capture-and-replay test tool.


239 223 Appendix E Index A B abstract class. See class abstract syntax tree abstraction...29 abstractness...12 access statement...79 ACD. See metric ACDh. See metric actor...38, 39 agile software development...4 alternative flow...39, 40 analysis...12, 34, 42, analysis class model...51, 59 analysis package...52 analysis specification...51 analytical measure...13 application problem... 6, 7, 8, 37 architecture architectural layer...55 architectural view...52 layered architecture...52, 174, 178 artifact. See software artifact association...25, 62 association class...75 mandatory association...43, 44, 45 optional association...43, 45 assumption...72 attribute...38, 79, 123, 207 final attribute...60 non-private attribute...62 static attribute...96, 208 authentication...51 Average Class Dependency. See metric balanced binary tree...82 banking application...45, 48 behavior...8 black-box testing. See testing

240 224 Appendix E boxplot built-in-test...6 business logic...52, 53, 74 business process...38 C capture and replay...49 case study cast...73, 79, 207 catch-clause category change impact...20 changeability...69, 218 characterization checklist...40, 76 class...60, 201 abstract class...10, 29, 69, 91, 201 analysis class...56 anonymous class association class...75, 175, 178, 184 class association...10 class dependency. See dependency class diagram...10, 13, 99 class interaction...25 class library class model class under test client class...10 concrete class...10, 29, 60, 69, 201 control class...48, 51, 60, 175 domain class...38, 41, 45, 46 entity class...52, 60, 75, 175, 178 factory class...91, 95 final class...60 framework class...55 hard-to-test class...54, 56, 57, 70, 71, 72, 201 inner class...121, 211 interface class...51, 60 supplier class...10 test framework class...69 test-critical class...53, 56, 71, 202 test-sensitive class...54, 55, 56, 57, 70, 71, 72, 74, 202 wrapper class class dependency...10 client class. See class cohesion...70 command pattern...93 compilation...22, 82, 130 recompilation...24 compile dependency...10

241 Appendix E 225 complexity...6, 8, 11, 37, 42, 59, 126 component-based...22 concrete class. See class configuration file...91 configuration management consistent...8 constant...96 constructive measure...13 constructor...64, 91, 95, 121, 181, 184, 209 default constructor...55 pseudo-constructor...64, 95, 115, 181, 184, 202 context object...91 context. See test context control class...52 control flow...18, 42 controllability...8, 54 correlation...161, 168 coupling...6, 8, 11, 12, 26, 28, 70, 126, 130, 171 coverage white-box test coverage...5 create critical cross-cutting code...78 CRUD matrix...46, 47 customer...6, 7, 8, 42, 43, 51, 78 CUT...18, 62, 67, 73, 201 cycle. See dependency cycle D database...19, 54, 74, 75, 105, 175, 178, 191 debugging...19, 54, 58, 74 deep copy...116, 117 defect defect detection technique...6 defect tracking system demotion...92 dependency...8, 9 11 adjacent dependency...89, 176 class dependency...10, 87, 201 compile dependency...10 definition...9 dependency cycle 17, 18, 21, 24, 56, 70, 78, 83, 84, 85, 88, 92,...107, 117, 133, 153, 160, 179, 201 dependency double dependency graph...80, 94, 99, , 153, 176, 201 dependency graph model dependency structure... 24, 80, 85 design dependency...59, 61 direct dependency...10 dynamic dependency...61

242 226 Appendix E feedback dependency...85, 87, 118, 134, 144, 186 feedback dependency set...84, 117 hard-wired dependency..62, 64, 65, 67, 71, 94, 95, 111, 127, 176,...182, 201 hub dependency implementation dependency...78 indirect dependency...10, 26, 71 local dependency...12, 78 logical dependency...41, 44, 48 model dependency...10 redundant dependency reflexive dependency...107, 112 semantic dependency... 9, 79 semi-hard-wired dependency...65, 69, 96, 97, 112, 127, 176, 183, source code dependency static dependency...61, 79 syntactic class dependency...10 syntactic dependency... 61, 71, 79 test case dependency test-critical dependency...34, 53, 56, 78, 85, 89, 171, 202 type dependency...66, 69, 95, 98, 176, 203 Dependency Inversion Principle...28 design...10, 12, 13, 34, 48, 59 76, 77, 89 design change...78 design class model...60 design diagram design document...9 design erosion...78 design for testability...5, 33 design guideline...25, 34, 69 design model...59 design problem...174, 187 design specification...59 Design2Test...151, 201 designer...11 destroy...47 developer... 7, 8, 11, 37 development development process...53 development activity...12, 34, 38, 57 analysis...12 design...12 implementation requirements capture...12 test...12 diagnostic capability...6 direct dependency. See dependency direct test. See test distributed system...13

243 Appendix E 227 domain class model...38, 51 domain model...37 driver...16, 17, 18, 202 DSTMh. See metric E F G EDIFACT effort...24, 90 encapsulation...70, 74 entity...9, 123 entity relationship model...38 error...6, 49 escalation...93 evolution. See software evolution exception... 17, 19, 54, 79, 206 exceptional flow...39, 40 executable...4 execution path...62 extend relationship...41, 42, 47 external software characteristic...89 external system... 7 facàde...175, 178 factory factory class...26, 71, 74, 183 factory method...26 failure failure mode failure report...6 fault...5 fault isolation...15, 20, 22, 27, 55, 67, 68, 70, 126 fault tolerance feedback dependency. See dependency final...64 final attribute...60 final class...60 final method formalism framework persistency framework...55, 64, 74, 185 test framework class...69 functionality...11, 37, 78, 92 generalization...79 generalization relationship...41 global variable...17 goal. See test goal graphical user interface...16

244 228 Appendix E greedy algorithm GUI...19 GUI-testing...42 guideline...12, 13, 195 design guideline...69 H hard-to-test class. See class hard-wired dependency. See dependency heuristic...29, 144 hub...27 I implementation...12, 34, 48, 51, implementation detail...63, 66 implementation issue...37 implements relationship...79, 112, 121, 205 implicit convention...9 ImproveT include relationship... 41, 42, 44, 47 increment... 15, 53, 55, 76 indirect dependency. See dependency indirection information hiding...70 inheritance...62, 68, 70, 79, 98, 112, 121, 181, 205 inheritance relationship...47 initialization...19 input domain...54 input parameter inspection...3, 159 instance...44 instance initializer...121, 210 instanceof...73, 79, 207 instantiation...19, 46 instrumentation...54 integration strategy integration test. See test interaction diagram...52, 60 interface...10, 29, 60, 69, 71, 91, 95, 201 interface class...52 internal structural software characteristic...89 Internet...19, 54 J K Java...13 key data...45

245 Appendix E 229 L M large system...5, 9, 12 Law of Demeter... 29, 214 layer architectural layer...55 layered design...70 life-time...6 link...72, 95 Liskov Substitution Principle...28, 73 local variable logging...49, 74 main flow... 39, 40, 44 maintainability...52, 219 maintenance...6, 20, 70, 73, 74, 78 maintenance engineer...11 mandatory association. See association method abstract method...55 final method get method...96 method call...17 method signature...73 overridden method...115, 210 set method...96 shadowed method static method...71, 96, 184 metric...25, 35, 76, 94, 99, , 159 ACD...24, 81, 86, 129 ACDh ACDsh Average Class Dependency...81 CBO...26, 127, 141, 213 CBOi...27, 141 CD CDh CDsh change metric DAC...169, 213 design metric...12 direct metric DSTM...135, 179 DSTMh...97, 136, 183 FO...169, 213 indirect metric metric tool...35 MIC...169, 213 NCDC...84, 133 NFD...84, 87, 134 racd...86, 139, 178

246 230 Appendix E racdh...95, 142 racdin...146, 187 racdout...146, 187 racdsh reduction metric...86, 87, 95, 124, 138 RFC...169, 214 rncdc rnfd...87, 144 test process metric...101, 102 VOD...169, 214 mock...16, 18, 67, 73, 202 mock-up...38 model dependency. See dependency monitor...74 multiplicity...45 N O P navigation...42 cyclic navigation dependency NCDC. See metric NFD. See metric non-ambiguous...8 notation...10, 106 object context object...91 object construction...74 object use...74 Object Constraint Language object-oriented...8, 12 object-oriented system...10 observability...8, 54 observable...5, 18 observer pattern...93 OCL on demand...72 operation...38 oracle. See test oracle package...12, 29, 66, 152 analysis package...56 package boundary...71 parameter...19, 55, 78, 79, 91, 95 parameter object...68 performance...191, 220 persistency...52, 75 persistency framework...55 polling... 93

247 Appendix E 231 polymorphism...15, 61 postcondition...39, 40, 41, 42, 46, 48 precondition...39, 40, 41, 44, 48 prediction principles of good design...11 private...55 problem...6 problem domain...38 problem report...82 problem testability. See testability protocol manager prototype...55, 75 proxy...72 pseudo-constructor. See constructor Q R quality... 6, 23 software quality... 4 quantitative...8 racd. See metric racdh. See metric read...47 real-time system...13 recompilation...24 reduction metric. See metric re-entrance...20, 21 refactoring...25, 29, 85, 87, 90, 95, 98, 139, 179, 185, 202 reference... 79, 184 reflection...71 regression testing. See testing relationship...9 reliability repeatable...5 requirement...5, 11 customer requirement functional software requirement non-functional requirement...37, 53, 59 performance requirement requirement specification document...4 requirements capture...12, 34, requirements engineer...37, 43 requirements specification...4, 51 requirements specification document...4, 8, 9 technical requirement...52, 57 testability requirement... 13, 53, 57, 61 user requirement...3, 37 resource...4, 11, 16 global resource , 178, 184

248 232 Appendix E resource consumption...74 responsibility return parameter...19 return value...17, 62, 79, 96, 206 reusability...52, 220 reuse...53, 70, 220 test case reuse review... 3, 6, 40, 76 risk...6 rnfd. See metric role root cause analysis run-time...61, 91 S scale absolute scale interval scale ordinal scale ratio scale scale type scenario...38 semantic...73 semantic dependency. See dependency semantic dependency. See dependency sensitivity analysis...35, 85 separate...5 separation of concerns...6, 8 side effect...25 singleton...26, 64, 71, 91, 96, 175, 184 size...6, 126 software software artifact... 4 software developer. See developer software evolution...4 source code...4, 10, 13, 62, 63, 73, 77, 80, 152 source code dependency. See dependency specification...10 stability...12, 29 state...18, 19, 20, 22, 40, 54, 62 state diagram...52, 60 state model...38 static...64 static analysis...71, 80 static initializer...121, 210 stereotype...64 structure...8 stub...16, 17, 18, 21, 49, 54, 67, 69, 72, 74, 85, 118, 202 subclass...55, 60, 65, 79 super...210

249 Appendix E 233 superclass...60, 63 supplier class. See class syntactic reference...9 system build system external system... 37, 38, 49 system resource...26 system startup...72 system test. See test system trace...20 systematic approach...11, 12, 33 T Tarjan Algorithm...110, 118 technical infrastructure...73 telecommunication...33 telecommunication software...13 template test...12, 34, black-box test case...18 class under test direct test...54 integration test...15, 17, 25, 77, 84, 133 maintenance test...58 regression test...49 system test...15, 67, 77 test activity...15 test automation...4, 21, 26 test case...4, 5, 6, 8, 17, 74 test case definition...40 test case maintenance...20, 42 test context...4, 8 test coverage criterion...16, 17 test criteria...5 test design...16, 44, 70 test effort test execution...16, 62 test follow-up...16 test goal...4, 5, 34, 101 test in isolation...12, 18, 66, 75, 84, 95, 128, 182 test level...15 test maintenance...16 test object...15 test object testability. See testability test oracle...7, 8, 54, 202 test order...17, 42 test plan...40 test planning...16 test preparation test problem... 7, 11, 12, 53, 89

250 234 Appendix E test problem report test process metric test progress...72 test reference...6, 8 test reference testability. See testability test result...18, 19, 20 test result analysis...16, 46 test setup...19, 27, 42, 45, 55, 62, 70 test task...11, 15, 16, 35, 101 test-critical class. See class test-critical dependency. See dependency test-ready...22 test-sensitive class. See class unit test...15, 17, 21, 58, 62, 67, 77, 84 white-box test...54 white-box test case...18 testability...4 8, 10, 202 definition...4 problem testability...7, 37, 40 test object testability...7, 8 test reference testability...7, 8 testability engineering...11, 23 testability evaluation testability feature...61 testability requirement. See requirement testable testable requirement...8, 40 Test-Driven Development... 4, 24 tester...45, 62, 65, 68 Test-First Design...4, 12, 24 testing , 6, 202 black-box testing...5 dynamic testing... 3, 10 regression testing...3, 6 static testing... 3 white-box testing...5 third-party company...49 third-party framework...54 third-party provider...54 threshold value...82 throw-clause time critical path...53 Together trace...20 traceability trade-off...58 training transaction...74 transitive...10, 12 transparency...73

251 Appendix E 235 type...69, 71, 207 type declaration...54, 79, 113, 120, 206 type dependency. See dependency U V W X Y UML...9, 10, 13, 39, 41, 43, 64, 65, 99, 203 UML-model...8 unit unit test. See test unstructured...5 update...47 usability...49, 221 use case...37, 44, 45, 53 textual use case description...38, 47 use case criticality...38, 40, 43 use case dependency use case diagram...48 use case frequency...38, 40, 43 use case model... 10, 13, 37, 52 use case reference...46 use case relationship...10 user...37, 39, 49 user interface...16 validation... 5, 38 variable local variable...79 visibility scope...71 what-if analysis white-box test coverage. See coverage white-box testing. See testing work package...70 XML Xtreme Programming...4, 24 Yoyo-Effect...68


253 237 Appendix F Bibliography [Adam01] [Alha97] [Aris02] [Armo00] [Bach97] [Bahi] [Baud02] [Beck01] [Beck03] [Bert96] [Bind94] [Bind95c] [Bind96b] [Bind99] Ed Adams and Sam Guckenheimer. Achieving quality by design - part II: Using UML. White-paper by Rational. Available from Internet: < Ghassan Al-Hayek, Yves Le Traon, Chantal Robach. Impact of system partitioning on test cost, IEEE Design & Test of Computers, pp , Jan Eric Arisholm. Dynamic coupling measures for object-oriented software. In Proceedings of 8th International Symposium on Software Metrics (Metrics 2002),Ottawa, June Frank Armour, Granville Miller. Advanced use case modeling: software systems. Addison Wesley, ISBN James Bach. Attributes of software testability Available from Internet: < A. Terry Bahill, Frank F. Dean. Discovering system requirements. Available from Internet: < B. Baudry, Y. Le Traon, G. Sunye, J-M. Jézéquel. Testability analysis of a UML class diagram. In Proceedings of 8th International Symposium on Software Metrics (Metrics 2002),Ottawa, June K. Beck. Aim, Fire. In IEEE Software, September/October 2001, pp K. Beck. Test-driven development by example. Addison-Wesley, Antonia Bertolino, Lorenzo Strigini. On the use of testability measures for dependability assessment, IEEE Transactions on Software Engineering, vol. 22, pp , Feb Robert V. Binder. Design for testability in object-oriented systems. Communications of the ACM, vol. 37, pp , Sept Robert V. Binder, Trends in testing object-oriented software, IEEE Computer, pp , Oct Robert V Binder. The FREE approach to testing object-oriented software: an overview Available from Internet: < Robert V. Binder. Testing object-oriented systems: models, patterns, and tools. Addison Wesley, 1999.

254 238 Appendix F [Bria96] [Bria96b] [Bria99c] [Bria01] [Bria02] [Bühl02] [Chen76] [Chid94] [Clar01] [Cold96] [Coll97] [Deme00] [DOD2167A] [Dorm97] [Drab99] Lionel Briand, John Daly, Jürgen Wuest. A unified framework for coupling measurement in object-oriented systems. Fraunhofer Institute for Experimental Software Engineering, Germany, ISERN Technical Report. Lionel C. Briand, Sandro Morasca, and Victor R. Basili. Propertybased software engineering measurement. IEEE Transactions on Software Engineering, vol. 22, no. 1, Jan Lionel C. Briand, John W. Daly and Jürgen K. Wuest. A unified framework for coupling measurement in object-oriented systems. IEEE Transactions on Software Engineering, vol. 25, no. 1, January/February, 1999, pp Available from Internet: < L. C. Briand, Y. Labiche, and Y. Wang. Revisiting strategies for ordering class integration testing in the presence of dependency cycles. Carleton University, Ottawa, Canada, Technical Report TR SCE-01-02, L. Briand, S. Morasca, V. Basili. An operational process for goaldriven definition of measures. IEEE Transactions on Software Engineering, vol. 28, no. 12, December A. Bühl and P. Zöfel. SPSS Version 11: Einführung in die moderne Datenanalyse unter Windows. Addison Wesley, P.P. Chen. The entity-relationship model: toward a unified view of data. ACM Transactions on Database Systems, vol. 1, no. 1 pp. 9-36, Shyam R. Chidamber and Chris. F. Kemerer. A metrics suite for object oriented design. IEEE Transactions on Software Engineering, vol. 20, no. 6, June JDepend, written by Mike Clark, Clarkware Consulting, Inc. Available from Internet: < Jens Coldewey. Decoupling of object-oriented systems. Tech. rep., sd&m Muenchen, Dec Terence M. Colligan. Nine steps to delivering defect-free software Available from Internet: < Serge Demeyer, Stéphane Ducasse, and Oscar Nierstrasz. Finding refactorings via change metrics. In Proceedings of OOPSLA'2000 (International Conference on Object - Oriented Programming Systems Languages and Applications), pp DOD-STD-2167A, Defense system software development. Department of Defense, Washington, D.C February Misha Dorman. C++: "it is testing, Jim, but not as we know it". Presented at EuroSTAR 97, Edinburgh November Rodger Drabick. On-track requirements: How to evaluate requirements for testability. STQE Magazine, Available from Internet: <

255 Appendix F 239 [Eade93] P. Eades, X. Lin, and W. F. Smyth. A fast & effective heuristic for the feedback arc set problem. In Information Processing Letter, no. 47, 1993, pp [Edle01] Håkan Edler and Jonas Hörnstein (editors). BIT in software components. Project report of the EC 5th Framework Project IST Available from Internet: < [Faya99] M. E. Fayad, D. C. Schmidt, and R. E. Johnson (Eds). Building application frameworks: object oriented foundations of framework design, Wiley & Sons, 1999, ISBN [Feat02] Michael Feathers. Working effectively with legacy code. White-paper. Available from Internet: < [Fent96] N. E. Fenton and S. L. Pfleeger. Software Metrics: A rigorous and practical approach. (2nd edition), Thomson Computer Press, [Fire01] Donald Firesmith and Brian Henderson-Sellers. The OPEN process framework: An introduction. Addison-Wesley, December 2001, ISBN [Flat01] David Flater. Debugging agent interactions: a case study. In Proceedings of the 6th ACM Symposium on Applied Computing, March [Fowl00] M. Fowler. Refactoring - Improving the design of existing code. Addison-Wesley, Object Technology Series, [Fowl01] Martin Fowler. Separting user interface code. IEEE Software, March/ April 2001, pp [Fowl01b] Martin Fowler. Reducing coupling. IEEE Software, July/August 2001, pp [Free02] Steve Freeman and Paul Simmons, "Retrofitting unit tests," presented at Third International Conference on extreme Programming and Agile Processes in Software Engineering, Alghero, Sardinia, Italy, May, [Gamm94] Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design patterns: Elements of reusable object-oriented software. Addison-Wesley, [Gelp96] David Gelperin, Alden M. Hayashi. What s your testability maturity? Application Development Trends, May 1996, pp [Geor02] B. George and L. Williams: An initial investigation of Test-Driven Development in industry. ACM Symposium on Applied Computing, March Available from Internet: < [Gilb93] T. Gilb, D. Graham. Software inspection, Addison Wesley, [Gill92] Alan Gillies. Software quality: theory and management. Chapman & Hall Computing, [Glas92] Robert L. Glass. Building quality software. Prentice Hall, Englewood Cliffs, New Jersey, ISBN

256 240 Appendix F [Gupta94] [Gupta95] [Gurp01] [Haml93] [Haml96] [Hatt99] [Hend96] [Hill01] [IEEE90] [ISO9126-2] [ISO9126-3] [Jaco92] [Jung99] [Jung02] [Jung02b] [Jung02c] Suresh C. Gupta, Mukul K. Sinha. Improving software testability by observability and controllability, in Proceedings of IFIP 13th World Computer Congress, Hamburg, Germany, 28 August - 2 September, 1994, North-Holland, S. C. Gupta and M. K. Sinha. Impact of software testability considerations on software development life cycle. In Proceedings of IEEE First International Conference on Software Testing Reliability and Quality Assurance, Piscataway, NJ, 1995, pp ISBN J. van Gurp and J. Bosch. Design, implementation and evolution of object oriented frameworks: concepts and guidelines. Software Practice and Experience, vol. 31, p , 2001 D. Hamlet and J. Voas. Faults on its sleeve: amplifying software reliability testing. In Proceedings of ISSTA 93, Boston, June, 1993, pp Dick Hamlet. Predicting dependability by testing. In Proceedings of ISSTA 96, San Diego, CA, USA, 1996, pp Les Hatton. Testing, complexity, coupling, diagnosis & repetitive failure. Presented at EuroSTAR 99, Barcelona, Spain, Brian Henerson-Sellers. Object-oriented metrics: measures of complexity. Prentice Hall PTR, Michael Hill. The four keys to test-first design: laziness, resentment, xenophobia, and guile. Presented at ICASE, October Available from Internet: < IEEE Std Standard Glossary of Software Engineering Terminology, ISO/IEC Software product quality - external metrics. ISO/IEC Software product quality - internal metrics. Ivar Jacobson, Magnus Christerson, Patrik Jonsson, and Gunnar Övergaard. Object-oriented software engineering: a use case driven approach. Addison-Wesley, Stefan Jungmayr. Reviewing software artifacts for testability. Presented at EuroSTAR 99, Barcelona, Spain, Nov. 8-12, Stefan Jungmayr. Design for testability. In Proceedings of CON- QUEST 2002, Nuremberg, Germany, September 18th-20th, 2002, pp Stefan Jungmayr. Identifying test-critical dependencies In Proceedings of IEEE International Conference on Software Maintenance. Montréal, Canada, October 3-6, 2002, pp Dae-Young Jung, Sung-Ho Kwak, Moon-Key Lee. Reusable embedded debugger for 32bit RISC processor using the JTAG boundary scan architecture. In Proceedings of The Third IEEE Asia-Pacific Conference on ASICs, August 6-8, 2002, Grand Hotel, Taipei, Taiwan R.O.C. Available from Internet: <

257 Appendix F 241 [Kafu] [Kahl98] [Kitc95] [Koen99] [Kraw98] [Kung93] [Kung94b] [Kung94c] [Labi00] [Lait00] [Lako96] [Lieb88] [Link01] [Link02] [Lisk88] [Mari98] Dennis Kafura. Object-oriented software design and construction with C++. Available from Internet: < Bernd Kahlbrandt. Software-Engineering: objektorientierte Software- Entwicklung mit der Unified modeling language. Springer, Barbara Kitchenham, Shari Lawrence Pfleeger, and Norman Fenton. Towards a framework for software measurement validation. IEEE Transactions on Software Engineering, vol. 21, no. 12, December 1995, pp Andrew Koenig and Barbara E. Moo. the considered questionable. Journal on Object-Oriented Programming, May 1999, p. 73, 74, 77. Henryk Krawczyk, Bogdan Wiszniewski. Analysis and Testing of Distributed Software Applications, ch. Design for Testability, pp Research Studies Press Ltd., David Chenho Kung, Jerry Gao, Pei Hsia, J. Lin, Yasufumi Toyoshima: Design recovery for software testing of object-oriented programs. In Proceedings of Working Conference on Reverse Engineering, WCRE 1993, May 21-23, 1993, Baltimore, Maryland, USA, pp D. Kung, J. Gao, P. Hsia, F. Wen, and Y. Toyoshima, Change impact identification in object oriented software maintenance, in Proceedings of the Conference on Software Maintenance1994, pp , 1994 D. Kung, J. Gao, P. Hsia, Y. Toyoshima, and C. Chen, On Regression Testing of Object-Oriented Programs, 1994 Y. Labiche, P. Thévenod-Fosse, H. Waeselynck, and M.-H. Durand. Testing levels for object-oriented software. In Proceedings of ICSE 2000, pp Oliver Laitenberger. Cost-effective detection of software effects through perspective-based inspections. Fraunhofer IRB Verlag, 2000, ISBN John Lakos. Large-scale C++ software design. Addison-Wesley, ISBN K. Lieberherr, I. Holland, and A. Riel. Object-Oriented Programming: An Objective Sense of Style, in OOPSLA 88, 1988 Johannes Link. Einsatz von Mock-Objekten für den Softwaretest. Java Spektrum, no. 4, July/August 2001, pp Johannes Link. Unit Tests mit Java: Der Test-First-Ansatz. dpunkt- Verlag, Barbara Liskov. Data Abstraction and Hierarchy, SIGPLAN Notices, vol. 23, no. 5, May Radu Marinescu. An Object oriented metrics suite on coupling. Universitatea Politechnica Timisoara, Facultatea de Automatica si Calculatoare, Departamentul de Calculatoare si Inginerie Software. September, 1998.

258 242 Appendix F [Mart96b] Robert C. Martin. The Liskov Substitution Principle. C++ Report, March [Mart96c] Robert C. Martin. The Dependency Inversion Principle. C++ Report, May [Mart96e] Robert C. Martin. Granularity. C++ Report, Nov [Mart97] Robert C. Martin. Stability. C++ Report, Jan [Maso99] J. Mason and E. S. Ochotta. The application of object-oriented design techniques to the evolution of the architecture of a large legacy software system. In Proceedings of 5th USENIX Conference on Object- Oriented Technologies and Systems (COOTS 99), San Diego, California, USA, May 3-7, [Maxi03] E. M. Maximilien and L. Williams. Assessing test-driven development at IBM. In Proceedings of IEEE 25th International Conference on Software Engineering, 3-10 May 2003, Portland, Oregon, USA, pp , [McGr96] J. McGregor, A measure of testing effort, in Second USENIX Conference on Object-Oriented Technologies and Systems (COOTS), pp , [McGr96c] J.D. McGregor, B.A. Malloy, and R.L. Siegmund. A comprehensive program representation of object-oriented software. In Annals of Software Engineering, vol. 2, [Merl03] Edgar Merl. Testbarkeitsanalyse von Abhängigkeiten in objektorientierter Software. Master thesis, Fernuniversität Hagen, Germany, [MIL2165] MIL-STD-2165: Military standard testability program for electronic systems and equipment. January 26, [Muel02] M. Müller, O. Hagner. Experiment about Test-first programming. In Conference on Empirical Assessment In Software Engineering EASE 02, Keele, April Available from Internet: < ~exp/xp/ease02.ps.gz> [Over94] J. Overbeck, Integration Testing for Object-Oriented Software. PhD thesis, University of Technology, Vienna, Austria, 1994 [Pelk96] Ute Pelkmann. Testen von OO-Programmen aus der Sicht der Praxis. In Test, Analyse und Verifikation von Software (Monika Muellerburg, Andreas Spillner, and Peter Liggesmeyer, eds.), pp , GMD, [Pol00] Martin Pol, Tim Koomen, and Andreas Spillner. Management und Optimierung des Testprozesses: Praktischer Leitfaden für erfolgreiches Software-Testen mit TPI und TMap. dpunkt.verlag, April ISBN [Rain01] J.B. Rainsberger. Use your singletons wisely. IBM developerworks, July Available from Internet: <

259 Appendix F 243 [Rich98] Debra J. Richardson and Nancy S. Eickelmann. Testability models for risk assessment. In Proceedings of NASA s Workshop on risk management (WoRM), Nemacolin Woodlands Resort, Farmington, PA, October 26, Available from Internet: < research.ivv.nasa.gov/worm98/worm_98_proceedings.htm> [Robe96] Suzanne Robertson. An early start to testing: how to test requirements. In Proceedings of EuroSTAR 96, Amsterdam, December 2-6, [Roma85] Gruia-Catalin Roman. A taxonomy of current issues in requirements engineering. IEEE Computer, April 1985, pp [Roth99] Gregg Rothermel, Mary Jean Harrold, and Jeinay Dedhia. Regression test selection for C++ software. Technical Report , Computer Science Department, Oregon State University, January [Ruep97] Peter Rüppel. Ein generisches Werkzeug fuer den objektorientierten Softwaretest. PhD thesis, Fachbereich Informatik, Technische Universität Berlin, [RUP99] Jacobson Ivar, Booch Grady and Rumbaugh James. The Unified Software Development Process. Addison-Wesley, ISBN [Rupp02] Chris Rupp. Requirements-Engineering und -Management: Professionelle, iterative Anforderungsanalyse für die Praxis. 2nd Edition, Hanser, 2002, ISBN [Six03] H.-W. Six and Mario Winter. Software Engineering I - objektorientierte Softwareentwicklung. Course material of FernUniversität Hagen, Germany. [Skie97] S. S. Skiena. The algorithm design manual. Springer, [Spil00] Andreas Spillner. From V-Model to W-Model: Etablishing the whole test process. In Proceedings of Conquest 2000 (4th Conference on Quality Engineering in Software Technology), Sept , 2000, Nürnberg, pp , ISBN [Spil01] Andreas Spillner. Das W-Modell - Testen als paralleler Prozess zum Software-Entwicklungsprozess. In Proceedings of TAV16, Nordakademie Elmshorn, , Softwaretechnik-Trends, vol. 21, no. 1, [Spil03] Andreas Spillner, Tilo Linz. Basiswissen Softwaretest: Aus- und Weiterbildung zum Certified Tester. dpunkt.verlag, [Stac97] Dave A. Stacey. Lecture notes on software requirements specification. Guelph Natural Computation Research Group, University of Guelph, Ontario, Canada. October 07, Available from Internet: < [Stri95] Lorenzo Strigini and Antonia Bertolino. The flip side of designing for testability. IEEE Software, vol. 12, no. 5, p , [Szyp97] Clemens Szyperski. Component software: Beyond object-oriented programming. Addison-Wesley, ISBN

260 244 Appendix F [Taen89] David Taenzer, Murthy Ganti, and Sunil Podar. Object-oriented software reuse: the yoyo problem. in Journal of Object-Oriented Programming, September 1989, pp [Tarj72] R. E. Tarjan. Depth-first search and linear graphalgorithms. SIAM Journal on Computing, vol. 1, pp , [Trao00] Y. Le Traon, T. Jéron, J-M. Jézéquel, and P. Morel. Efficient objectoriented integration and regression testing. In IEEE Transactions on Software Engineering, vol. 49, no. 1, March 2000, pp [Tuur01] Lassi A. Tuura and Lucas Taylo. Ignominy: a tool for software dependency and metric analysis with examples from large HEP packages. In Proceedings of Computing in high enery and nuclear physics (CHEP 01), Beijing, China, September 3-7, [UML01] UML Semantics, version 1.4, Sept [Vers99] Gerhard Versteegen (edit.). Das V-Modell 97 in der Praxis. dpunkt.verlag, 1999, ISBN [Voas95b] Jeffrey Voas and Keith W. Miller. Software testability: The new verfication. IEEE Software, vol. 12, pp , May [Voas95d] Jeffrey Voas, J. Payne, and R. Mills. Software testability: An experiment in measuring simulation reusability, in Proceedings of the ACM SIGSOFT SSR 95, Seattle, April 29-30, [Voas97h] Jeffrey Voas. Half-Day-Tutorial: Software Testability. Submitted to ICSM 97, [Webs95] Bruce F. Webster. Pitfalls of object-oriented development. M&T Books, [Weyu82] Elaine J. Weyuker. On testing non-testable programs. The Computer Journal, vol. 25, no. 4, 1982, pp [Whit00] J. A. Whittaker. What is software testing? And why is it so hard?. IEEE Software, pp , Jan [Wieg99] Karl E. Wiegers. Software Requirements. Microsoft Press, [Wiki] Wikipedia. Available from Internet: < [Wint99] Mario Winter. Qualitätssicherung für objektorientierte Software: Anforderungsermittlung und Test gegen die Anforderungsspezifikation. PhD thesis, FernUniversität Hagen, Department for Software Engineering, Sept [Yang98] Mark C.K. Yang, W. Eric Wong, and Alberto Pasquini. Applying testability to reliability estimation. In Proceedings of the The Ninth International Symposium on Software Reliability Engineering, 4-7 November, 1998 Paderborn, Germany. Available from Internet: <


More information

The Unified Software Development Process

The Unified Software Development Process The Unified Software Development Process Technieche Universal Darmstadt FACHBEREICH IN-FORMAHK BLIOTHEK Ivar Jacobson Grady Booch James Rumbaugh Rational Software Corporation tnventar-nsr.: Sachgebiete:

More information

Human-Survey Interaction

Human-Survey Interaction Lars Kaczmirek Human-Survey Interaction Usability and Nonresponse in Online Surveys Herbert von Halem Verlag Bibliografische Information der Deutschen Bibliothek Die deutsche Bibliothek verzeichnet diese

More information

Management. Project. Software. Ashfaque Ahmed. A Process-Driven Approach. CRC Press. Taylor Si Francis Group Boca Raton London New York

Management. Project. Software. Ashfaque Ahmed. A Process-Driven Approach. CRC Press. Taylor Si Francis Group Boca Raton London New York Software Project Management A Process-Driven Approach Ashfaque Ahmed CRC Press Taylor Si Francis Group Boca Raton London New York CRC Press is an imprint of the Taylor St Francis Croup, an Informa business

More information

Software testing. Objectives

Software testing. Objectives Software testing cmsc435-1 Objectives To discuss the distinctions between validation testing and defect testing To describe the principles of system and component testing To describe strategies for generating

More information

Produktfamilienentwicklung

Produktfamilienentwicklung Produktfamilienentwicklung Bericht über die ITEA-Projekte ESAPS, CAFÉ und Families Günter Böckle Siemens CT SE 3 Motivation Drei große ITEA Projekte über Produktfamilien- Engineering: ESAPS (1.7.99 30.6.01),

More information

Globalization, Technologies and Legal Revolution

Globalization, Technologies and Legal Revolution Minderheiten und Autonomien Band 21 Francesco Palermo/Giovanni Poggeschi/Günther Rautz/Jens Woelk (eds.) Globalization, Technologies and Legal Revolution The Impact of Global Changes on Territorial and

More information

The Concern-Oriented Software Architecture Analysis Method

The Concern-Oriented Software Architecture Analysis Method The Concern-Oriented Software Architecture Analysis Method Author: E-mail: Student number: Supervisor: Graduation committee members: Frank Scholten [email protected] s0002550 Dr. ir. Bedir Tekinerdoǧan

More information

Design with Reuse. Building software from reusable components. Ian Sommerville 2000 Software Engineering, 6th edition. Chapter 14 Slide 1

Design with Reuse. Building software from reusable components. Ian Sommerville 2000 Software Engineering, 6th edition. Chapter 14 Slide 1 Design with Reuse Building software from reusable components. Ian Sommerville 2000 Software Engineering, 6th edition. Chapter 14 Slide 1 Objectives To explain the benefits of software reuse and some reuse

More information

Benefits of Test Automation for Agile Testing

Benefits of Test Automation for Agile Testing Benefits of Test Automation for Agile Testing Manu GV 1, Namratha M 2, Pradeep 3 1 Technical Lead-Testing Calsoft Labs, Bangalore, India 2 Assistant Professor, BMSCE, Bangalore, India 3 Software Engineer,

More information

Chapter 17 Software Testing Strategies Slide Set to accompany Software Engineering: A Practitioner s Approach, 7/e by Roger S. Pressman Slides copyright 1996, 2001, 2005, 2009 by Roger S. Pressman For

More information

Business Process Technology

Business Process Technology Business Process Technology A Unified View on Business Processes, Workflows and Enterprise Applications Bearbeitet von Dirk Draheim, Colin Atkinson 1. Auflage 2010. Buch. xvii, 306 S. Hardcover ISBN 978

More information

Andreas Cseh. Quantum Fading. Strategies for Leveraged & Inverse ETFs. Anchor Academic Publishing. disseminate knowledge

Andreas Cseh. Quantum Fading. Strategies for Leveraged & Inverse ETFs. Anchor Academic Publishing. disseminate knowledge Andreas Cseh Quantum Fading Strategies for Leveraged & Inverse ETFs Anchor Academic Publishing disseminate knowledge Cseh, Andreas: Quantum Fading : Strategies for Leveraged & Inverse ETFs, Hamburg, Anchor

More information

Proceedings of the 8 th Euro-Asia Conference on Environment and CSR: Tourism, MICE, Hospitality Management and Education Session (Part II)

Proceedings of the 8 th Euro-Asia Conference on Environment and CSR: Tourism, MICE, Hospitality Management and Education Session (Part II) Proceedings of the 8 th Euro-Asia Conference on Environment and CSR: Tourism, MICE, Hospitality Management and Education Session (Part II) Yanling Zhang (Ed.) Proceedings of the 8 th Euro-Asia Conference

More information

Object Oriented Design

Object Oriented Design Object Oriented Design Kenneth M. Anderson Lecture 20 CSCI 5828: Foundations of Software Engineering OO Design 1 Object-Oriented Design Traditional procedural systems separate data and procedures, and

More information

Module 10. Coding and Testing. Version 2 CSE IIT, Kharagpur

Module 10. Coding and Testing. Version 2 CSE IIT, Kharagpur Module 10 Coding and Testing Lesson 26 Debugging, Integration and System Testing Specific Instructional Objectives At the end of this lesson the student would be able to: Explain why debugging is needed.

More information

Kapitel 2 Unternehmensarchitektur III

Kapitel 2 Unternehmensarchitektur III Kapitel 2 Unternehmensarchitektur III Software Architecture, Quality, and Testing FS 2015 Prof. Dr. Jana Köhler [email protected] IT Strategie Entwicklung "Foundation for Execution" "Because experts

More information

The W-MODEL Strengthening the Bond Between Development and Test

The W-MODEL Strengthening the Bond Between Development and Test Andreas Spillner Dr. Spillner is working as Professor at the Hochschule Bremen (University of Applied Sciences) where he is responsible for software engineering and real time systems. Dr. Spillner has

More information

Berufsakademie Mannheim University of Co-operative Education Department of Information Technology (International)

Berufsakademie Mannheim University of Co-operative Education Department of Information Technology (International) Berufsakademie Mannheim University of Co-operative Education Department of Information Technology (International) Guidelines for the Conduct of Independent (Research) Projects 5th/6th Semester 1.) Objective:

More information

Universiti Teknologi MARA. Requirement Analysis Using UML Approach for Research Management System (RMS)

Universiti Teknologi MARA. Requirement Analysis Using UML Approach for Research Management System (RMS) C^tJ O19OO(^'J.Tfi^'i- Universiti Teknologi MARA Requirement Analysis Using UML Approach for Research Management System (RMS) Enamul Hasan Bin Rusly Thesis submitted in fulfillment of the requirements

More information

Development models. 1 Introduction. 2 Analyzing development models. R. Kuiper and E.J. Luit

Development models. 1 Introduction. 2 Analyzing development models. R. Kuiper and E.J. Luit Development models R. Kuiper and E.J. Luit 1 Introduction We reconsider the classical development models: the Waterfall Model [Bo76], the V-Model [Ro86], the Spiral Model [Bo88], together with the further

More information

SAP Enterprise Portal 6.0 KM Platform Delta Features

SAP Enterprise Portal 6.0 KM Platform Delta Features SAP Enterprise Portal 6.0 KM Platform Delta Features Please see also the KM Platform feature list in http://service.sap.com/ep Product Management Operations Status: January 20th, 2004 Note: This presentation

More information

Chapter 9 Software Evolution

Chapter 9 Software Evolution Chapter 9 Software Evolution Summary 1 Topics covered Evolution processes Change processes for software systems Program evolution dynamics Understanding software evolution Software maintenance Making changes

More information

Software Development Process Models and their Impacts on Requirements Engineering Organizational Requirements Engineering

Software Development Process Models and their Impacts on Requirements Engineering Organizational Requirements Engineering Software Development Process Models and their Impacts on Requirements Engineering Organizational Requirements Engineering Prof. Dr. Armin B. Cremers Sascha Alda Overview Phases during Software Development

More information

Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools

Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools Jack Greenfield Keith Short WILEY Wiley Publishing, Inc. Preface Acknowledgments Foreword Parti Introduction to

More information

How To Teach A Software Engineer

How To Teach A Software Engineer Corporate Technology Social Skills für Experten Erfahrungsbericht vom Siemens Curriculum für Senior Architekten / Architekten Matthias Singer Siemens AG Learning Campus Copyright 2010. All rights reserved.

More information

Lehrstuhl für Rechnertechnik und Rechnerorganisation (LRR-TUM) Annual Report 1998/1999

Lehrstuhl für Rechnertechnik und Rechnerorganisation (LRR-TUM) Annual Report 1998/1999 Research Report Series Lehrstuhl für Rechnertechnik und Rechnerorganisation (LRR-TUM) Technische Universität München http://wwwbode.informatik.tu-muenchen.de/ Editor: Prof. Dr. A. Bode Vol. 18 Lehrstuhl

More information

Requirements engineering

Requirements engineering Learning Unit 2 Requirements engineering Contents Introduction............................................... 21 2.1 Important concepts........................................ 21 2.1.1 Stakeholders and

More information

Software Engineering. So(ware Evolu1on

Software Engineering. So(ware Evolu1on Software Engineering So(ware Evolu1on 1 Software change Software change is inevitable New requirements emerge when the software is used; The business environment changes; Errors must be repaired; New computers

More information

The Role of CM in Agile Development of Safety-Critical Software

The Role of CM in Agile Development of Safety-Critical Software The Role of CM in Agile Development of Safety-Critical Software Tor Stålhane1, Thor Myklebust 2 1 Norwegian University of Science and Technology, N-7491, Trondheim, Norway 2 SINTEF ICT, Strindveien 2,

More information

Standard Glossary of Terms Used in Software Testing. Version 3.01

Standard Glossary of Terms Used in Software Testing. Version 3.01 Standard Glossary of Terms Used in Software Testing Version 3.01 Terms Used in the Expert Level Test Automation - Engineer Syllabus International Software Testing Qualifications Board Copyright International

More information

Karunya University Dept. of Information Technology

Karunya University Dept. of Information Technology PART A Questions 1. Mention any two software process models. 2. Define risk management. 3. What is a module? 4. What do you mean by requirement process? 5. Define integration testing. 6. State the main

More information

Agile Techniques for Object Databases

Agile Techniques for Object Databases db4o The Open Source Object Database Java and.net Agile Techniques for Object Databases By Scott Ambler 1 Modern software processes such as Rational Unified Process (RUP), Extreme Programming (XP), and

More information

Modellistica Medica. Maria Grazia Pia, INFN Genova. Scuola di Specializzazione in Fisica Sanitaria Genova Anno Accademico 2002-2003

Modellistica Medica. Maria Grazia Pia, INFN Genova. Scuola di Specializzazione in Fisica Sanitaria Genova Anno Accademico 2002-2003 Modellistica Medica Maria Grazia Pia INFN Genova Scuola di Specializzazione in Fisica Sanitaria Genova Anno Accademico 2002-2003 Lezione 18-19 The Unified Process Static dimension Glossary UP (Unified

More information

Software Testing. Definition: Testing is a process of executing a program with data, with the sole intention of finding errors in the program.

Software Testing. Definition: Testing is a process of executing a program with data, with the sole intention of finding errors in the program. Software Testing Definition: Testing is a process of executing a program with data, with the sole intention of finding errors in the program. Testing can only reveal the presence of errors and not the

More information

E-Commerce Design and Implementation Tutorial

E-Commerce Design and Implementation Tutorial A Mediated Access Control Infrastructure for Dynamic Service Selection Dissertation zur Erlangung des Grades eines Doktors der Wirtschaftswissenschaften (Dr. rer. pol.) eingereicht an der Fakultat fur

More information

Getting started with API testing

Getting started with API testing Technical white paper Getting started with API testing Test all layers of your composite applications, not just the GUI Table of contents Executive summary... 3 Introduction... 3 Who should read this document?...

More information

Bernd H. Oppermann (Ed.) International Legal Studies und Verfassungsgeschichte by European Scholars of the ELPIS Network

Bernd H. Oppermann (Ed.) International Legal Studies und Verfassungsgeschichte by European Scholars of the ELPIS Network Bernd H. Oppermann (Ed.) International Legal Studies und Verfassungsgeschichte by European Scholars of the ELPIS Network Dr. Kristin Rohleder, geb. 1981, studierte Rechtswissenschaft in Trier. 2005 1.

More information

HP SAP. Where Development, Test and Operations meet. Application Lifecycle Management

HP SAP. Where Development, Test and Operations meet. Application Lifecycle Management HP SAP Where Development, Test and Operations meet Application Lifecycle Management 1 Introduction 1.1 ALM CONCEPTS Application Lifecycle Management (ALM) empowers IT to manage the core application life-cycle,

More information

ida.com excellence in dependable automation

ida.com excellence in dependable automation IEC 61508 Maintenance Status IEC 61508 Maintenance Projekt ist aus dem zulässigen Zeitrahmen gelaufen Viele Baustellen auch durch neue Mitglieder (Frankreich, USA, IEC 61511 Team) Bestehende Anforderungen,

More information

Screen Design : Navigation, Windows, Controls, Text,

Screen Design : Navigation, Windows, Controls, Text, Overview Introduction Fundamentals of GUIs - methods - Some examples Screen : Navigation, Windows, Controls, Text, Evaluating GUI Performance 1 Fundamentals of GUI What kind of application? - Simple or

More information

Advanced Testing Techniques

Advanced Testing Techniques 9 March, 2010 ISSN 1866-5705 www.testingexperience.com free digital version print version 8,00 printed in Germany Advanced Testing Techniques Conferences Special istockphoto.com/nwphotoguy istockphoto.com/esemelwe

More information

Metrics in Software Test Planning and Test Design Processes

Metrics in Software Test Planning and Test Design Processes Master Thesis Software Engineering Thesis no: MSE-2007:02 January 2007 Metrics in Software Test Planning and Test Design Processes Wasif Afzal School of Engineering Blekinge Institute of Technology Box

More information

CS 451 Software Engineering Winter 2009

CS 451 Software Engineering Winter 2009 CS 451 Software Engineering Winter 2009 Yuanfang Cai Room 104, University Crossings 215.895.0298 [email protected] 1 Testing Process Testing Testing only reveals the presence of defects Does not identify

More information

Targeted Advertising and Consumer Privacy Concerns Experimental Studies in an Internet Context

Targeted Advertising and Consumer Privacy Concerns Experimental Studies in an Internet Context TECHNISCHE UNIVERSITAT MUNCHEN Lehrstuhl fur Betriebswirtschaftslehre - Dienstleistungsund Technologiemarketing Targeted Advertising and Consumer Privacy Concerns Experimental Studies in an Internet Context

More information

Quantification and Traceability of Requirements

Quantification and Traceability of Requirements Quantification and Traceability of Requirements Gyrd Norvoll Master of Science in Computer Science Submission date: May 2007 Supervisor: Tor Stålhane, IDI Norwegian University of Science and Technology

More information

LINGUISTIC SUPPORT IN "THESIS WRITER": CORPUS-BASED ACADEMIC PHRASEOLOGY IN ENGLISH AND GERMAN

LINGUISTIC SUPPORT IN THESIS WRITER: CORPUS-BASED ACADEMIC PHRASEOLOGY IN ENGLISH AND GERMAN ELN INAUGURAL CONFERENCE, PRAGUE, 7-8 NOVEMBER 2015 EUROPEAN LITERACY NETWORK: RESEARCH AND APPLICATIONS Panel session Recent trends in Bachelor s dissertation/thesis research: foci, methods, approaches

More information

The Software Process. The Unified Process (Cont.) The Unified Process (Cont.)

The Software Process. The Unified Process (Cont.) The Unified Process (Cont.) The Software Process Xiaojun Qi 1 The Unified Process Until recently, three of the most successful object-oriented methodologies were Booch smethod Jacobson s Objectory Rumbaugh s OMT (Object Modeling

More information

SPICE auf der Überholspur. Vergleich von ISO (TR) 15504 und Automotive SPICE

SPICE auf der Überholspur. Vergleich von ISO (TR) 15504 und Automotive SPICE SPICE auf der Überholspur Vergleich von ISO (TR) 15504 und Automotive SPICE Historie Software Process Improvement and Capability determination 1994 1995 ISO 15504 Draft SPICE wird als Projekt der ISO zur

More information

Chapter 11, Testing, Part 2: Integration and System Testing

Chapter 11, Testing, Part 2: Integration and System Testing Object-Oriented Software Engineering Using UML, Patterns, and Java Chapter 11, Testing, Part 2: Integration and System Testing Overview Integration testing Big bang Bottom up Top down Sandwich System testing

More information

Boom and Bust Cycles in Scientific Literature A Toolbased Big-Data Analysis

Boom and Bust Cycles in Scientific Literature A Toolbased Big-Data Analysis Boom and Bust Cycles in Scientific Literature A Toolbased Big-Data Analysis Bachelorarbeit zur Erlangung des akademischen Grades Bachelor of Science (B.Sc.) im Studiengang Wirtschaftsingenieur der Fakultät

More information

Towards Collaborative Requirements Engineering Tool for ERP product customization

Towards Collaborative Requirements Engineering Tool for ERP product customization Towards Collaborative Requirements Engineering Tool for ERP product customization Boban Celebic, Ruth Breu, Michael Felderer, Florian Häser Institute of Computer Science, University of Innsbruck 6020 Innsbruck,

More information

Basic Trends of Modern Software Development

Basic Trends of Modern Software Development DITF LDI Lietišķo datorsistēmu programmatūras profesora grupa e-business Solutions Basic Trends of Modern Software Development 2 3 Software Engineering FAQ What is software engineering? An engineering

More information

1 Business Modeling. 1.1 Event-driven Process Chain (EPC) Seite 2

1 Business Modeling. 1.1 Event-driven Process Chain (EPC) Seite 2 Business Process Modeling with EPC and UML Transformation or Integration? Dr. Markus Nüttgens, Dipl.-Inform. Thomas Feld, Dipl.-Kfm. Volker Zimmermann Institut für Wirtschaftsinformatik (IWi), Universität

More information

Software Engineering. Software Processes. Based on Software Engineering, 7 th Edition by Ian Sommerville

Software Engineering. Software Processes. Based on Software Engineering, 7 th Edition by Ian Sommerville Software Engineering Software Processes Based on Software Engineering, 7 th Edition by Ian Sommerville Objectives To introduce software process models To describe three generic process models and when

More information

Multi-Channel Distribution Strategies in the Financial Services Industry

Multi-Channel Distribution Strategies in the Financial Services Industry Multi-Channel Distribution Strategies in the Financial Services Industry DISSERTATION der Universität St. Gallen, Hochschule für Wirtschafts-, Rechts- und Sozialwissenschaften (HSG) zur Erlangung der Würde

More information

Traceability Patterns: An Approach to Requirement-Component Traceability in Agile Software Development

Traceability Patterns: An Approach to Requirement-Component Traceability in Agile Software Development Traceability Patterns: An Approach to Requirement-Component Traceability in Agile Software Development ARBI GHAZARIAN University of Toronto Department of Computer Science 10 King s College Road, Toronto,

More information

Open Source Customer Relationship Management Solutions Potential for an Impact of Open Source CRM Solutions on Small- and Medium Sized Enterprises

Open Source Customer Relationship Management Solutions Potential for an Impact of Open Source CRM Solutions on Small- and Medium Sized Enterprises Henrik Vogt Open Source Customer Relationship Management Solutions Potential for an Impact of Open Source CRM Solutions on Small- and Medium Sized Enterprises Diplom.de Henrik Vogt Open Source Customer

More information

Effective Methods for Software and Systems Integration

Effective Methods for Software and Systems Integration Effective Methods for Software and Systems Integration Boyd L. Summers CRC Press Taylor & Francis Group 6000 Broken Sound Parkway NW, Suite 300 Boca Raton, FL 33487-2742 CRC Press is an imprint of Taylor

More information

Best Practices for Improving the Quality and Speed of Your Agile Testing

Best Practices for Improving the Quality and Speed of Your Agile Testing A Conformiq White Paper Best Practices for Improving the Quality and Speed of Your Agile Testing Abstract With today s continually evolving digital business landscape, enterprises are increasingly turning

More information