Quality Assurance Methods for Model-based Development: A Survey and Assessment

Copyright 2007 SAE International

Ines Fey, DaimlerChrysler AG, Berlin, Germany
Ingo Stürmer, Model Engineering Solutions, Berlin, Germany

ABSTRACT

This paper examines state-of-the-art quality assurance (QA) techniques for model-based software development in the automotive domain. Both the aims of the relevant QA techniques and the effort required to apply them are discussed. Since QA techniques can only be used effectively if they are seamlessly integrated within the overall development process and with each other, an appropriate interconnection and order of application is important. Based on our experience from automotive software development projects, we suggest a QA strategy that includes the selection of QA techniques and the sequence of their application.

INTRODUCTION

Nowadays, model-based development is common practice within a wide range of automotive embedded software development projects. Following the model-based approach means focusing on graphical models as the central development artifact to specify, design, and implement software. These models, which are used for specifying and checking the functional behavior of the control function, are usually realized using block-oriented modeling languages such as Matlab/Simulink/Stateflow [TMW06]. Apart from aspects of functional behavior, the model is also used for designing and structuring the software to be developed. The controller software can be either (1) manually developed by programmers on the basis of the controller model or (2) generated automatically by a code generator. This implies that the quality of the software strongly depends on the quality of the model. Therefore, both the model and the derived software must be constructed and analyzed with regard to correctness, robustness, reliability, etc. Analytical quality assurance (QA) methods such as reviews, testing, or static analyses are often used for this purpose. The appropriate order and combination of such QA methods, for example testing and reviews, is important in order to reduce their scope and required effort. Apart from efficiency aspects, it is desirable to increase the effectiveness of all QA methods. This paper examines state-of-the-art quality assurance methods for model-based development, and discusses the aims, pros and cons, as well as the required effort of the individual QA methods. Furthermore, an applicable sequencing of the QA methods is proposed to increase their effectiveness and the overall quality assurance efficiency.

MODEL-BASED DEVELOPMENT

In model-based development (MBD), the seamless use of executable models is characteristic of function and control system design and the subsequent implementation phase. This means that models are used throughout the entire development of the control system, from the preliminary to the detailed design. In the first design stage, a so-called physical model is created, which is derived from the functional specification (see Fig. 1). The physical model describes the behavior of the control function to be developed, containing transformation algorithms related to continuous input signals as well as incoming events or states. These algorithms are usually described using floating-point arithmetic. Since the physical model is focused on the design of the control function and on checking the functional behavior with regard to the stated requirements, it cannot directly serve as a basis for production code creation.
Implementation details, which are a prerequisite for automatic coding, are not considered here. Therefore, the physical model needs to be manually revised by implementation experts with respect to the needs of the production code (e.g. function parts are distributed to different tasks). For example, in order to enhance the model from a realization point of view, the floating-point arithmetic contained in the physical
model is adjusted to the fixed-point arithmetic used by the target processor. If fixed-point arithmetic is used, the model must be augmented with the necessary scaling information in order to keep the imprecision in the representation of fixed-point numbers as low as possible. Apart from the change in the type of arithmetic, it might be necessary to substitute certain model elements that are not part of the language subset supported by a particular code generator. Furthermore, it is often necessary to restructure the physical model with respect to a planned software design. The result of this evolutionary rework on the physical model is a so-called implementation model. The implementation model can be used as a basis for (A) manual coding by a software developer, or (B) automatic code generation by means of a code generator (Fig. 1). Although both approaches are common practice, the application of a code generator for automatic controller code implementation is increasingly coming to the fore. For instance, [Bur04] compares the V-model for code written manually on the basis of models with the V-model for automatically generated code, and points out the advantages of model-based code generation.

Fig. 1: Model-based development: using manual code creation (A) versus model-based implementation (B)

SOURCES OF ERROR IN MODEL-BASED DEVELOPMENT PHASES

The sources of error that can be identified within the different model-based development phases are [SCW05]: (1) design errors, caused by an inappropriate design of the (physical) model with respect to the functional requirements or by misunderstandings regarding the semantics of the modeling language; (2) arithmetic errors, due to an imprecise representation of the control function's arithmetic within the implementation model or resulting from an improper floating-point to fixed-point conversion (e.g. quantization errors); (3) tool errors, introduced by a tool within the tool chain that contains implementation bugs or that has not been set up correctly (e.g. model simulator, code generator configuration); (4) hardware errors of the development or target environment itself; (5) run-time errors on the target hardware, due to e.g. resource demand mismatches, scheduling errors, etc.; and (6) interface errors between the generated control algorithm on the one side and legacy code (e.g. custom code) or wrapper software (drivers, operating system, etc.) on the other side. A number of techniques capable of revealing such issues are available as state-of-the-art quality assurance methods. Some specifics regarding the model-based development approach are explained in more detail in the following.

QUALITY ASSURANCE IN MODEL-BASED DEVELOPMENT

As already stated, the quality of the implementation model substantially determines the quality (correctness, efficiency, etc.) of the derived code, no matter whether it is automatically generated or manually created. Therefore, a combination of different testing, review, and static analysis techniques should be applied during the development process.

MODEL-BASED TESTING

One of the great advantages of model-based development is the opportunity to start simulation and testing activities more or less right after the project starts, since executable development artifacts are available early in the development process. It is possible to simulate the model and the generated code at different stages of the development.
This allows the development engineer to focus on specific classes of errors at each development stage. For this purpose, both the model and the code need to be executable. Then, model and code can be stimulated using the same input values (cf. left side of Figure 2); a minimal sketch of such a comparison harness is given below.
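As a minimal illustration, the following host-side harness sketch replays recorded stimuli against the code under test and compares its outputs with reference values recorded, for example, during model simulation. The controller interface and the first-order filter standing in for the generated code are invented for this example; real autocode interfaces depend on the code generator and its configuration.

    /* Back-to-back harness sketch (hypothetical interface and data). */
    #include <math.h>
    #include <stdio.h>

    /* Stand-in for the generated controller code: a first-order
       low-pass filter, purely illustrative. */
    static float state;
    static void controller_init(void) { state = 0.0f; }
    static float controller_step(float input)
    {
        state += 0.5f * (input - state);
        return state;
    }

    /* Stimuli and reference outputs, e.g. exported from the MiL
       simulation of the model. */
    #define N_SAMPLES 3
    static const float stimuli[N_SAMPLES]  = { 0.0f, 0.5f, 1.0f };
    static const float expected[N_SAMPLES] = { 0.0f, 0.25f, 0.625f };
    static const float tolerance = 1e-3f;  /* absorbs quantization effects */

    int main(void)
    {
        int failures = 0;
        controller_init();
        for (int i = 0; i < N_SAMPLES; i++) {
            float out = controller_step(stimuli[i]);
            if (fabsf(out - expected[i]) > tolerance) {
                printf("sample %d: got %f, expected %f\n",
                       i, out, expected[i]);
                failures++;
            }
        }
        return failures;
    }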
Different simulation levels (cf. right side of Figure 2) are available to support the safeguarding of the model and the code:

1. Model-in-the-Loop (MiL): The aim of MiL simulation is to check the validity of a model with respect to the functional requirements. This simulation is executed within the development environment on a host PC. After the simulation results have been evaluated, they are often reused as a reference (expected values) for subsequent verification steps on the model or software level. In addition to functional testing, the possible simulation pathways within the model can be measured using model coverage metrics such as decision coverage or MC/DC coverage.

2. Software-in-the-Loop (SiL): The aim of SiL is to analyze fixed-point scaling effects of the code and to detect possible arithmetic problems (e.g. over-/underflow). The software to be tested is usually derived from a model that was used earlier during MiL simulation. SiL tests are executed on a host PC, using the same stimuli that were used for MiL simulation, if available. The execution results should be comparable to the results obtained during MiL. Results can differ, however, due to different handling of numerical instabilities or exception handling by the MATLAB simulation environment and by the executed code. Beyond the functional view, code coverage is measured during SiL tests.

3. Processor-in-the-Loop (PiL): The aim of PiL is to verify the functional behavior of the software in the target processor environment and to measure code efficiency (profiling, memory usage, etc.). PiL tests are executed on experimental hardware, which contains the same processor as the target system but provides additional resources for logging, storing, and exchanging test data and test results. As for SiL tests, the same test stimuli used for MiL or SiL simulation are usually reused, since the tested code is the same as that used during SiL tests, but (cross-)compiled using the project's target compiler. Obviously, it is advisable to measure code coverage on this level as well.

Fig. 2: Model-based Testing

Common to all simulation levels is the fundamental question of what constitutes appropriate test stimuli for model and code testing. When running the tests, the outputs of the different simulation levels can be compared against certain acceptance criteria. Depending on the application being developed and tested, this comparison might raise technical problems that have to be considered during test preparation. Due to quantization errors, the outputs of the MiL simulation, for instance, and the outputs of the SiL or PiL testing are usually not completely identical. As a consequence, sophisticated signal comparison methods have to be applied in these cases. The use of structural testing criteria on the model level (model coverage) as well as on the code level (code coverage) for rating test quality is common practice. Model coverage supplements the known benefits of code coverage, namely controlling the test depth and detecting those parts of a model or code which are not covered by a given test suite. Furthermore, test stimuli generation for model and code coverage can be automated by the use of test vector generators such as Reactis [REA] for model coverage or the Evolutionary Test Tool [WSB01] for code coverage.

MODEL REVIEW

Executable models created early in the model's evolution can be regarded as executable specifications.
They reflect the functional requirements of the control function to be developed in a constructive manner. An agreement to follow certain modeling guidelines is important to increase
the comprehensibility (readability) of the model, to facilitate maintenance, to ease testing, reuse, and extensibility, and to simplify the exchange of models between OEMs and suppliers. Therefore, guidelines and patterns for model design, such as those published by the MathWorks Automotive Advisory Board [TMW01] or the IMMOS project [IMMOS], have to be available. Following the modeling conventions stated in those guidelines, which consider production code generation aspects in particular, supports the translation of the model into safe and efficient code. Review procedures primarily specialized in the verification of requirements specifications, e.g. Fagan inspections [GG93], can be adapted to perform model reviews. For models which already contain implementation details, additional issues have to be taken into account. The aims of model reviews are:

- to check whether the textually specified functional requirements are realized in the model,
- to ensure that relevant modeling guidelines are fulfilled (e.g. naming conventions, structuring, modularization),
- to check that a number of selected quality criteria such as portability, maintainability, and testability are met, and
- to check that the implementation model meets the requirements for the generation of safe code (e.g. robustness) and efficient code (e.g. resource optimizations).

To handle the complexity of this task, model reviews are often guided by an in-house set of specific modeling and review guidelines. These are commonly summarized as a review checklist. During the model review, a series of findings with suggestions and comments on individual model parts are gathered and recorded with a reference to the affected model elements. The references to the model elements enable the developer to track which parts of the model may have to be revised.

CODE REVIEW

In order to assure that the quality of the code is acceptable, it is common to check the code by using static testing techniques such as reviews. Reviewing manually written code is a widely accepted practice for finding programming errors. In order to do this, the code needs to be well-structured and documented. The basis for code reviews is usually a set of coding guidelines such as the MISRA-C guidelines [MIRA]. The effectiveness of a code review is generally very high, but it requires a large effort.

AUTOCODE REVIEW

In contrast to manually written code, automatically generated code will have a low density of faults, provided the code generator works properly. Errors will tend to be systematic, because the tool should perform the transformation identically each time for a given model and the very same code generator configuration. Autocode peer review can be quite effective (even though it is inefficient), since inappropriate modeling and improper variable scaling, for instance, are easier to detect in the code than in the model (examples are provided in [SC+06]).

STATIC CODE ANALYSIS

Static code analysis can help in the process of reviewing the code, in particular if performed automatically by appropriate tools. Advanced static analysis tools, which are available for languages such as C, apply powerful algorithms to evaluate whether the code follows expected programming rules. They check the syntactic correctness and, to varying degrees, the semantic correctness of the source code. They add a greater degree of rigor to the kind of checks performed by a compiler. These tools will not check whether the code has the functionality the programmer intended, but will find constructs which might be erroneous or non-portable, as well as constructs that do not behave as expected. The documentation of rule violations produced by a static analysis is usually easier to assess than the actual generated code itself.
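By way of illustration, the following invented fragment contains constructs of the kind such tools typically flag; the exact rules and diagnostics depend on the tool and the configured rule set (e.g. MISRA-C):

    #include <stdint.h>

    int16_t filter(int16_t x)
    {
        int32_t acc = 0;
        if (acc = x) {       /* assignment used as condition: '==' intended? */
            acc += x;
        }
        return acc;          /* implicit int32_t -> int16_t narrowing */
    }

    static void helper(void) /* defined but never called: dead code */
    {
    }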
APPROPRIATE COMBINATION OF QUALITY ASSURANCE METHODS

Due to the effort of the individual QA methods, it is desirable to combine QA methods in order to limit their scope and thus the extent of, e.g., a code review. The scope can be reduced if specific aspects can be checked more easily by other methods, preferably at a higher level of abstraction. For example, checking appropriate variable scaling at the code level is cumbersome and inefficient [SC+06]. However, checking variable scaling by means of automated model checks, combined with a back-to-back test between the fixed-point code and the implementation model, can be performed with limited test resources. One reason for this is that the test cases for the back-to-back tests are already available, since they have been determined during the functional testing phase at the model level.
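To make the scaling issue concrete, the following sketch shows a signed 16-bit fixed-point representation with 12 fraction bits (Q12). The format and values are chosen purely for illustration; in practice, the scaling of each signal is derived from its physical range:

    #include <stdint.h>
    #include <stdio.h>

    #define Q      12            /* fraction bits */
    #define SCALE  (1 << Q)      /* scaling factor 2^12 = 4096 */

    /* Quantize a physical value to Q12 with rounding and saturation.
       A Q12 value in int16_t covers roughly [-8.0, 8.0), so the chosen
       scaling must match the physical range of the signal; a mismatch
       causes exactly the saturation/overflow errors discussed above. */
    static int16_t to_q12(double x)
    {
        double scaled = x * SCALE;
        if (scaled > INT16_MAX) scaled = INT16_MAX;   /* saturate */
        if (scaled < INT16_MIN) scaled = INT16_MIN;
        return (int16_t)(scaled >= 0.0 ? scaled + 0.5 : scaled - 0.5);
    }

    static double from_q12(int16_t q) { return (double)q / SCALE; }

    int main(void)
    {
        double x = 1.2345;
        int16_t q = to_q12(x);
        printf("%.6f -> q12 %d -> %.6f (quantization error %.2e)\n",
               x, q, from_q12(q), x - from_q12(q));
        return 0;
    }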
However, in contrast to scaling errors, code inefficiencies caused by inappropriate modeling or unsophisticated code generation are easier to detect on the code level by reviews than on the model level (see [SC+06] for examples). The following order of application and combination of QA methods is based on our experiences during automotive software development projects (see [SC+06]). Fig. 3 is used as a reference for the different QA tasks. It is worth noting that the proposed order and combination of the QA methods could reduce the effort of, e.g., the model review by 40% compared to traditional approaches.

Fig. 3: QA approach for model-based development

(A) COMBINED REQUIREMENTS REVIEW, MODEL REVIEW, AND AUTOMATED MODEL CHECK

In general, the effort of formal inspection techniques such as a requirements review is relatively high and cost-intensive (Table 1, 1st row). The aim of a requirements review is to check that the specification is complete, correct, and consistent. The review itself, however, has to be carried out manually, often lacking appropriate tool support. The effort of a model review is also relatively high (Table 1, 2nd row). We found a combination of requirements and model review, in conjunction with powerful tool support for the reviews themselves as well as for automated checks on the model, useful in reducing the reviewing effort. Since the requirements specification is usually already represented within DOORS [DOORS], it seems reasonable to gather all comments which come up during the different reviews in DOORS as well. This makes the discussion and processing of the review comments as efficient as possible. Furthermore, our solution included using an instance of the ToolNet [TN06] framework. This enables the creation and management of references (links) from the requirements to certain model elements and vice versa. In addition, it also helps to create links directly from the reviewed model elements to the reviewers' remarks recorded in DOORS. ToolNet is a service-based integration framework which can manage comments, the links to model objects, as well as the degree of realization, thereby facilitating the integrated usage of Simulink/Stateflow and DOORS in our application. Moreover, the scope and effort of the model review can be reduced by carrying out automated model checks (Table 1, 3rd row), which focus on those modeling guidelines that are automatically checkable (e.g. naming conventions). Appropriate tool support is available by means of the Matlab Model Advisor [MMA] or the Mint [MINT] tool. The automated check should be carried out right before the model review, which reduces its scope, since the model reviewer can then concentrate on functional aspects and only has to focus on those modeling guidelines that are difficult to check by tools (e.g. ensuring that modeling patterns are used which can be translated into safe and efficient code). Since the model should directly reflect the functional behavior stated in the requirements specification, it is meaningful to carry out a combined model and requirements review. This ensures, for example, that the stated functional requirements can be identified within the model and that specific model parts can be traced back to the requirements. Referencing model elements from textual requirements makes it possible to track the parts of the model that may have to be reworked after the requirements change or if tests linked to a requirement fail. For efficiency reasons, a precise assignment of the individual review comments to the model elements concerned is needed, at least at the subsystem level or state machine level.

(B) COMBINED FUNCTIONAL AND STRUCTURAL TESTING ON MODEL LEVEL

Before the implementation model is translated into code (be it automatically or manually), the model should be rigorously tested by means of functional and structural testing (Table 1, 4th-7th rows). Functional testing relies on the application specification. The test cases are systematically derived on the basis of the requirements specification.
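As a simple illustration of a requirements-derived functional test, consider the following sketch. The requirement text, the threshold, and the stand-in implementation are invented for this example; in practice, the test would stimulate the model or the generated code:

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical requirement REQ-42: "If the measured temperature
       exceeds 90 degC, the fan request shall be set." */
    static int fan_request(double temperature_degC)
    {
        return temperature_degC > 90.0;   /* stand-in for the model/code */
    }

    int main(void)
    {
        /* Boundary-value tests derived directly from the requirement. */
        assert(fan_request(89.9) == 0);   /* just below the threshold */
        assert(fan_request(90.1) == 1);   /* just above the threshold */
        printf("tests for REQ-42 passed\n");
        return 0;
    }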
Structural testing takes the internal structure of the model into account, with the aim of exercising as many model paths (e.g. conditions, transitions, states) as possible. The test cases are derived based on the selected model coverage criteria (e.g. decision coverage). A meaningful test strategy should consist of a combination of both approaches. This ensures that the drawbacks of one approach are compensated by the strengths of the other. For example, functional testing cannot ensure that each program element has been executed, but structural testing can (in principle).
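To illustrate why structural criteria supplement functional tests, consider the following compound condition (invented for this example). Two requirements-based tests may already achieve decision coverage, yet never demonstrate the independent effect of each individual condition, which MC/DC demands:

    #include <stdbool.h>
    #include <stdio.h>

    static bool release_brake(bool pedal, bool speed_ok, bool override)
    {
        return (pedal && speed_ok) || override;
    }

    int main(void)
    {
        /* T1 and T2 make the decision true and false once each
           (decision coverage), but never show the independent effect
           of 'speed_ok' on the outcome. */
        printf("T1: %d\n", release_brake(true, true, false));   /* true  */
        printf("T2: %d\n", release_brake(false, false, false)); /* false */
        /* MC/DC additionally requires e.g. T3, which differs from T1
           only in 'speed_ok' and flips the result. */
        printf("T3: %d\n", release_brake(true, false, false));  /* false */
        return 0;
    }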
In general, the effort of performing functional as well as structural testing is low if well-engineered tool support is available. Tools like MTest, which integrate different test design approaches, automate as many test activities as possible (such as test harness creation or test execution), and facilitate test repetition as well as regression testing, are to be named here. However, the major portion of the test resources is spent on designing error-sensitive test cases for functional and structural testing. Systematic approaches for designing functional tests, supported by tools like CTE or TPT, ease this part of the work, whereas structure-oriented test cases can be derived automatically from the model by using tools like Reactis or ET.

Tab. 1: Survey of MBD quality assurance methods

No. | Quality Assurance Method        | Automation | Effort | Effectiveness | Tool Support
 1  | Requirements Review             | no         | high   | middle        | ToolNet
 2  | Model Review                    | no         | high   | middle        | ToolNet
 3  | Automated Model Checks          | partly     | low    | high          | Model Advisor, Mint
 4  | Functional Testing              | yes        | low    | high          | MTest
 5  | Functional Test Case Design     | partly     | high   | high          | CTM, CTE, TPT
 6  | Structural Testing              | yes        | low    | high          | SL V&V
 7  | Structural Test Case Design     | partly     | high   | high          | Reactis, ET
 8  | Static Code Analysis (syntax)   | yes        | low    | high          | lint, QA-C
 9  | Static Code Analysis (dataflow) | yes        | high   | high          | Polyspace
10  | Autocode Review                 | no         | high   | low           | -

(C) COMBINED STATIC CODE ANALYSIS AND AUTOCODE REVIEW

There are different sources of programming errors which demand a systematic check of the generated code. For example, inappropriate modeling, a faulty configuration of the code generator, or an erroneous code generator lead to specific errors that may be difficult to detect by other quality assurance measures. A combination of static code analysis and a review of the autocode is suitable to cover these questions. An autocode review (ACR) aims at identifying inappropriate modeling, code inefficiencies, custom code integration errors, and code generator bugs. The effort of the ACR is very high (Table 1, last row), with a relatively low effectiveness (see [SC+06]). It is therefore reasonable to reduce its scope by performing a static code analysis in advance (Table 1, 8th-9th rows). The latter should be applied in order to (1) check that the code conforms to generally accepted language standards and coding guidelines (e.g. ANSI C, MISRA) and (2) find data flow errors (e.g. dead code). The effort of applying (1) is very low, with a high effectiveness. The effort of performing the data flow analysis (2) is high, but it is also highly effective. Appropriate tool support is available for both approaches, such as QA-C [QAC], lint [LINT], and the Polyspace verifier [POL]. A follow-up regression test of the implementation model ensures that no additional errors are introduced when the implementation model is revised in response to the findings in the autocode.

(D) BACK-TO-BACK TEST OF IMPLEMENTATION MODEL AND AUTOCODE

Finally, a back-to-back test needs to be performed. In doing so, the execution behavior of the fixed-point (FXP) autocode (Software-in-the-Loop or Processor-in-the-Loop simulation) is compared to the simulation behavior of the model (Model-in-the-Loop simulation). This ensures that the code really reflects the functional behavior of the model. Results can differ, however, due to different handling of numerical instabilities or exception handling by the MATLAB simulation environment and by the executed code [SCW05].
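For this reason, back-to-back comparisons are usually made against tolerances rather than checking for bitwise equality. The following is a minimal sketch; the tolerance values and signal data are invented, and production test environments apply more elaborate acceptance criteria (e.g. tolerating small time shifts):

    #include <math.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Pointwise comparison of two signals with an absolute and a
       relative tolerance component. */
    static bool signals_match(const double *ref, const double *actual,
                              int n, double abs_tol, double rel_tol)
    {
        for (int i = 0; i < n; i++) {
            double diff  = fabs(ref[i] - actual[i]);
            double limit = abs_tol + rel_tol * fabs(ref[i]);
            if (diff > limit) {
                printf("mismatch at sample %d: |%g - %g| > %g\n",
                       i, ref[i], actual[i], limit);
                return false;
            }
        }
        return true;
    }

    int main(void)
    {
        const double mil[] = { 0.0, 0.250000, 0.625000 };  /* MiL reference */
        const double sil[] = { 0.0, 0.249878, 0.625244 };  /* FXP autocode  */
        bool ok = signals_match(mil, sil, 3, 1e-3, 1e-3);
        printf("back-to-back test %s\n", ok ? "passed" : "failed");
        return 0;
    }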
Furthermore, structural discrepancies between the model and the autocode must be taken into account. This may require additional test cases, which can be designed on the basis of the measured structural coverage of the autocode.

SUMMARY

Model-based development significantly improves the quality of the automotive embedded software development process. This process is meaningfully supplemented by a variety of tools and QA methods which support the developer and tester. The QA methods discussed in this paper, their automation potential, the effort required to carry them out, as well as their expected effectiveness and possible tool support are summarized in Table 1. Nevertheless, due to the relatively high effort required to safeguard the model-based development process, it is still desirable to reduce the effort and increase the effectiveness of the applied QA methods. In this paper, a suitable combination of QA methods is suggested. We restricted ourselves to QA methods applied for the development of the embedded controller software. The correctness of the tools, e.g. of the code generator, is outside the scope of this paper. However, extensive surveys of QA methods that can be applied to a code generator are provided in [Stü06].
The proposed order and the focused scope of each method promise a significant effort reduction. Efficiency as well as effectiveness improvements can also be expected, as already shown in recent projects (e.g. [SC+06]). Complementing the quality assurance measures, a collection of do's and don'ts of modeling, in the form of guidelines and patterns, prevents a number of known issues from arising. In order to ensure the efficient management and publication of such guideline and pattern collections, specific tool support is necessary, such as that presented in [CDF+05]. The latter collection describes typical problems and suggests base patterns that should be used and reused during the development of functions in order to avoid troubleshooting during or after code generation. Further research has to show whether an ideal combination can also allow specific QA methods to be omitted. For example, it is likely that such an ideal combination can make an autocode review (high effort, low effectiveness) superfluous.

ACKNOWLEDGMENTS

The work described here was partially performed as part of the IMMOS project, funded by the German Federal Ministry of Education and Research (project ref. 01ISC31D).

REFERENCES

[Bur04] Burnard, A.: Verifying and Validating Automatically Generated Code. Int. Automotive Conference (IAC).
[CDF+05] Conrad, M., Dörr, H., Fey, I., Pohlheim, H., Stürmer, I.: Guidelines und Reviews in der Modell-basierten Entwicklung von Steuergeräte-Software (in German). In: Simulation und Test in der Funktions- und Softwareentwicklung für die Automobilelektronik, Expert-Verlag, Berlin.
[DOORS] DOORS (product information). Telelogic AG.
[GG93] Gilb, T., Graham, D.: Software Inspections. Addison-Wesley.
[IMMOS] IMMOS project (project information).
[LINT] Johnson, S.: Lint, a C Program Checker. Computer Science Technical Report 65, Bell Laboratories.
[MINT] Mint (product information). Ricardo plc.
[MIRA] MISRA: Guidelines for the Use of the C Language in Critical Systems. MIRA Ltd., 2004, http://www.misra.org.uk/.
[MMA] Matlab Model Advisor (product information). The MathWorks, Inc.
[POL] Polyspace Verifier (product information). Polyspace Technologies.
[QAC] QA-C (product information). QA Systems.
[REA] Reactis (product information). Reactive Systems.
[SCW05] Stürmer, I., Conrad, M., Weinberg, D.: Overview of Existing Safeguarding Techniques for Automatically Generated Code. Proc. of 2nd Intl. ICSE Workshop on Software Engineering for Automotive Systems (SEAS '05).
[SC+06] Stürmer, I., Conrad, M., Fey, I., Dörr, H.: Experiences with Model and Autocode Reviews in Model-based Software Development. Proc. of 3rd Intl. ICSE Workshop on Software Engineering for Automotive Systems (SEAS '06).
[Stü06] Stürmer, I.: Systematic Testing of Code Generation Tools - A Test Suite-oriented Approach for Safeguarding Model-based Code Generation. Pro BUSINESS, Berlin.
[TMW01] MathWorks Automotive Advisory Board (MAAB): MAAB Controller Style Guidelines for Production Intent, Release V1.00. The MathWorks, Inc., Natick, MA, April 2001, esign/maab.shtml.
[TMW06] Matlab/Simulink/Stateflow (product information). The MathWorks, Inc.
[TN06] ToolNet (product information). Extessy AG.
[WSB01] Wegener, J., Stahmer, H., Baresel, A.: Evolutionary Test Environment for Automatic Structural Testing. Special Issue of Information and Software Technology, Vol. 43.

CONTACT

Ines Fey earned a Diploma degree in Computer Science. Since 1996, she has been a senior researcher at the Software Technology Lab of DaimlerChrysler Group Research E/E and Information Technology.
She is a member of the FAKRA working group on Functional Safety and of the MathWorks Automotive Advisory Board (MAAB).

Ingo Stürmer is the founder and principal consultant of Model Engineering Solutions, a consulting company based in Berlin, Germany, which provides best-practice techniques and methods in the area of model-based code generation for embedded systems. Ingo worked as a PhD student at DaimlerChrysler Research and Technology as well as a researcher at the Fraunhofer Institute for Computer Architecture and Software Technology (FIRST). Ingo is a member of the MISRA Autocode Working Group, the ACM (SIGSOFT), and the GI (German society for computer science).