Quality Assurance Methods for Model-based Development: A Survey and Assessment
Copyright 2007 SAE International

Ines Fey, DaimlerChrysler AG, Berlin, Germany, ines.fey@daimlerchrysler.com
Ingo Stürmer, Model Engineering Solutions, Berlin, Germany, stuermer@model-engineers.com

ABSTRACT

This paper examines state-of-the-art quality assurance (QA) techniques for model-based software development in the automotive domain. Both the aims and the effort required to apply a certain method are discussed for all relevant QA techniques. Since QA techniques can only be used effectively if they are seamlessly integrated within the overall development process and with each other, an appropriate interconnection and order of application is important. Based on our experience from automotive software development projects, we suggest a QA strategy that includes the selection of QA techniques and the sequence of their application.

INTRODUCTION

Nowadays, model-based development is common practice within a wide range of automotive embedded software development projects. Following the model-based approach means focusing on graphical models as the central development artifact to specify, design, and implement software. These models, which are used for specifying and checking the functional behavior of the control function, are usually realized using block-oriented modeling languages such as Matlab/Simulink/Stateflow [TMW06]. Apart from aspects of functional behavior, the model is also used for designing and structuring the software to be developed. The controller software can be either (1) manually developed by programmers on the basis of the controller model or (2) generated automatically by a code generator. This implies that the quality of the software strongly depends on the quality of the model. Therefore, both the model and the derived software must be constructed and analyzed with regard to correctness, robustness, reliability, etc.
Analytical quality assurance (QA) methods such as reviews, testing, or static analyses are often used for this purpose. The appropriate order and combination of such QA methods, for example testing and reviews, is important in order to reduce their scope and the required effort. Apart from efficiency aspects, it is desirable to increase the effectiveness of all QA methods. This paper examines state-of-the-art quality assurance methods for model-based development and discusses the aims, pros and cons, as well as the required effort of the individual QA methods. Furthermore, an applicable sequencing of the QA methods is proposed to increase their effectiveness and overall quality assurance efficiency.

MODEL-BASED DEVELOPMENT

In model-based development (MBD), the seamless use of executable models is characteristic of function and control system design and the subsequent implementation phase. This means that models are used throughout the entire development of the control system, from the preliminaries to the detailed design. In the first design stage, a so-called physical model is created, which is derived from the functional specification (see Fig. 1). The physical model describes the behavior of the control function to be developed, containing transformation algorithms related to continuous input signals as well as incoming events or states. These algorithms are usually described using floating-point arithmetic. Since the physical model is focused on the design of the control function and on checking the functional behavior with regard to the stated requirements, it cannot directly serve as a basis for production code creation. Implementation details, which are the prerequisite for automatic coding, are not considered here. Therefore the physical model needs to be manually revised by implementation experts with respect to the needs of the production code (e.g. function parts are distributed to different tasks).
For example, in order to enhance the model from a realization point of view, the floating-point arithmetic contained in the physical model is adjusted to the fixed-point arithmetic used by the target processor. If fixed-point arithmetic is used, the model must be augmented with the necessary scaling information in order to keep imprecision in the representation of fixed-point numbers as low as possible. Apart from the change in the type of arithmetic, it might be necessary to substitute certain model elements that are not part of the language subset supported by a particular code generator. Furthermore, it is often necessary to restructure the physical model with respect to a planned software design. The result of this evolutionary rework on the physical model is a so-called implementation model. The implementation model can be used as a basis for (A) manual coding by a software developer, or (B) automatic code generation by means of a code generator (Fig. 1). Although both approaches are common practice, the application of a code generator for automatic controller code implementation is coming increasingly to the fore. For instance, [Bur04] compares the V-model for manually written code based on models with that for automatically generated code and points out the advantages of model-based code generation.

Fig. 1: Model-based development: using manual code creation (A) versus model-based implementation (B)

SOURCES OF ERROR IN MODEL-BASED DEVELOPMENT PHASES

The sources of error which can be identified within the different model-based development phases are [SCW05]: (1) design errors, caused by inappropriate design of the (physical) model with respect to the functional requirements or by misunderstandings regarding the semantics of the modeling language; (2) arithmetic errors, due to imprecise representation of the control function's arithmetic within the implementation model or resulting from improper floating-point to fixed-point conversion (e.g.
quantization errors); (3) tool errors, introduced by a tool within the tool chain that contains implementation bugs or that has not been set up correctly (e.g. model simulator, code generator configuration); (4) hardware errors of the development or target environment itself; (5) run-time errors caused on the target hardware by e.g. resource demand mismatches, scheduling errors, etc.; and (6) interface errors between the generated control algorithm on one side and legacy code (e.g. custom code) or wrapper software (driver, operating system, etc.) on the other side. A number of techniques capable of revealing such issues are available as state-of-the-art quality assurance methods. Some specifics regarding the model-based development approach are explained in more detail in the following.

QUALITY ASSURANCE IN MODEL-BASED DEVELOPMENT

As already stated, the quality of the implementation model substantially determines the quality (correctness, efficiency, etc.) of the derived code, no matter whether it is automatically generated or manually created. Therefore a combination of different testing, review, and static analysis techniques should be applied during the development process.

MODEL-BASED TESTING

One of the great advantages of model-based development is the opportunity to start simulation and testing activities more or less right after the project starts, since executable development artifacts are at hand early in the development process. It is possible to simulate the model and the generated code at different stages of the development. This allows the development engineer to focus on specific errors at each development stage. For this purpose, both the model and the code need to be executable. Then, model and code can be stimulated using the same input values (cf. left side of Figure 2).
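As a toy illustration of this principle (all functions, values, and tolerances here are invented for the sketch, not taken from any real project or tool), a floating-point "model" and an integer-only "code" variant of the same first-order low-pass filter can be driven with identical stimuli and their outputs compared within a tolerance that accounts for quantization:

```python
# Illustrative back-to-back comparison: a floating-point "model" (MiL
# stand-in) and a fixed-point "code" variant (SiL stand-in) of the same
# first-order low-pass filter are driven with identical stimuli.

SCALE = 2 ** 10  # assumed fixed-point scaling factor (10 fractional bits)

def model_step(x, state, alpha=0.25):
    """Floating-point reference behavior of one filter step."""
    return state + alpha * (x - state)

def code_step(x_fx, state_fx, alpha_fx=int(0.25 * SCALE)):
    """Fixed-point behavior of the same step: integer arithmetic only."""
    return state_fx + (alpha_fx * (x_fx - state_fx)) // SCALE

def back_to_back(stimuli, tol):
    """Run both implementations on the same stimuli; collect deviations."""
    s_f, s_i, failures = 0.0, 0, []
    for k, x in enumerate(stimuli):
        s_f = model_step(x, s_f)
        s_i = code_step(int(round(x * SCALE)), s_i)
        err = abs(s_f - s_i / SCALE)
        if err > tol:
            failures.append((k, err))
    return failures

stimuli = [0.0, 1.0, 1.0, 1.0, 0.5, 0.0, -1.0, -1.0]
print(back_to_back(stimuli, tol=0.01))  # → [] (both agree within tolerance)
```

Tightening the tolerance far below the quantization step makes small rounding deviations visible, which is exactly why the comparison criterion has to be chosen with the fixed-point scaling in mind.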
Therefore, different ways of simulation (cf. right side of Figure 2) are available to support the safeguarding of the model and the code:

1. Model-in-the-Loop (MiL): The aim of the MiL simulation is to check the validity of a model with respect to the functional requirements. This simulation is executed within the development environment on a host PC. After evaluating the simulation results, they are often reused as a reference (expected values) for subsequent verification steps, either on the model or on the software level. In addition to functional testing, the possible simulation pathways within the model can be measured using model coverage metrics such as decision coverage or MC/DC coverage.

2. Software-in-the-Loop (SiL): The aim of SiL is to analyze fixed-point scaling effects of the code and to detect possible arithmetical problems (e.g. over-/underflow). The software to be tested is usually derived from a model that was used earlier during MiL simulation. SiL tests are executed on a host PC, using the same stimuli that were used for MiL simulation, if available. The execution results should be comparable to the results obtained during MiL. Results can differ, however, due to different handling of numerical instabilities or exception handling by the MATLAB simulation environment and by the executed code. Beyond the functional view, code coverage is measured during SiL tests.

3. Processor-in-the-Loop (PiL): The aim of PiL is to verify the functional behavior of the software in the target processor environment and to measure code efficiency (profiling, memory usage, etc.). PiL tests are executed on experimental hardware, which contains the same processor as the target system but provides additional resources for logging, storing, and exchanging test data and test results. As for SiL tests, the same test stimuli used e.g. for MiL or SiL simulation are usually reused, since the tested code is the same as that used during SiL tests but (cross-)compiled using the project's target compiler. Obviously, it is advisable to measure code coverage on this level as well.

Fig. 2: Model-based Testing

All simulation levels have in common that the question of what constitutes appropriate test stimuli for model and code testing is fundamental. When running the test, the outputs of the different simulation levels can be compared with certain acceptance criteria. Depending on the developed and tested application, this comparison might raise some technical problems that have to be considered during test preparation. Due to quantization errors, the outputs of the MiL simulation, for instance, and the outputs of the SiL or PiL testing are usually not completely identical. As a consequence, sophisticated signal comparison methods have to be applied in these cases. The use of structural testing criteria on the model level (model coverage) as well as on the code level (code coverage) for test quality rating is common practice. Model coverage supplements the known benefits of code coverage, namely controlling the test depth and detecting those parts of a model or code which are not covered by a given test suite. Furthermore, test stimuli generation for model and code coverage can be automated by the use of test vector generators such as Reactis [REA] for model coverage or the Evolutionary Test Tool [WSB01] for code coverage.

MODEL REVIEW

Executable models created early in model evolution can be regarded as executable specifications. They reflect the functional requirements of the control function to be developed in a constructive manner. An agreement to follow certain modeling guidelines is important to increase
the comprehensibility (readability) of the model, to facilitate maintenance, to ease testing, reuse, and extensibility, and to simplify the exchange of models between OEMs and suppliers. Therefore, guidelines and patterns for model design, such as those published by the MathWorks Automotive Advisory Board [TMW01] or the IMMOS project [IMMOS], have to be available. Following the modeling conventions stated in those guidelines, which consider production code generation aspects in particular, supports the translation of the model into safe and efficient code. Review procedures primarily specialized in the verification of requirements specifications, e.g. Fagan inspections [GG93], can be adapted to perform model reviews. For models which already contain implementation details, additional issues have to be taken into account. The aims of model reviews are:

- to check whether or not the textually specified functional requirements are realized in the model,
- to ensure that relevant modeling guidelines are fulfilled (e.g. naming conventions, structuring, modularization),
- to check that a number of selected quality criteria such as portability, maintainability, and testability are met, and
- to check that the implementation model meets the requirements for the generation of safe code (e.g. robustness) and efficient code (e.g. resource optimizations).

To handle the complexity of this task, model reviews are often guided by an in-house set of specific modeling and review guidelines. These are commonly summarized as a review checklist. During the model review, a series of findings with suggestions and comments on individual model parts are gathered and recorded with a reference to the affected model elements. The references to the model elements enable the developer to track which parts of the model may have to be revised.
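A minimal sketch of how such findings might be recorded with a reference to the affected model element is shown below; the class names, the model, and the element paths are all hypothetical and not part of any actual review tool:

```python
# Hypothetical record structure for model review findings, each linked to
# the hierarchical path of the affected Simulink/Stateflow element so the
# developer can track which model parts may still have to be revised.
from dataclasses import dataclass, field

@dataclass
class Finding:
    element_path: str   # path to the affected model element
    guideline: str      # guideline or quality criterion concerned
    comment: str        # reviewer's suggestion
    resolved: bool = False

@dataclass
class ModelReview:
    model_name: str
    findings: list = field(default_factory=list)

    def record(self, element_path, guideline, comment):
        self.findings.append(Finding(element_path, guideline, comment))

    def open_elements(self):
        """Model parts with unresolved findings, i.e. still to be revised."""
        return sorted({f.element_path for f in self.findings if not f.resolved})

review = ModelReview("cruise_control")
review.record("cruise_control/Controller/PI", "naming conventions",
              "block name does not follow prefix convention")
review.record("cruise_control/Limiter", "robustness",
              "saturation limits not derived from requirements")
print(review.open_elements())
# → ['cruise_control/Controller/PI', 'cruise_control/Limiter']
```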
CODE REVIEW

In order to assure that the quality of the code is acceptable, it is common to check the code by using static testing techniques such as reviews. Reviewing manually written code is a widely accepted practice for finding programming errors. In order to do this, the code needs to be well-structured and documented. The basis for code reviews is usually a set of coding guidelines like the MISRA-C guidelines [MIRA]. The effectiveness of a code review is generally very high, but it requires a large effort.

AUTOCODE REVIEW

In contrast to manually written code, automatically generated code will have a low density of faults if the code generator works properly. Errors will tend to be systematic, because the tool should perform the transformation identically each time for a given model and the very same code generator configuration. Autocode peer review can be quite effective (even though it is inefficient), since inappropriate modeling and improper variable scaling, for instance, are easier to detect in the code than in the model (examples are provided in [SC+06]).

STATIC CODE ANALYSIS

Static code analysis can help in the process of reviewing the code, in particular if performed automatically by appropriate tools. Advanced static analysis tools, which are available for languages such as C, apply powerful algorithms to evaluate whether the code follows expected programming rules. They check the syntactic correctness and, to varying degrees, the semantic correctness of the source code, adding a greater degree of rigor to the kind of checks performed by a compiler. These tools will not check whether the code has the functionality the programmer intended, but they will find constructs which might be erroneous or non-portable, as well as constructs that do not behave as expected. The documentation of rule violations produced by a static analysis is usually easier to assess than the actual generated code itself.
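To illustrate the basic idea of rule-based checking, the following toy checker greps C source text for two simple guideline violations. Real analyzers such as lint, QA-C, or Polyspace work on a full parse and data flow model of the code; the rules and the sample code here are invented for the sketch:

```python
import re

# Toy rule-based static check in the spirit of coding guidelines such as
# MISRA-C: each rule is a name plus a regular expression matched per line.

RULES = [
    ("no 'goto'", re.compile(r"\bgoto\b")),
    ("no octal constants", re.compile(r"(?<![\w.])0[0-7]+\b")),
]

def check(source):
    """Return (line number, rule name) pairs for every violation found."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES:
            if pattern.search(line):
                violations.append((lineno, name))
    return violations

c_code = """int f(int x) {
    int mask = 0755;      /* octal constant */
    if (x < 0) goto err;  /* jump */
    return x & mask;
err:
    return -1;
}"""
print(check(c_code))
# → [(2, "no octal constants"), (3, "no 'goto'")]
```

The output of such a checker (rule name plus location) is what makes the violation report easier to assess than reading the raw generated code.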
APPROPRIATE COMBINATION OF QUALITY ASSURANCE METHODS

Due to the effort required by individual QA methods, it is desirable to combine QA methods in order to limit their scope and thus the extent of e.g. a code review. The scope can be reduced if specific aspects can be checked more easily by other methods, preferably at a higher level of abstraction. For example, checking appropriate variable scaling at the code level is cumbersome and inefficient [SC+06]. However, checking variable scaling by means of automated model checks, combined with a back-to-back test between the fixed-point code and the implementation model, can be performed with limited test resources. One reason for this is that the test cases for back-to-back tests are already available, since they have already been determined during the functional testing phase at the model level. In contrast to scaling errors, however, code inefficiencies caused by inappropriate modeling or unsophisticated code generation are easier to detect at the code level by reviews than at the model level (see [SC+06] for examples). The following order of application and combination of QA methods is based on our experiences during automotive software development projects (see [SC+06]). Fig. 3 is used as a reference for the different QA tasks. It is worth noting that the proposed order and combination of the QA methods could reduce the effort of e.g. the model review by 40% compared to traditional approaches.
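The arithmetic behind such a scaling check is straightforward and lends itself to automation. The sketch below, with generic numbers not taken from any particular project, derives the representable range and quantization step of a signed fixed-point format and flags a signal whose required range exceeds it:

```python
# For a signed fixed-point format with word_bits total bits and frac_bits
# fractional bits, range and resolution follow directly from the scaling.

def fixed_point_props(word_bits, frac_bits):
    lsb = 2.0 ** -frac_bits                     # quantization step (LSB)
    max_val = (2 ** (word_bits - 1) - 1) * lsb  # largest representable value
    min_val = -(2 ** (word_bits - 1)) * lsb     # smallest representable value
    return min_val, max_val, lsb

def scaling_ok(signal_min, signal_max, word_bits, frac_bits):
    """True if the signal's required range fits the chosen scaling."""
    lo, hi, _ = fixed_point_props(word_bits, frac_bits)
    return lo <= signal_min and signal_max <= hi

# 16-bit word, 8 fractional bits: range about [-128, 128), step 1/256
print(fixed_point_props(16, 8))          # → (-128.0, 127.99609375, 0.00390625)
print(scaling_ok(-150.0, 150.0, 16, 8))  # → False: overflow risk, rescale
print(scaling_ok(-100.0, 100.0, 16, 8))  # → True
```

An automated model check can apply exactly this kind of test to every scaled signal in the implementation model, which is far cheaper than hunting for scaling problems in the generated code.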
Fig. 3: QA approach for model-based development

(A) COMBINED REQUIREMENTS REVIEW, MODEL REVIEW, AND AUTOMATED MODEL CHECK

In general, the effort of formal inspection techniques such as a requirements review is relatively high and cost-intensive (Table 1, 1st row). The aim of a requirements review is to check that the specification is complete, correct, and consistent. The review itself, however, has to be carried out manually, often lacking appropriate tool support. The effort of a model review is also relatively high (Table 1, 2nd row). We found a combination of requirements and model review, in conjunction with powerful tool support for the reviews themselves as well as for automated checks on the model, useful in reducing the reviewing effort. Since the requirements specification is usually already represented within DOORS [DOORS], it seems reasonable to gather all comments which come up during the different reviews by using DOORS as well. This makes the discussion and processing of the review comments as efficient as possible. Furthermore, our solution included using an instance of the ToolNet [TN06] framework. This enables the creation and management of references (links) from the requirements to certain model elements and vice versa. In addition, it also helps to create links directly from the reviewed model elements to the reviewers' remarks recorded in DOORS. ToolNet is a service-based integration framework which can manage comments, the links to model objects, as well as the degree of realization, thereby facilitating the integrated usage of Simulink/Stateflow and DOORS in our application. Moreover, the scope and effort of the model review can be reduced by carrying out automated model checks (Table 1, 3rd row), which focus on those modeling guidelines that are automatically checkable (e.g. naming conventions). Appropriate tool support is available by means of the Matlab Model Advisor [MMA] or the Mint [MINT] tool.
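A naming-convention check is a typical example of a guideline that is trivially automatable. The convention and the block names below are invented for illustration and do not reflect any actual MAAB rule or tool output:

```python
import re

# Hypothetical naming-convention check of the kind an automated model
# check might perform: block names must be lower_snake_case.

NAME_RULE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def check_names(block_names):
    """Return the blocks whose names violate the assumed convention."""
    return [name for name in block_names if not NAME_RULE.match(name)]

blocks = ["speed_limiter", "PI_Controller", "ramp_in", "Gain2X"]
print(check_names(blocks))  # → ['PI_Controller', 'Gain2X']
```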
The automated check should be carried out right before the model review, which reduces its scope, since the model reviewer can then concentrate on functional aspects and only has to focus on those modeling guidelines that are difficult to check by tools (e.g. ensuring that modeling patterns are used which can be translated into safe and efficient code). Since the model should directly reflect the functional behavior stated in the requirements specification, it is meaningful to carry out a combined model and requirements review. This ensures, for example, that the stated functional requirements can be identified within the model and that specific model parts can be traced back to the requirements. Referencing model elements from textual requirements makes it possible to track the parts of the model that may have to be reworked after the requirements change or if tests linked to a requirement fail. For efficiency reasons, precise assignment of the individual review comments to the model elements concerned is needed, at least at the subsystem level or state machine level.

(B) COMBINED FUNCTIONAL AND STRUCTURAL TESTING ON MODEL LEVEL

Before translating the implementation model into code (be it automated or manual), the model should be rigorously tested by means of functional and structural testing (Table 1, 4th-7th rows). Functional testing relies on the application specification. The test cases are systematically derived on the basis of the requirements specification. Structural testing takes the internal structure of the model into account, with the aim of testing as many model paths (e.g. conditions, transitions, states) as possible. The test cases are calculated based on the selected coverage criteria for model coverage (e.g. decision coverage). A meaningful test strategy should consist of a combination of both approaches. This guarantees that the drawbacks of one approach are compensated by the effectiveness of the other.
For example, functional testing cannot ensure that each program element has been executed, but structural testing can (in principle). In general, the effort of performing functional as well as structural testing is low if well-engineered tool support is available. Tools like MTest are worth naming here: they integrate different test design approaches, automate as many test activities as possible (such as test harness creation or test execution), and facilitate test repetition as well as regression testing. However, the major portion of the test resources is invested in designing error-sensitive test cases for functional and structural testing. Systematic approaches for designing functional tests, supported by tools like CTE or TPT, ease this part of the work, whereas structure-oriented test cases can be derived automatically from the model by using tools like Reactis or ET.

Tab. 1: Survey of MBD quality assurance methods

No. Quality Assurance Method         Automation Effort Effectiveness Tool Support
1   Requirements Review              no         high   middle        ToolNet
2   Model Review                     no         high   middle        ToolNet
3   Automated Model Checks           partly     low    high          Model Advisor, Mint
4   Functional Testing               yes        low    high          MTest
5   Functional Test Case Design      partly     high   high          CTM, CTE, TPT
6   Structural Testing               yes        low    high          SL V&V
7   Structural Test Case Design      partly     high   high          Reactis, ET
8   Static code analysis (syntax)    yes        low    high          lint, QA-C
9   Static code analysis (dataflow)  yes        high   high          Polyspace
10  Autocode Review                  no         high   low           -

(C) COMBINED STATIC CODE ANALYSIS AND AUTOCODE REVIEW

There are different sources of programming errors which demand a systematic check of the generated code. For example, inappropriate modeling, faulty configuration of the code generator, or an erroneous code generator lead to specific errors that may be difficult to detect by other quality assurance measures. A combination of static code analysis and a review of the autocode seems suitable to cover these questions.
An autocode review (ACR) aims at identifying inappropriate modeling, code inefficiencies, custom code integration errors, and code generator bugs. The effort of the ACR is very high (Table 1, last row), with a relatively low effectiveness (see [SC+06]). It is reasonable to reduce its scope by performing a static code analysis in advance (Table 1, 8th-9th rows). The latter should be applied in order to (1) check that the code conforms to generally accepted language standards and coding guidelines (e.g. ANSI C, MISRA) and to (2) find data flow errors (e.g. dead code). The effort to apply (1) is very low, with a high effectiveness. The effort of performing data flow analysis (2) is high, but it is also highly effective. Appropriate tool support is available for both approaches, such as QA-C [QAC], lint [LINT], and the Polyspace verifier [POL]. A follow-up regression test of the implementation model ensures that no additional errors were introduced while revising the implementation model in response to the findings in the autocode.

(D) BACK-TO-BACK TEST OF IMPLEMENTATION MODEL AND AUTOCODE

Finally, a back-to-back test needs to be performed. In doing so, the execution behavior of the fixed-point autocode (Software-in-the-Loop or Processor-in-the-Loop simulation) is compared to the simulation behavior of the model (Model-in-the-Loop simulation). This ensures that the code really reflects the functional behavior of the model. Results can differ, however, due to different handling of numerical instabilities or exception handling by the MATLAB simulation environment and the executed code [SCW05]. Furthermore, the structural discrepancies between the model and the autocode must be compared. This requires additional test cases, which could be designed according to the measured structural autocode coverage.

SUMMARY

Model-based development significantly improves the quality of the automotive embedded software development process.
This process is meaningfully supplemented by a variety of tools and QA methods which support the developer and tester. The QA methods discussed in this paper, their automation potential, the required effort for carrying them out, as well as the expected effectiveness and possible tool support are summarized in Table 1. Nevertheless, due to the relatively high effort required to safeguard the model-based development process, it is still desirable to reduce the effort and increase the effectiveness of the applied QA methods. In this paper, a suitable combination of QA methods is suggested. We restricted ourselves to QA methods applied to the development of the embedded controller software. The correctness of the tools, e.g. of the code generator, is outside the scope of this paper. However, extensive surveys of QA methods that can be applied to a code generator are discussed in [Stü06].
The proposed order and focused scope of each method promise a significant effort reduction. Efficiency as well as effectiveness improvements can also be expected, as already shown in recent projects (e.g. [SC+06]). Complementing the quality assurance measures, a collection of Do's and Don'ts of modeling, in the form of guidelines and patterns, prevents a number of already known issues from arising. In order to ensure the efficient management and publishing of such guideline and pattern collections, specific tool support is necessary, such as that presented in [CDF+05]. The latter collection describes typical problems and suggests base patterns that should be used and reused during the development of functions in order to avoid troubleshooting during or after code generation. Further research has to show whether an ideal combination can also allow specific QA methods to be omitted. For example, it is likely that such an ideal combination can make an autocode review (high effort, low effectiveness) superfluous.

ACKNOWLEDGMENTS

The work described here was partially performed as part of the IMMOS project funded by the German Federal Ministry of Education and Research (project ref. 01ISC31D).

REFERENCES

[Bur04] Burnard, A.: Verifying and Validating Automatically Generated Code. Int. Automotive Conference (IAC).
[CDF+05] Conrad, M.; Dörr, H.; Fey, I.; Pohlheim, H.; Stürmer, I.: Guidelines und Reviews in der Modell-basierten Entwicklung von Steuergeräte-Software (in German). In: Simulation und Test in der Funktions- und Softwareentwicklung für die Automobilelektronik. Expert-Verlag, Berlin.
[DOORS] DOORS (product information). Telelogic AG.
[GG93] Gilb, T.; Graham, D.: Software Inspections. Addison-Wesley.
[IMMOS] IMMOS project (project ref. 01ISC31D), German Federal Ministry of Education and Research.
[LINT] Johnson, S.: Lint, a C Program Checker. Computer Science Technical Report 65, Bell Laboratories.
[MINT] MINT (product information). Ricardo plc.
[MIRA] MISRA: Guidelines for the Use of the C Language in Critical Systems. MIRA Ltd., 2004.
[MMA] Matlab Model Advisor (product information). The MathWorks, Inc.
[POL] Polyspace Verifier (product information). Polyspace Technologies.
[QAC] QA-C (product information). QA Systems.
[REA] Reactis (product information). Reactive Systems.
[SCW05] Stürmer, I.; Conrad, M.; Weinberg, D.: Overview of Existing Safeguarding Techniques for Automatically Generated Code. Proc. of 2nd Intl. ICSE Workshop on Software Engineering for Automotive Systems (SEAS 05).
[SC+06] Stürmer, I.; Conrad, M.; Fey, I.; Dörr, H.: Experiences with Model and Autocode Reviews in Model-based Software Development. Proc. of 3rd Intl. ICSE Workshop on Software Engineering for Automotive Systems (SEAS 06).
[Stü06] Stürmer, I.: Systematic Testing of Code Generation Tools - A Test Suite-oriented Approach for Safeguarding Model-based Code Generation. Pro BUSINESS, Berlin.
[TMW01] MathWorks Automotive Advisory Board (MAAB): MAAB Controller Style Guidelines for Production Intent, Release V1.00. The MathWorks, Inc., Natick, MA, April 2001.
[TMW06] Matlab/Simulink/Stateflow (product information). The MathWorks, Inc.
[TN06] ToolNet (product information). Extessy AG.
[WSB01] Wegener, J.; Stahmer, H.; Baresel, A.: Evolutionary Test Environment for Automatic Structural Testing. Special Issue of Information and Software Technology, Vol. 43, 2001.

CONTACT

Ines Fey earned a Diploma degree in Computer Science. Since 1996 she has been a senior researcher at the Software Technology Lab of the DaimlerChrysler Group Research E/E and Information Technology. She is a member of the FAKRA working group on Functional Safety and the MathWorks Automotive Advisory Board (MAAB).
Ingo Stürmer is founder and principal consultant of Model Engineering Solutions, a consulting company based in Berlin (Germany) which provides best-practice techniques and methods in the area of model-based code generation for embedded systems. Ingo worked as a PhD student at DaimlerChrysler Research and Technology as well as a researcher at the Fraunhofer Institute for Computer Architecture and Software Technology (FIRST). Ingo is a member of the MISRA Autocode Working Group, the ACM (SIGSOFT), and the GI (German society for computer science).
More informationSQMB '11 Automated Model Quality Rating of Embedded Systems
SQMB '11 Automated Model Quality Rating of Embedded Systems Jan Scheible (jan.scheible@daimler.com) Daimler AG - Group Research and Advanced Engineering Hartmut Pohlheim (pohlheim@model-engineers.com)
More informationQualifying Software Tools According to ISO 26262
Qualifying Software Tools According to ISO 26262 Mirko Conrad 1, Patrick Munier 2, Frank Rauch 3 1 The MathWorks, Inc., Natick, MA, USA mirko.conrad@mathworks.com 2 The MathWorks, SAS, Grenoble, France
More informationCertification Authorities Software Team (CAST) Position Paper CAST-26
Certification Authorities Software Team (CAST) Position Paper CAST-26 VERIFICATION INDEPENDENCE COMPLETED January 2006 (Rev 0) NOTE: This position paper has been coordinated among the software specialists
More informationDie wichtigsten Use Cases für MISRA, HIS, SQO, IEC, ISO und Co. - Warum Polyspace DIE Embedded Code-Verifikationslösung ist.
Die wichtigsten Use Cases für MISRA, HIS, SQO, IEC, ISO und Co. - Warum Polyspace DIE Embedded Code-Verifikationslösung ist. Christian Guß Application Engineer The MathWorks GmbH 2015 The MathWorks, Inc.
More informationContinuous Integration Build-Test-Delivery (CI-BTD) Framework in compliance with ISO26262
Continuous Integration Build-Test-Delivery (CI-BTD) Framework in compliance with ISO26262 Manish Patil Sathishkumar T September 2015 1 Contents Abstract... 3 1. Introduction... 3 2. Industry Challenges...
More informationAUTOSAR Seminar WS2008/2009 - Assignment: Simulation of Automotive Systems in the Context of AUTOSAR
AUTOSAR Seminar WS2008/2009 - Assignment: Simulation of Automotive Systems in the Context of AUTOSAR Krasnogolowy, Alexander March 31, 2009 Hasso-Plattner-Institut for IT-Systems Engineering University
More informationModel Based Software Development for DDG 1000 Advanced Gun System
BAE Systems Land & Armaments Model Based Software Development for DDG 1000 Advanced Gun System Dirk Jungquist BAE Systems Land & Armaments 2012 Distribution Statement A: Approved for public release; distribution
More informationSoftware Technology in an Automotive Company - Major Challenges
Software Technology in an Automotive Company - Major Challenges Klaus Grimm DaimlerChrysler AG, Research and Technology Alt-Moabit 96A, 10559 Berlin, Germany klaus, grimm @ daimlerchrysler.com Abstract
More informationInstrumentation-Based Verification for Medical-Device Software
Instrumentation-Based Verification for Medical-Device Software Rance Cleaveland Professor of Computer Science, University of Maryland and Executive & Scientific Director, Fraunhofer USA Center for Experimental
More informationINTEGRATION OF THE CODE GENERATION APPROACH IN THE MODEL-BASED DEVELOPMENT PROCESS BY MEANS OF TOOL CERTIFICATION
Journal of Integrated Design and Process Science, Vol. 8 (2), pp.-, 2004 INTEGRATION OF THE CODE GENERATION APPROACH IN THE MODEL-BASED DEVELOPMENT PROCESS BY MEANS OF TOOL CERTIFICATION Ingo Stürmer Department
More informationStatic Analysis of Dynamic Properties - Automatic Program Verification to Prove the Absence of Dynamic Runtime Errors
Static Analysis of Dynamic Properties - Automatic Program Verification to Prove the Absence of Dynamic Runtime Errors Klaus Wissing PolySpace Technologies GmbH Argelsrieder Feld 22 82234 Wessling-Oberpfaffenhofen
More informationIntroduction of ISO/DIS 26262 (ISO 26262) Parts of ISO 26262 ASIL Levels Part 6 : Product Development Software Level
ISO 26262 the Emerging Automotive Safety Standard Agenda Introduction of ISO/DIS 26262 (ISO 26262) Parts of ISO 26262 ASIL Levels Part 4 : Product Development System Level Part 6 : Product Development
More informationIntroduction to Automated Testing
Introduction to Automated Testing What is Software testing? Examination of a software unit, several integrated software units or an entire software package by running it. execution based on test cases
More informationWiederverwendung von Testfällen bei der modellbasierten SW-Entwicklung
Wiederverwendung von Testfällen bei der modellbasierten SW-Entwicklung DGLR Workshop "Verifikation in der modellbasierten Software-Entwicklung" Garching, 04 October 2011 Dipl.-Ing. Peter Hermle, Key Account
More informationPrüfung von Traceability Links -Workshop
1 Prüfung von Traceability Links -Workshop Darmstadt, 7.12.2007 Agenda des Workshops 2 10.00 Begrüßung und Vorstellung der Teilnehmer 10.30 Erörterung der Entwicklungsmethoden 11.30 Mittagspause 12.15
More informationSoftware Development with Real- Time Workshop Embedded Coder Nigel Holliday Thales Missile Electronics. Missile Electronics
Software Development with Real- Time Workshop Embedded Coder Nigel Holliday Thales 2 Contents Who are we, where are we, what do we do Why do we want to use Model-Based Design Our Approach to Model-Based
More informationTESSY Automated dynamic module/unit and. CTE Classification Tree Editor. integration testing of embedded applications. for test case specifications
TESSY Automated dynamic module/unit and integration testing of embedded applications CTE Classification Tree Editor for test case specifications Automated module/unit testing and debugging at its best
More informationHitex Germany. White Paper. Unit Test of Embedded Software
Hitex Germany Head Quarters Greschbachstr. 12 76229 Karlsruhe Germany +049-721-9628-0 Fax +049-721-9628-149 E-mail: Sales@hitex.de WEB: www.hitex.de Hitex UK Warwick University Science Park Coventry CV47EZ
More informationTo introduce software process models To describe three generic process models and when they may be used
Software Processes Objectives To introduce software process models To describe three generic process models and when they may be used To describe outline process models for requirements engineering, software
More informationIdentification and Analysis of Combined Quality Assurance Approaches
Master Thesis Software Engineering Thesis no: MSE-2010:33 November 2010 Identification and Analysis of Combined Quality Assurance Approaches Vi Tran Ngoc Nha School of Computing Blekinge Institute of Technology
More informationIntegrated Model-based Software Development and Testing with CSD and MTest
Integrated Model-based Software Development and Testing with CSD and Andreas Rau / Mirko Conrad / Helmut Keller / Ines Fey / Christian Dziobek DaimlerChrysler AG, Germany fa-stz-andreas.rau Mirko.Conrad
More informationDevelopment of AUTOSAR Software Components within Model-Based Design
2008-01-0383 Development of AUTOSAR Software Components within Model-Based Design Copyright 2008 The MathWorks, Inc. Guido Sandmann Automotive Marketing Manager, EMEA The MathWorks Richard Thompson Senior
More informationModel-based Testing of Automotive Systems
Model-based Testing of Automotive Systems Eckard Bringmann and Andreas Krämer ICST 08 Presented by Julia Rubin on November 21, 2012 Multidisciplinary Business 2 Supply Chain of Components 3 Innovation
More informationComponent-based Development Process and Component Lifecycle Ivica Crnkovic 1, Stig Larsson 2, Michel Chaudron 3
Component-based Development Process and Component Lifecycle Ivica Crnkovic 1, Stig Larsson 2, Michel Chaudron 3 1 Mälardalen University, Västerås, Sweden, ivica.crnkovic@mdh.se 2 ABB Corporate Research,
More informationWhat is the benefit of a model-based design of embedded software systems. in the car industry?
What is the benefit of a model-based design of embedded software systems Manfred Broy Technical University Munich, Germany Sascha Kirstan Altran Technologies, Germany Helmut Krcmar Technical University
More informationSoftware Engineering for LabVIEW Applications. Elijah Kerry LabVIEW Product Manager
Software Engineering for LabVIEW Applications Elijah Kerry LabVIEW Product Manager 1 Ensuring Software Quality and Reliability Goals 1. Deliver a working product 2. Prove it works right 3. Mitigate risk
More informationOpportunities and Challenges in Software Engineering for the Next Generation Automotive
Opportunities and Challenges in Software Engineering for the Next Generation Automotive Cyber Physical Systems Electro Mobility Technische Universität München Institut für Informatik Cyber Physical Systems
More informationSoftware testing. Objectives
Software testing cmsc435-1 Objectives To discuss the distinctions between validation testing and defect testing To describe the principles of system and component testing To describe strategies for generating
More informationBest practices for developing DO-178 compliant software using Model-Based Design
Best practices for developing DO-178 compliant software using Model-Based Design Raymond G. Estrada, Jr. 1 The MathWorks, Torrance, CA Eric Dillaber. 2 The MathWorks, Natick, MA Gen Sasaki 3 The MathWorks,
More informationQuality Assurance of Models for Autocoding
Quality Assurance of Models for Autocoding Ann Cass, Pierre Castori S YNS PACE AG Hardstrasse 11 CH - 4052 Basel ac@synspace.com, pc@synspace.com Abstract: Automatic Code Generation is an emerging technology
More informationONLINE EXERCISE SYSTEM A Web-Based Tool for Administration and Automatic Correction of Exercises
ONLINE EXERCISE SYSTEM A Web-Based Tool for Administration and Automatic Correction of Exercises Daniel Baudisch, Manuel Gesell and Klaus Schneider Embedded Systems Group, University of Kaiserslautern,
More informationSoftware Quality Assurance Software Inspections and Reviews
Software Quality Assurance Software Inspections and Reviews Contents Definitions Why software inspections? Requirements for inspections Inspection team Inspection phases 2 Definitions Manual quality assurance
More informationQuality Management. Lecture 12 Software quality management
Quality Management Lecture 12 Software quality management doc.dr.sc. Marko Jurčević prof.dr.sc. Roman Malarić University of Zagreb Faculty of Electrical Engineering and Computing Department of Fundamentals
More informationA Knowledge-based Product Derivation Process and some Ideas how to Integrate Product Development
A Knowledge-based Product Derivation Process and some Ideas how to Integrate Product Development (Position paper) Lothar Hotz and Andreas Günter HITeC c/o Fachbereich Informatik Universität Hamburg Hamburg,
More informationIngo Stürmer, Dietrich Travkin. Automated Transformation of MATLAB Simulink and Stateflow Models
Ingo Stürmer, Dietrich Travkin Automated Transformation of MATLAB Simulink and Stateflow Models Ingo Stürmer Model Engineering Solutions Dietrich Travkin University of Paderborn Object-oriented Modeling
More informationComplexity- and Performance Analysis of Different Controller Implementations on a Soft PLC
Complexity- and Performance Analysis of Different Controller Implementations on a Soft PLC Robert Feldmann Technion Israel Institute of Technology TUM Technical University Munich rfeld3@gmail.com Abstract.
More informationValidating Diagnostics in Early Development Stages
Validating Diagnostics in Early Development Stages Explanations by means of an Example of an automatic exterior lighting control Dipl.-Ing. Valentin Adam, Daimler AG Dipl.-Ing. Matthias Kohlweyer, Daimler
More informationAdvanced TTCN-3 Test Suite validation with Titan
Proceedings of the 9 th International Conference on Applied Informatics Eger, Hungary, January 29 February 1, 2014. Vol. 2. pp. 273 281 doi: 10.14794/ICAI.9.2014.2.273 Advanced TTCN-3 Test Suite validation
More informationGEDAE TM - A Graphical Programming and Autocode Generation Tool for Signal Processor Applications
GEDAE TM - A Graphical Programming and Autocode Generation Tool for Signal Processor Applications Harris Z. Zebrowitz Lockheed Martin Advanced Technology Laboratories 1 Federal Street Camden, NJ 08102
More informationTest Case Design by Means of the CTE XL
Test Case Design by Means of the CTE XL Eckard Lehmann and Joachim Wegener DaimlerChrysler AG Research and Technology Alt-Moabit 96 a D-10559 Berlin Eckard.Lehmann@daimlerchrysler.com Joachim.Wegener@daimlerchrysler.com
More informationUML-based Test Generation and Execution
UML-based Test Generation and Execution Jean Hartmann, Marlon Vieira, Herb Foster, Axel Ruder Siemens Corporate Research, Inc. 755 College Road East Princeton NJ 08540, USA jeanhartmann@siemens.com ABSTRACT
More informationSoftware Engineering. Software Processes. Based on Software Engineering, 7 th Edition by Ian Sommerville
Software Engineering Software Processes Based on Software Engineering, 7 th Edition by Ian Sommerville Objectives To introduce software process models To describe three generic process models and when
More informationCertification of a Scade 6 compiler
Certification of a Scade 6 compiler F-X Fornari Esterel Technologies 1 Introduction Topic : What does mean developping a certified software? In particular, using embedded sofware development rules! What
More informationSoftware Engineering Reference Framework
Software Engineering Reference Framework Michel Chaudron, Jan Friso Groote, Kees van Hee, Kees Hemerik, Lou Somers, Tom Verhoeff. Department of Mathematics and Computer Science Eindhoven University of
More informationSAFE SOFTWARE FOR SPACE APPLICATIONS: BUILDING ON THE DO-178 EXPERIENCE. Cheryl A. Dorsey Digital Flight / Solutions cadorsey@df-solutions.
SAFE SOFTWARE FOR SPACE APPLICATIONS: BUILDING ON THE DO-178 EXPERIENCE Cheryl A. Dorsey Digital Flight / Solutions cadorsey@df-solutions.com DIGITAL FLIGHT / SOLUTIONS Presentation Outline DO-178 Overview
More informationAutomotive Software Engineering
Automotive Software Engineering List of Chapters: 1. Introduction and Overview 1.1 The Driver Vehicle Environment System 1.1.1 Design and Method of Operation of Vehicle Electronic 1.1.2 Electronic of the
More informationPlug. & Play. Various ECUs tested by automated sequences. dspace Magazine 3/2009 dspace GmbH, Paderborn, Germany info@dspace.com www.dspace.
page 34 Delphi Diesel systems Plug & Play Various ECUs tested by automated sequences page 35 Delphi Diesel Systems has successfully developed automated integration and feature tests for various ECUs for
More informationACHIEVING FUNCTIONAL SAFETY OF AUDI DYNAMIC STEERING USING A STRUCTURED DEVELOPMENT PROCESS
ACHIEVING FUNCTIONAL SAFETY OF AUDI DYNAMIC STEERING USING A STRUCTURED DEVELOPMENT PROCESS Dr Juergen Schuller* 1, Marnix Lannoije* 2, Dr Michael Sagefka* 3, Wolfgang Dick* 4, Dr Ralf Schwarz* 5 * 1 Audi
More informationDeclaration of Conformity 21 CFR Part 11 SIMATIC WinCC flexible 2007
Declaration of Conformity 21 CFR Part 11 SIMATIC WinCC flexible 2007 SIEMENS AG Industry Sector Industry Automation D-76181 Karlsruhe, Federal Republic of Germany E-mail: pharma.aud@siemens.com Fax: +49
More informationCertifying Energy Efficiency of Android Applications
Proceedings of the 28th EnviroInfo 2014 Conference, Oldenburg, Germany September 10-12, 2014 Certifying Energy Efficiency of Android Applications Johannes Meier 1, Marie-Christin Ostendorp 1, Jan Jelschen
More informationAutomatic Test Data Generation for TTCN-3 using CTE
Automatic Test Data Generation for TTCN-3 using CTE Zhen Ru Dai, Peter H. Deussen, Maik Busch, Laurette Pianta Lacmene, Titus Ngwangwen FraunhoferInstitute for Open Communication Systems (FOKUS) Kaiserin-Augusta-Allee
More informationCaterpillar Automatic Code Generation
SAE TECHNICAL PAPER SERIES 2004-01-0894 Caterpillar Automatic Code Generation Jeffrey M. Thate and Larry E. Kendrick Caterpillar, Inc. Siva Nadarajah The MathWorks, Inc. Reprinted From: Electronic Engine
More informationRequirements engineering
Learning Unit 2 Requirements engineering Contents Introduction............................................... 21 2.1 Important concepts........................................ 21 2.1.1 Stakeholders and
More informationStatic Analysis for Software Verification. Leon Moonen
Static Analysis for Software Verification Leon Moonen Today s topics Software inspection it s relation to testing benefits and drawbacks Static (program) analysis potential benefits limitations and their
More informationA Framework for Software Product Line Engineering
Günter Böckle Klaus Pohl Frank van der Linden 2 A Framework for Software Product Line Engineering In this chapter you will learn: o The principles of software product line subsumed by our software product
More informationManage Software Development in LabVIEW with Professional Tools
Manage Software Development in LabVIEW with Professional Tools Introduction For many years, National Instruments LabVIEW software has been known as an easy-to-use development tool for building data acquisition
More informationTowards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder
Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder Matt Department of Computer Science and Engineering University of Minnesota staats@cs.umn.edu Abstract We present
More informationISO 26262 Exemplary Tool Classification of Model-Based Design Tools
ISO 26262 Exemplary Classification of Model-Based Design s Mirko Conrad 1, Ines Fey 2 1 The MathWorks, Inc., Natick, MA, USA mirko.conrad@mathworks.com 2 samoconsult GmbH Berlin, Germany ines.fey@samoconsult.de
More informationAn Agent-Based Concept for Problem Management Systems to Enhance Reliability
An Agent-Based Concept for Problem Management Systems to Enhance Reliability H. Wang, N. Jazdi, P. Goehner A defective component in an industrial automation system affects only a limited number of sub
More informationStandard Glossary of Terms Used in Software Testing. Version 3.01
Standard Glossary of Terms Used in Software Testing Version 3.01 Terms Used in the Expert Level Test Automation - Engineer Syllabus International Software Testing Qualifications Board Copyright International
More informationChap 1. Software Quality Management
Chap 1. Software Quality Management Part 1.1 Quality Assurance and Standards Part 1.2 Software Review and Inspection Part 1.3 Software Measurement and Metrics 1 Part 1.1 Quality Assurance and Standards
More informationMaking model-based development a reality: The development of NEC Electronics' automotive system development environment in conjunction with MATLAB
The V850 Integrated Development Environment in Conjunction with MAT...iles and More / Web Magazine -Innovation Channel- / NEC Electronics Volume 53 (Feb 22, 2006) The V850 Integrated Development Environment
More informationTesting of safety-critical software some principles
1(60) Testing of safety-critical software some principles Emerging Trends in Software Testing: autumn 2012 Matti Vuori, Tampere University of Technology 27.11.2012 Contents 1/4 Topics of this lecture 6
More informationEmbedded Software Development with MPS
Embedded Software Development with MPS Markus Voelter independent/itemis The Limitations of C and Modeling Tools Embedded software is usually implemented in C. The language is relatively close to the hardware,
More informationComprehensive Static Analysis Using Polyspace Products. A Solution to Today s Embedded Software Verification Challenges WHITE PAPER
Comprehensive Static Analysis Using Polyspace Products A Solution to Today s Embedded Software Verification Challenges WHITE PAPER Introduction Verification of embedded software is a difficult task, made
More informationInternational Journal of Advance Research in Computer Science and Management Studies
Volume 2, Issue 12, December 2014 ISSN: 2321 7782 (Online) International Journal of Advance Research in Computer Science and Management Studies Research Article / Survey Paper / Case Study Available online
More information55. IWK Internationales Wissenschaftliches Kolloquium International Scientific Colloquium
PROCEEDINGS 55. IWK Internationales Wissenschaftliches Kolloquium International Scientific Colloquium 13-17 September 2010 Crossing Borders within the ABC Automation, Biomedical Engineering and Computer
More informationVirtual Integration and Consistent Testing of Advanced Driver Assistance Functions
Stuttgart, Testing Expo 2012 Virtual Integration and Consistent Testing of Advanced Driver Assistance Functions 2012-06-12 Jürgen Schüling Agenda Introduction and Motivation State of the Art Hardware in
More informationThe Role of Information Technology Studies in Software Product Quality Improvement
The Role of Information Technology Studies in Software Product Quality Improvement RUDITE CEVERE, Dr.sc.comp., Professor Faculty of Information Technologies SANDRA SPROGE, Dr.sc.ing., Head of Department
More informationModel-based Quality Assurance of Automotive Software
Model-based Quality Assurance of Automotive Software Jan Jürjens 1, Daniel Reiss 2, David Trachtenherz 3 1 Open University (GB) and Microsoft Research (Cambridge) 2 Elektrobit (Germany) 3 TU Munich (Germany)
More informationEmulated Digital Control System Validation in Nuclear Power Plant Training Simulators
Digital Control System Validation in Nuclear Power Training s Gregory W. Silvaggio Westinghouse Electric Company LLC silvaggw@westinghouse.com Keywords: Validation, nuclear, digital control systems Abstract
More informationIntroduction and Overview
Introduction and Overview Definitions. The general design process. A context for design: the waterfall model; reviews and documents. Some size factors. Quality and productivity factors. Material from:
More informationSoftware Development for Multiple OEMs Using Tool Configured Middleware for CAN Communication
01PC-422 Software Development for Multiple OEMs Using Tool Configured Middleware for CAN Communication Pascal Jost IAS, University of Stuttgart, Germany Stephan Hoffmann Vector CANtech Inc., USA Copyright
More informationInternational Journal of Advanced Engineering Research and Science (IJAERS) Vol-2, Issue-11, Nov- 2015] ISSN: 2349-6495
International Journal of Advanced Engineering Research and Science (IJAERS) Vol-2, Issue-11, Nov- 2015] Survey on Automation Testing Tools for Mobile Applications Dr.S.Gunasekaran 1, V. Bargavi 2 1 Department
More informationPart I. Introduction
Part I. Introduction In the development of modern vehicles, the infotainment system [54] belongs to the innovative area. In comparison to the conventional areas such as the motor, body construction and
More informationPeter Mileff PhD SOFTWARE ENGINEERING. The Basics of Software Engineering. University of Miskolc Department of Information Technology
Peter Mileff PhD SOFTWARE ENGINEERING The Basics of Software Engineering University of Miskolc Department of Information Technology Introduction Péter Mileff - Department of Information Engineering Room
More informationDIFFERENT PRAGMATIC APPROACHES OF TESTING THE CLOUD APPLICATION USING SIMULATORS/EMULATORS
DIFFERENT PRAGMATIC APPROACHES OF TESTING THE CLOUD APPLICATION USING SIMULATORS/EMULATORS Ms. Vaishali Jawale Assistant Professor ASM s Institute of Computer Studies Pimpri - Pune, Abstract: Computer
More informationSoftware Development Principles Applied to Graphical Model Development
Software Development Principles Applied to Graphical Model Development Paul A. Barnard * The MathWorks, Natick, MA 01760, USA The four fundamental principles of good software design communicate clearly,
More informationJOURNAL OF OBJECT TECHNOLOGY
JOURNAL OF OBJECT TECHNOLOGY Online at http://www.jot.fm. Published by ETH Zurich, Chair of Software Engineering JOT, 2006 Vol. 5, No. 6, July - August 2006 On Assuring Software Quality and Curbing Software
More informationModule 10. Coding and Testing. Version 2 CSE IIT, Kharagpur
Module 10 Coding and Testing Lesson 23 Code Review Specific Instructional Objectives At the end of this lesson the student would be able to: Identify the necessity of coding standards. Differentiate between
More informationChapter 8 Software Testing
Chapter 8 Software Testing Summary 1 Topics covered Development testing Test-driven development Release testing User testing 2 Program testing Testing is intended to show that a program does what it is
More informationHow CMMI contributes to Software Testing
How CMMI contributes to Software Testing Dr. Uwe Hehn method park Software AG Uwe.Hehn@methodpark.de Contents 1. Motivation for S/W Quality Models 2. Why Testers should have some knowledge of Quality Models
More informationClarifying a vision on certification of MDA tools
SCIENTIFIC PAPERS, UNIVERSITY OF LATVIA, 2010. Vol. 757 COMPUTER SCIENCE AND INFORMATION TECHNOLOGIES 23 29 P. Clarifying a vision on certification of MDA tools Antons Cernickins Riga Technical University,
More informationTEACHING STATISTICS THROUGH DATA ANALYSIS
TEACHING STATISTICS THROUGH DATA ANALYSIS Thomas Piazza Survey Research Center and Department of Sociology University of California Berkeley, California The question of how best to teach statistics is
More informationSERENITY Pattern-based Software Development Life-Cycle
SERENITY Pattern-based Software Development Life-Cycle Francisco Sanchez-Cid, Antonio Maña Computer Science Department University of Malaga. Spain {cid, amg}@lcc.uma.es Abstract Most of current methodologies
More information