Guide for Independent Software Verification and Validation

prepared by/préparé par: ESA property.
reference/référence: ESA ISVV Guide
issue/édition: 2
revision/révision: 0
date of issue/date d'édition: December 29, 2008
status/état: -
Document type/type de document: Technical Note
Distribution/distribution: ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands
Tel. (31) 71 5656565 - Fax (31) 71 5656040
Issue 2 Revision 0 Page ii

Disclaimer: This ISVV Guide is the second issue of the document. The ISVV Guide is provided as is: the European Space Agency gives no warranty or guarantee whatsoever as to its completeness, adequacy or suitability, and shall not be held liable for any direct, indirect or consequential damages. Use of the ISVV Guide by readers/users is made fully at the latter's own risk. Reproduction of all or part of this ISVV Guide is authorised provided the source is referenced.

Feedback: Readers/users of this guide are invited to provide comments to ESA on any inconsistencies in the guide or suggestions for its improvement. Comments shall be sent by e-mail to: INFO-ISVV-GUIDE@ESA.INT
APPROVAL
Title/titre: ESA Guide for Independent Software Verification and Validation
Issue/issue: 2  Revision/révision: 0
Authors/auteurs: Maria Hernek, Sabine Krueger  Date/date: December 29, 2008
Approved by/approuvé par: Kjeld Hjortnæs

CHANGE LOG
Issue: 2  Revision: 0  Date: December 29, 2008

CHANGE RECORD
Issue: 2  Revision: 0
Reason for change/raison du changement: Overall improvement of document to optimise the information and introduce results from R&D and ISVV process optimisation activities.
Page(s)/paragraph(s): ALL
contents:
1.0 Introduction...1
  1.1 Background and Motivation...1
  1.2 Purpose...1
  1.3 Outline...1
2.0 What is Independent Software Verification and Validation?...2
  2.1 Objectives of ISVV...2
  2.2 ISVV Process Overview...3
  2.3 Definition of Scope and Budgeting...7
  2.4 Non-Disclosure and Security...8
  2.5 Roles and Responsibilities...8
    2.5.1 ISVV supplier...8
    2.5.2 ISVV Customer...9
    2.5.3 Other roles...10
  2.6 Types of Independence...10
3.0 ISVV Process Management...12
  3.1 Activity Overview...12
  3.2 Activity Inputs and Prerequisites...13
    3.2.1 ISVV level definition...13
    3.2.2 Documents and Code from Software Development...13
  3.3 Activity Outputs...13
  3.4 Activity Management...13
    3.4.1 Initiating and Terminating Events...13
    3.4.2 Completion Criteria...14
    3.4.3 Relations to other Activities...14
  3.5 Task Descriptions...14
    3.5.1 ISVV Process Planning...14
    3.5.2 ISVV Process Execution, Monitoring and Control...15
  3.6 Methods...16
4.0 ISVV level definition...17
  4.1 Activity Overview...17
  4.2 Activity Inputs and Prerequisites...20
  4.3 Activity Outputs...20
  4.4 Activity Management...20
    4.4.1 Initiating and Terminating Events...20
    4.4.2 Completion Criteria...20
    4.4.3 Relations to Other Activities...20
  4.5 Task Descriptions...21
    4.5.1 System Level ISVV level definition...21
    4.5.2 Software Technical Specification ISVV level definition...21
    4.5.3 Software Design ISVV level definition...22
    4.5.4 Software Code ISVV level definition...23
5.0 Technical Specification Analysis...25
  5.1 Activity Overview...25
    5.1.1 Software requirements verification...26
  5.2 Activity Inputs and Prerequisites...27
  5.3 Activity Outputs...27
  5.4 Activity Management...27
    5.4.1 Initiating and Terminating Events...27
    5.4.2 Completion Criteria...27
    5.4.3 Relations to other Activities...27
  5.5 Task Descriptions...28
    5.5.1 Software Requirements Verification...28
6.0 Design Analysis...30
  6.1 Activity Overview...30
    6.1.1 Architectural Design Independent Verification...31
    6.1.2 Software Detailed Design Independent Verification...32
    6.1.3 Software User Manual Analysis...33
  6.2 Activity Inputs and Prerequisites...33
  6.3 Activity Outputs...34
  6.4 Activity Management...34
    6.4.1 Initiating and Terminating Events...34
    6.4.2 Completion Criteria...34
    6.4.3 Relations to other Activities...34
  6.5 Tasks Description...34
    6.5.1 Architectural Design Verification...34
    6.5.2 Detailed Design Verification...38
    6.5.3 Software User Manual Verification...43
7.0 Code Analysis...44
  7.1 Activity Overview...44
    7.1.1 Source Code Verification...45
    7.1.2 Integration and Unit Test Specification and Data Verification...46
  7.2 Activity Inputs and Prerequisites...47
  7.3 Activity Outputs...47
  7.4 Activity Management...47
    7.4.1 Initiating and Terminating Events...47
    7.4.2 Completion Criteria...47
    7.4.3 Relations to other Activities...48
  7.5 Tasks Description...49
    7.5.1 Source Code Verification...49
    7.5.2 Integration Test Specification and Test Data Verification...51
    7.5.3 Unit Test Procedures and Test Data Verification...53
8.0 Independent Validation...55
  8.1 Activity Overview...55
    8.1.1 Identification of Test Cases...56
    8.1.2 Construction of Test Procedures...58
    8.1.3 Execution of Test Procedures...59
  8.2 Activity Input and Prerequisites...60
  8.3 Activity Outputs...60
  8.4 Process Management...61
    8.4.1 Initiating and Terminating Events...61
    8.4.2 Completion Criteria...61
    8.4.3 Relations to other Activities...61
  8.5 Task Descriptions...61
    8.5.1 Identification of Test Cases...61
    8.5.2 Construction of Test Procedures...62
    8.5.3 Execution of Test Procedures...63
Annex A. Definitions and acronyms...65
  A.1. Definitions...65
  A.2. Acronyms...68
Annex B. ISVV activity outputs...70
  B.1. ISVV Plan Outline...70
  B.2. Requests for Clarification...73
  B.3. ISVV Report (with ISVV Findings)...74
  B.4. Progress Reports...74
  B.5. ISVV Findings Resolution Report...74
Annex C. Review Item Discrepancy Form Example...75
Annex D. Summary of ISVV tasks, activities and methods and techniques...77
Annex E. ISVV Levels and Software Criticality Categories...83
  E.1. ISVV Level definition overview...83
  E.2. Error Potential Questionnaire...85
  E.3. Procedures for Performing Simplified FMECA...86
Annex F. Methods...88
  F.1. Formal Methods...88
  F.2. Inspection...88
  F.3. Modelling...89
  F.4. Data Flow Analysis...91
  F.5. Control Flow Analysis...91
  F.6. Real-Time Properties Verification...91
  F.7. Reverse Engineering...93
  F.8. Simulation (Design execution)...93
  F.9. Software Failure Modes, Effects and Criticality Analysis (SFMECA)...93
  F.10. Static Code Analysis...94
  F.11. Traceability Analysis...94
Annex G. Checklists...95
  G.1. Requirements Review Checklists...95
  G.2. Architectural Design Review Checklist...98
  G.3. Detailed Design Review Checklist...101
  G.4. Software User Manual Review Checklist...105
  G.5. Code Inspection Checklist...105
  G.6. Unit and Integration Test Review Checklist...108
  G.7. Validation Checklist...109
  G.8. Model conformance with applicable standards...109
Annex H. Software Validation Facility...110
Annex I. References...111
figures:
Figure 1 ISVV Process Activities...3
Figure 2 ISVV tasks in parallel to SW supplier review milestones...5
Figure 3 ISVV tasks after SW supplier's review milestones...6
Figure 4 ISVV process management in context...12
Figure 5 ISVV Process Management Tasks...12
Figure 6 ISVV level definition in context...17
Figure 7 ISVV level definition tasks...18
Figure 8 Technical Specification Analysis in context...25
Figure 9 Technical Specification Analysis activity...26
Figure 10 Software Requirements Independent Verification...26
Figure 11 Design Analysis in context...30
Figure 12 Software design analysis...31
Figure 13 Software Architectural Design Independent Verification...32
Figure 14 Software Detailed Design Independent Verification...33
Figure 15 Software User Manual Independent Verification...33
Figure 16 Code Analysis in context...44
Figure 17 Code Analysis...45
Figure 18 Software Source Code Independent Verification...46
Figure 19 Integration/Unit Test Procedures and Test Data Verification...47
Figure 20 Independent Validation in context...55
Figure 21 Independent Software Validation...55
Figure 22 Subtasks to "Identification of Test Cases"...56
Figure 23 Subtasks to "Construction of Test Procedures"...58
Figure 24 Subtasks to "Execution of Test Procedures"...59

tables:
Table 1: Competence requirements for ISVV personnel...9
Table 2: ISVV levels...19
Table 3: Dependency between ISVV level, input and analysis...57
Table 4: Test case steps...58
Table 5: RID Form...75
Table 6: RID Problem Type Categories...76
Table 7: RID Severity Classes...76
Table 8: ISVV levels...83
Table 9: Matrix to derive ISVV level from Software Criticality Category and Error Potential...84
Table 10: Error Potential Questionnaire...85
Table 11: Mapping from error potential score to error potential level...85
Table 12: UML 2 diagram types...90
Foreword

This ISVV Guide is the result of work carried out for the European Space Agency with the contribution of different companies. Most of the material presented in this guide is based, directly or indirectly, on a variety of sources (ESA, European space industry experience and projects, and the technical literature). These sources, as well as the contributors, are too numerous to list here; nevertheless, we would like to acknowledge the invaluable contribution of all participants who, through different means (direct interviews, the ISVV e-mail address, specific R&D projects, etc.), supported the improvement of the ISVV process and of this document.
1.0 Introduction

1.1 Background and Motivation

Independent Software Verification and Validation (ISVV) is an engineering practice intended to improve the quality and reduce the costs of a software product. It is also intended to reduce development risks by having an organisation independent of the software developer perform verification and validation of the specifications and code of a software product.

The global objective of this guide is to help establish an improved and coherent ISVV process across the European space industry by consolidating existing practice. Special emphasis is placed on process efficiency, with the ISVV tasks complementary to the nominal SW supplier's verification and validation tasks. It is hoped that the guide will also be found useful in other industries (e.g. automotive, telecom, railway) where software is a component of safety- and dependability-critical systems.

The guide defines the ISVV process with its management, verification, and validation activities. It provides advice on ISVV roles, responsibilities, planning, and communication, as well as on methods to use for the various verification and validation tasks.

1.2 Purpose

The purpose of this guide is to:
- Define a uniform, cost-effective and reproducible ISVV process across projects, and to guide its adaptation to each specific project;
- Assist the industry in getting predictable cost and quality out of the ISVV process;
- Clarify the benefits of applying ISVV;
- Improve ISVV project execution by highlighting the many different issues that need to be clarified and considered in the various phases of the project;
- Disseminate best practices with respect to recommended methods for the different verification and validation activities;
- Present a summary of the required capabilities of the Independent SVF, in preparation for the development and utilisation of a specific one for each project.
The assumed readership of the ISVV Guide is primarily the customers and suppliers of ISVV services, but software developers, system suppliers (primes) and system customers are also likely to find the guide useful, be they verification/validation personnel, quality assurance managers or technical managers. The guide should be used in the preparation of a request for quotation for an ISVV service, in the preparation of a bid, and during planning, execution and re-planning of an ISVV project.

1.3 Outline

The document consists of the following sections:
- The introduction, of which this outline is a part, describes the background and motivation for ISVV as well as the purpose of the ISVV guide.
- Section 2.0 elaborates on the topic of ISVV, describing types of independence, the objectives of ISVV, and its relationship to development verification and validation. It also provides an overall view of the ISVV process.
- Section 3.0 describes the ISVV process management activity, detailing ISVV roles, responsibilities, tasks and other aspects of management.
- Section 4.0 describes the ISVV level definition activity and how it can be used to identify the scope of ISVV.
- Sections 5.0, 6.0 and 7.0 describe the verification activities of Technical Specification Analysis, Design Analysis, and Code Analysis, respectively.
- Section 8.0 describes the Independent Validation activity.
- Finally, a number of annexes provide more detailed information related to the various ISVV activities.
2.0 What is Independent Software Verification and Validation?

2.1 Objectives of ISVV

As with any verification and validation activity, the objective of ISVV is to find faults and to raise confidence in the software subject to the ISVV process. The emphasis on either of these objectives may vary, depending on the maturity of the software, budget, time, the maturity of the software supplier, and the distribution of responsibility between the software developer's V&V and the ISVV supplier's V&V.

Raising confidence is particularly important for critical software, whose failure may lead to hazardous events, loss of life, exceptional costs, damage to health, environmental damage, grave economic losses, or loss of reputation. ISVV is therefore usually targeted at finding faults in critical and/or safety- or dependability-related components (including resilience and recovery capabilities). This is also the main emphasis of this guide. In other cases, however, ISVV may target other quality attributes, including security, reusability, and usability.

ISVV should provide added value over the verification and validation carried out by the software developer. The approach of the ISVV supplier thus has to be complementary, achieved by having different:
- organisational missions and values,
- objectives,
- processes,
- methods,
- tools,
- people.

The ISVV supplier focuses on finding possible weaknesses and faults, trying to break the software, with a destructive attitude. ISVV shall aim to use methods and tools different from those of the development organisation. In some cases, one method is an alternative to another; in others, methods are complementary and not substitutable. The ISVV activities shall emphasise the manual verification and validation activities of the SW supplier (although, e.g., the ISVV supplier could also perform automatic code verification using different tools than the ones used by the SW supplier).
These manual activities are the ones that can best be complemented by ISVV. When performing ISVV, other supporting processes should be performed in parallel to the different ISVV tasks: e.g. the configuration management process, the documentation process, the SPA process, the project management process, the risk management process, and the problem resolution process.
2.2 ISVV Process Overview

The ISVV Process consists of six activities: two management activities, three verification activities, and one validation activity. The activities are shown in the figure below.

Figure 1 ISVV Process Activities: MAN (Management) comprises MAN.PM (ISVV Process Management) and MAN.VV (ISVV level definition); IVE (Independent Verification) comprises IVE.TA (Technical Specification Analysis), IVE.DA (Design Analysis) and IVE.CA (Code Analysis); IVA (Independent Validation) comprises IVA (Validation).

ISVV Process Management (MAN.PM) is concerned with issues such as roles, responsibilities, planning, budgeting, communication, competence, confidentiality, etc. It involves responsibilities of both the ISVV customer and the ISVV supplier.

ISVV level definition (MAN.VV) is an activity supporting both ISVV Process Management and the verification and validation tasks. It provides important input for ISVV planning and for how the available budget can best be utilised. The objective of the ISVV level definition task is to limit the scope of, and guide, subsequent verification and validation activities, as well as the methods proposed to perform them.

Technical Specification Analysis (IVE.TA) is verification of the Technical Specification, i.e. the software requirements. The activity ends with a Technical Specification Analysis Review (TAR).

Design Analysis (IVE.DA) is verification of the Software Architectural Design and the Software Detailed Design. The activity ends with a Design Analysis Review (DAR).

Code Analysis (IVE.CA) is verification of the software source code. The activity ends with a Code Analysis Review (CAR).

Validation (IVA) is testing of the software to demonstrate that the implementation meets the Technical Specification in a consistent, complete, efficient and robust way. The activity ends with an Independent Validation Review (IVR).
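The activity identifiers and their closing reviews, as listed above, can be captured in a small lookup structure. The following is an illustrative sketch only (the dictionary and helper function are not part of the guide's normative content; the names follow Section 2.2):

```python
# Illustrative sketch: the six ISVV activities from Section 2.2 and the
# review that closes each one (None for the management activities, which
# the guide does not close with a dedicated review).
ISVV_ACTIVITIES = {
    "MAN.PM": ("ISVV Process Management", None),
    "MAN.VV": ("ISVV level definition", None),
    "IVE.TA": ("Technical Specification Analysis", "TAR"),
    "IVE.DA": ("Design Analysis", "DAR"),
    "IVE.CA": ("Code Analysis", "CAR"),
    "IVA":    ("Independent Validation", "IVR"),
}

def closing_review(activity_id: str):
    """Return the review milestone that ends the given activity, if any."""
    _name, review = ISVV_ACTIVITIES[activity_id]
    return review

print(closing_review("IVE.TA"))  # TAR
```

Such a mapping can serve, for example, as the basis of a simple planning checklist that pairs each activity with the review at which its findings are presented.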
Figure 2 and Figure 3 relate the ISVV activities to the software development processes and the review milestones defined by [ECSS-E-40B:2003]. Two different life cycle process dependencies are shown: a) when the ISVV activities are performed in parallel to the review milestones, and b) when they are performed after the review milestones. The four ISVV reviews are indicated in the figures to highlight the connection with the other milestones. The figures indicate possible early and likely start times as well as end times of the activities. More specific guidance is provided with the individual activity. Scheduling the ISVV project has a strong dependence on the progress of the software development projects. Delays in software development activities will cause corresponding delays in ISVV activities¹.

¹ A scheduled date for the start of activities may be provided, with the understanding that this date may have to change if deliverables from the software development projects are late.
ISVV suppliers shall follow the same life cycle as the SW supplier: e.g. a supplier's iterative or incremental life cycle implies iterative or incremental ISVV tasks, and a supplier's joining of architectural design with detailed design affects the ISVV activities in a similar way. Repetition of ISVV tasks, as well as verification of new or largely modified parts of products, should be carefully planned beforehand in cooperation with the ISVV customer to ensure the best coverage as well as the highest efficiency. Not all corresponding ISVV tasks have to be performed when following the same life cycle, however; for example, a single ISVV task may be performed after a specific milestone.

This guide does not identify specific initiating events for ISVV activities. A general recommendation is that input documents and code from the software suppliers should be sufficiently mature. This is normally achieved at the delivery of the review data package or after the implementation of the project reviews: PDR, DDR, and CDR. The Independent Validation activity may already start with the identification of test cases as early as PDR, when stable documentation becomes available. Carrying out the independent software validation testing effectively requires completion of the software development validation testing against the TS; this usually finishes at CDR.

Note: MAN.VV activities are drawn as VV in the figures.

ISVV projects should aim at providing all ISVV findings before SW-QR, but this is to be defined per project. In some cases, documents may be mature earlier, or it may be desirable to have ISVV provide input to the development review along with other review comments. In this case, ISVV activities have to start earlier.
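Because ISVV start times are anchored to development milestones (PDR, DDR, CDR, QR) and development delays propagate to ISVV, milestone-driven scheduling can be sketched as below. This is a minimal illustration only; all dates are hypothetical and would come from the actual software development schedule:

```python
from datetime import date, timedelta

# Hypothetical milestone dates, for illustration only.
milestones = {
    "PDR": date(2024, 3, 1),
    "DDR": date(2024, 6, 1),
    "CDR": date(2024, 9, 1),
    "QR":  date(2025, 1, 15),
}

def shift_milestones(milestones: dict, delay_days: int) -> dict:
    """Propagate a development delay to all downstream milestones,
    reflecting the guide's note that delays in software development
    cause corresponding delays in ISVV activities."""
    return {name: d + timedelta(days=delay_days)
            for name, d in milestones.items()}

# A 30-day slip in development shifts every ISVV anchor point accordingly.
delayed = shift_milestones(milestones, 30)
print(delayed["CDR"])  # 2024-10-01
```

In practice a per-milestone delay (rather than a uniform shift) would be modelled, but the principle is the same: ISVV start dates are derived from, not independent of, the development schedule.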
Figure 2 ISVV tasks in parallel to SW supplier review milestones (ISVV activities — VV, TS Analysis, AD Analysis, DD Analysis, SUM analysis, Code Analysis, Independent Validation — shown over time against the software development phases and the milestones SRR, PDR, DDR, CDR, QR, AR and the ISVV reviews TAR, DAR, CAR, IVR)
Figure 3 ISVV tasks after SW supplier's review milestones (ISVV activities — VV, TS Analysis, Design Analysis, Code Analysis, Independent Validation — shown over time against the software development phases and the milestones SRR, PDR, DDR, CDR, QR, AR and the ISVV reviews TAR, DAR, CAR, IVR)

Each of the ISVV activities is described in detail in the following sections. The ISVV Process Management activity is given special treatment, but otherwise the main structure of an activity description is:
- Activity overview
- Activity inputs and prerequisites
- Activity outputs
- Activity management
  - Initiating and terminating events
  - Completion criteria
  - Relations to other activities
- Task descriptions
- Methods

Every activity is broken down into tasks and sometimes sub-tasks. Each task (with the exception of project management tasks) is described in a table format with the following fields:
- Title: name of the task
- Task ID: task identifier
- Activity: identifier and name of the activity
- Start Event: start constraint for the task (may be tailored depending on the characteristics/objectives of specific ISVV projects)
- End Event: end constraint for the task (may be tailored depending on the characteristics/objectives of specific ISVV projects)
- Responsible: identification of the organisation responsible for the task execution, i.e. the ISVV supplier or the ISVV customer
- Objectives: main objectives to be accomplished by the task
- Inputs: inputs to the task
- Sub Tasks (per ISVV Level): task breakdown into subtasks, organised per ISVV level
- Outputs: outputs of the task

A specific ISVV project may include all, one, or some of the previously referred-to verification and validation activities and their associated tasks and subtasks. There are dependencies between the activities; the output of earlier activities is often used as input for later activities². Users of this guide are encouraged to propose advanced techniques beyond the ones in the guide, provided they can justify their performance.

2.3 Definition of Scope and Budgeting

The budget for ISVV should reflect the criticality of the software to be scrutinised: the higher the criticality of the system (and the software), the greater the effort for verification and validation. This is the objective of the so-called ISVV level definition (see definition in Annex D), which identifies the ISVV Level of software items at various stages (software requirements, component, unit), both reducing the number of items subject to ISVV and determining which verification and validation tasks to carry out for each individual item.

The ISVV costs are mainly composed of man-hour costs, travelling costs and tool costs. Man-hour costs are usually dominant. Cost drivers are:
- Volume of specifications and code to be analysed. The ISVV level definition is used to scope the volume so that it fits the available budget.
- The rigour with which an ISVV task is to be carried out. This depends on the ISVV Level.
- Methods to be used. Some methods are more labour-intensive than others, but on the other hand they may also increase the value of the results.
- Repetitions in the development cycle. SW development is usually carried out in an iterative (e.g. incremental) way. Typically this means that ISVV activities have to be repeated on different versions of specifications and source code. This may have a big impact on costs, and for fixed-price contracts it is thus crucial that the number of repetitions is defined before the project starts.
- Complexity of the project. For example, a project with many SW suppliers will require more management hours.
- Travelling costs depend on whether ISVV supplier staff is required to be present at milestone reviews or not. This needs to be clarified in the Statement of Work issued by the ISVV customer.
- Tools support specific methods for specific verification and validation tasks. The use of tools may greatly increase the efficiency of carrying out ISVV tasks, thereby reducing man-hours.

² If one or more of the activities are defined to be outside the scope of the ISVV project, some of the tasks of the activity may nevertheless have to be performed for the prerequisites (in terms of required input) of other activities to be fulfilled. The dependencies are described as part of the individual activity descriptions.
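The cost drivers above can be combined into a rough estimation sketch. The formula and all figures below are hypothetical illustrations (the guide names the drivers but prescribes no rates or model):

```python
# Hedged illustration of the cost drivers of Section 2.3: man-hour costs
# (usually dominant) multiplied by the number of repetitions across
# development increments, plus travelling and tool costs. Rates and
# hours are invented for this example.
def isvv_cost(man_hours: float, hourly_rate: float,
              repetitions: int, travel: float = 0.0,
              tools: float = 0.0) -> float:
    """Rough ISVV cost estimate: repetitions on new versions of the
    specifications/code multiply the analysis effort, which is why the
    number of repetitions should be fixed before a fixed-price contract
    starts."""
    return man_hours * hourly_rate * repetitions + travel + tools

# e.g. 400 h of analysis repeated over 2 increments, plus travel and tools
print(isvv_cost(400, 100.0, 2, travel=5000.0, tools=8000.0))  # 93000.0
```

The point of the sketch is the sensitivity to `repetitions`: doubling the number of increments roughly doubles the dominant man-hour term, which matches the guide's warning about fixed-price contracts.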
2.4 Non-Disclosure and Security

Spacecraft software is high-value intellectual property. It is therefore important that access to documents and code (both source and executable code) is strictly controlled when they are handed over to the ISVV supplier³. The ISVV supplier and other stakeholders involved in the ISVV process must fulfil requirements both with respect to non-disclosure and with respect to secure handling of information. The ISVV customer must, in cooperation with the software suppliers (or the intellectual property owner) and other stakeholders (system supplier/customer), determine the confidentiality requirements, including:
- whether there should be different confidentiality classes for documents, and what those classes are;
- requirements for distribution and storage of confidential documents;
- requirements for personnel authorised to handle confidential documents.

A Non-Disclosure Agreement shall be established among the ISVV supplier, the ISVV customer and the SW supplier in order to preserve the confidentiality of the data. It is not recommended that the ISVV supplier perform ISVV activities at the SW supplier's site, for several reasons: independence might be jeopardised, travelling costs increase, tools and environments are difficult to share, the timeframe for analyses may be limited, etc.

The ISVV customer must also identify the documents required for the ISVV process and their confidentiality class. The ISVV supplier should have an information security management system in place to ensure that distribution, storage, and handling of data fulfil the confidentiality requirements (e.g. based on [BS 7799-2:2002]).

2.5 Roles and Responsibilities

Independent Software Verification and Validation is a service provided by an ISVV supplier to an ISVV customer. Their roles and responsibilities are described in the subsequent sections.

2.5.1 ISVV supplier

Independent software verification and validation requires special competence.
Requirements on the competence of the individual should cover formal education, experience, and personal traits. There is as yet no consensus on what the requirements for ISVV personnel should be, but an example is provided below:

Formal education: ISVV personnel should have a university degree in software engineering, computer science or similar. It is also beneficial to have some formal background in hardware electronics and systems engineering, as well as in the domain itself, e.g. aerospace. Personnel should also have received proper training in quality assurance, quality control and testing.

³ The software developer will most likely require the ISVV supplier not to be involved in any kind of competing software development.
Experience: ISVV personnel should have at least 5 years of working experience with software development, of which at least 2 should be related to verification, validation or quality assurance. ISVV personnel should also have at least 2 years of experience within the space domain. If models are produced, the ISVV team is required to have a good knowledge of the modelling tool suite, if not proficiency, to fully utilise the verification and simulation capabilities of the tool suite and to correctly interpret the results.

Personal traits: ISVV personnel should be creative destructors, rigorous, process-mindful, objective, and should have a critical attitude and be result-oriented. ISVV personnel should also:
- have strong communication skills;
- be pragmatic, direct, and able to simplify problems;
- have a critical mindset, being able to explore, test and try;
- have generic knowledge of the domain environment and of current and upcoming technologies.

Table 1: Competence requirements for ISVV personnel

The ISVV personnel must thus exhibit creativity in breaking the system, being destructive while respecting and considering the full scope of the mission requirement set. ISVV suppliers should also adopt the attitude of being the USER of the SW product, e.g. with the operator's view, focusing for example either on successfully controlling the software when recovering from errors or on how the system enters safe mode.

An ISVV project is usually carried out by an ISVV team, not a single individual. The ISVV team should comprise a mix of complementary competencies. The team as such should be familiar with all methods and tools to be employed for the analyses. In addition, the ISVV team manager should be experienced with project management, including the management of ISVV projects.
The project manager must also be able to handle the contractual and human relations aspects of the project, and should have sufficient personal authority to defend the findings of the ISVV team. In addition to what is mentioned in Table 1 above, for some ISVV tasks (e.g. safety and dependability verification, or the independent validation tasks) someone at the ISVV supplier should have general space system expertise.

The ISVV supplier should have a suitable quality management system (fulfilling the requirements of [ISO 9000:2000]) as well as an information management system. The ISVV process has many uncertainties (availability of inputs from the software supplier, etc.) and risky elements (maturity of the elements under ISVV, maturity of the SVF, etc.) that should be properly managed and controlled by the ISVV supplier through a formalised risk management process.

2.5.2 ISVV Customer

The ISVV customer has the following responsibilities:
- Define the ISVV objectives
- Perform the initial ISVV level definition
- Define the ISVV scope and budget
- Approve the ISVV supplier's ISVV Plan and control its implementation
- Ensure NDAs are signed between ISVV suppliers and SW suppliers
- Filter ISVV findings
- Ensure close-out and implementation of ISVV findings
In European space projects, the ISVV customer has traditionally been either the prime or the end customer. The prime should not be the ISVV customer if the prime itself (or any of its subsidiaries) is also developing software subject to ISVV.

2.5.3 Other roles

In addition, the ISVV process may have interfaces to other roles:
- Software supplier (software developer)
- Software validation facility supplier
- System supplier (system integrator, prime, software customer)
- System customer (system owner)

One of the latter two roles is also likely to be the ISVV customer.

2.5.3.1 Responsibilities of Software suppliers

The involvement of the software supplier in ISVV includes:
- Assisting the ISVV customer in responding to requests for clarification from the ISVV supplier;
- Assisting the ISVV customer in assessing the findings of the ISVV supplier, their criticality and their resolution;
- Investigating and following up software problem reports resulting from ISVV findings.

All communication between the ISVV supplier and any of the software suppliers (when allowed) shall be copied to the ISVV customer.

2.5.3.2 Interface with Software Validation Facility supplier

The Software Validation Facility supplier is the party providing the Software Validation Facility for the ISVV supplier's independent validation activity. The involvement of the SVF supplier could be minimal, i.e. just providing the SVF for a given period, or it could involve tasks such as the specification and execution of test procedures and the reporting of test results. It is the ISVV customer's responsibility to ensure the ISVV supplier gets (or gets access to) the SVF. The recommendation of this ISVV guide is that the SVF is provided to the ISVV supplier. This secures the ISVV project's access to the SVF, also in critical phases of the project where resource contention would otherwise easily occur.
2.6 Types of Independence
In the context of ISVV, independence is intended to introduce three important benefits:
- Separation of concerns. Any person or organisation is likely to discover that their activity inevitably produces conflicting demands and interests. Clearly separating roles and responsibilities ensures that such conflicts do not arise, and also gives other stakeholders confidence of this.
- A different view. All persons have a limited horizon of understanding within which texts (both written and oral) are interpreted and produced. A second opinion complements the view of the other by identifying omissions, ambiguities, factual errors, logical errors, etc.
- Effectiveness and productivity in verification and validation activities. Staff specialised in independent software verification and validation develop technical competence and motivation that should lead to more effective and productive work. This is especially the case with verification and validation methods that necessitate the application of sophisticated tools.
ISVV implies activities that are independent of, additional to and complementary to the SW supplier's own verification and validation. Many standards (e.g. [IEC 61508-1:1998]) distinguish between:
- an independent person, who may belong to the same department as the writer/developer, but should not have been involved in writing the specification or the code;
- an independent department, which requires verification to be carried out by people from a different department within the same organisation; and
- an independent organisation, which must be a different legal entity with a different management group and preferably different owners.
The higher the criticality of the system (and the software, see the critical item definition in Annex A.1), the more independence is required. The IEEE Standard for Software Verification and Validation [IEEE 1012:1998] distinguishes between different types of independence addressing these concerns:
- technical independence ("fresh viewpoint"): review by an independent person is an important method to detect subtle errors overlooked by those too close to the solution. For software tools, technical independence means that the IV&V effort uses or develops its own set of test and analysis tools, separate from the developer's tools;
- managerial independence: the IV&V effort is allowed to submit its results, anomalies, and findings to programme management without any restrictions (e.g., without requiring prior approval from the development group) or adverse pressures, direct or indirect, from the development group; and
- financial independence: preventing situations where the IV&V effort cannot complete its analysis or test, or deliver timely results, because funds have been diverted or adverse financial pressures or influences have been exerted.
In the European space industry, full technical, managerial and financial independence is required for ISVV of critical software. The ISVV supplier is required to be an organisation independent of the software supplier as well as of the prime (system integrator).
3.0 ISVV Process Management
3.1 Activity Overview
The objective of ISVV Process Management is to define the overall ISVV plan and to control and monitor the ISVV process. As can be seen from Figure 4, it is a management activity of the ISVV process.
Figure 4 ISVV process management in context
ISVV Process Management (PM) consists of two tasks, as shown in Figure 5:
- ISVV Process planning
- ISVV Process monitoring and control
The figure also shows the inputs and outputs of each task.
Figure 5 ISVV Process Management Tasks (note that the figure shows only the most important inputs and outputs)
3.2 Activity Inputs and Prerequisites
The input work products are shown in Figure 5 above, defining the ISVV Process Management tasks. There are no particular prerequisites for starting the ISVV Management activity.
3.2.1 ISVV level definition
The ISVV level definition activity (see Annex E for the definition of the ISVV levels and software criticality categories) produces critical items lists defining the scope for subsequent verification and validation activities. The activities are carried out either by the ISVV customer or by the ISVV supplier. In addition to the software criticality categorisation, it also identifies other factors that influence the ISVV level to be assigned to the software product subject to ISVV. The scope resulting from the ISVV level definition must be reflected in the ISVV plan. The initial ISVV plan is based (among other inputs) on the results of the initial ISVV level definition task and is updated as later ISVV level definition refines the scope.
3.2.2 Documents and Code from Software Development
The main input to the ISVV project is the software documents and code to be verified and validated. Additional documentation may also be required, e.g. system requirements, system architecture, safety analyses, etc. In addition, process documents such as development standards and quality assurance procedures may also be required. Documents and code to be verified or validated should be reasonably mature and stable before being subject to ISVV activities. This normally means that they have been submitted for, or been through, development reviews. Earlier versions of the documents and code may be provided to the ISVV supplier for familiarisation, especially if deadlines for the actual ISVV are short. All documentation available from the software development should be delivered to the ISVV supplier when published, in order to support the supplier's knowledge acquisition, even when no ISVV activity is due to be performed (e.g. RB release).
Other inputs from the SW supplier need to be delivered to the ISVV Customer for proper management of the ISVV activities. These inputs are: SW development plans and schedules, NDAs, managerial constraints (such as participation in or synchronisation with reviews), ISVV goals, critical items lists, etc. Part of this information shall then be filtered and transferred to the ISVV supplier by the ISVV Customer. To ensure the efficiency of the ISVV activity it is important that inputs are delivered and received in their original, electronically searchable format.
3.3 Activity Outputs
The output work products are shown in Figure 5 above, defining the ISVV Process Management tasks.
3.4 Activity Management
3.4.1 Initiating and Terminating Events
The ISVV Management activity starts for the ISVV customer when it becomes clear that a software product for which the customer is responsible (either as developer or integrator) will require ISVV. This may be at an early stage in the ISVV customer's process of bidding for the development of the software or the system containing the software. The ISVV customer starts the activity when preparing a tender package for ISVV services.
ISVV Management ends with the close of the ISVV contract, i.e. with the acceptance by the ISVV customer of all deliverables required by the contract and described in the ISVV Plan.
3.4.2 Completion Criteria
The ISVV Management activity completes when all ISVV activities, tasks and subtasks defined by the ISVV Plan have been carried out and all deliverables have been accepted by the ISVV customer.
3.4.3 Relations to other Activities
The ISVV Management activity shall manage all of the other activities of the ISVV project. The ISVV level definition activities provide important input to the ISVV Management activity for budgeting and planning.
3.5 Task Descriptions
Note that responsibility for the ISVV Management tasks is shared between the ISVV Customer and the ISVV Supplier. Responsibility is defined at the subtask level.
3.5.1 ISVV Process Planning
TASK DESCRIPTION
Title: ISVV Process Planning
Task ID: MAN.PM.T1
Activity: ISVV Management
Start event: Start of project
End event: End of project
Responsible: ISVV Customer and ISVV Supplier
Objectives: Plan the ISVV process
Inputs:
- From ISVV Customer:
  - Criticality Analyses (from System Engineering)
  - ISVV level definition
  - Software Development Plan
  - Software Product Assurance Plan
  - Software Verification and Validation Plan
- From ISVV Supplier:
  - ISVV level definition
Sub Tasks (per ISVV Level):
- MAN.PM.T1.S1: Define ISVV objectives (ISVV Customer)
  The main objectives of the ISVV (what quality attribute(s) are critical?) must be defined by the ISVV Customer. See also section 2.1.
- MAN.PM.T1.S2: Perform System Level ISVV level definition (ISVV Customer)
  The ISVV Customer should perform an analysis to identify the need for ISVV, its scope and the initial critical items list (see section 4.5.1).
- MAN.PM.T1.S3: Define the ISVV scope and determine the ISVV budget (ISVV Customer)
  The ISVV Customer should determine the overall ISVV budget frame based on the mission costs, the size of the SW product (documents and code) and the ISVV scope and level (see section 4.1).
- MAN.PM.T1.S4: Perform Technical Specification ISVV level definition (ISVV Customer or ISVV Supplier)
  Perform the Technical Specification analysis to identify the ISVV scope, level, and critical software requirements list. This may be carried out by the ISVV Customer or the ISVV Supplier. See also section 4.5.2.
- MAN.PM.T1.S5: Estimate ISVV scope and budget (ISVV Supplier)
  The ISVV Supplier should make an independent estimate of the ISVV budget. See section 4.1.
- MAN.PM.T1.S6: Develop ISVV plan (ISVV Supplier)
  The ISVV Supplier must define an ISVV plan (a draft could be part of the proposal). The plan should be approved by the ISVV Customer. The developer's software development plan, software product assurance plan, and software verification and validation plan should be taken into account if available (overall coordination planning data is to be provided by the ISVV Customer). An outline of a sample ISVV plan is found in Annex B.1.
- MAN.PM.T1.S7: Approve ISVV Plan (ISVV Customer)
  The ISVV Customer should approve the ISVV plan developed by the ISVV Supplier. An outline of a sample ISVV plan is found in Annex B.1.
- MAN.PM.T1.S8: Determine confidentiality issues and prepare NDAs (ISVV Customer)
  It is the responsibility of the ISVV Customer to clarify confidentiality requirements and ensure these are kept throughout the project through the signing of Non-Disclosure Agreements with the ISVV Supplier and any of its sub-contractors (see section 2.4).
- MAN.PM.T1.S9: Approve scope definition resulting from ISVV level definition (ISVV Customer)
  All ISVV level definition results must be approved by the ISVV Customer. See also section 3.2.1.
Outputs:
- ISVV plan (ISVV Supplier)
3.5.2 ISVV Process Execution, Monitoring and Control
TASK DESCRIPTION
Title: ISVV Process monitoring and control
Task ID: MAN.PM.T2
Activity: ISVV Management
Start event: Project start
End event: Project end
Responsible: ISVV Customer and ISVV Supplier
Objectives: Execute, monitor, and control the ISVV process
Inputs:
- From ISVV Customer:
  - ISVV level definition (from System Engineering)
  - Software Development Plan
  - Software Product Assurance Plan
  - Documents and Code from Software Development
  - ISVV Findings Resolution Report
- From ISVV Supplier:
  - ISVV level definition [MAN.VV]
  - ISVV Plan [MAN.PM.T1]
Sub Tasks (per ISVV Level):
- MAN.PM.T2.S1: Manage ISVV project (ISVV Supplier)
  The ISVV Supplier must manage the project in accordance with the ISVV plan. This includes schedule management (including the dependency on the SW supplier's schedule changes), budget management, resource management, activity management, risk management, quality management, document management, and security management.
- MAN.PM.T2.S2: Submit documentation and code to ISVV Supplier (ISVV Customer)
  It is the responsibility of the ISVV Customer to provide to the ISVV Supplier all documentation and code necessary for ISVV planning and for the verification and validation activities. See also section 3.2.2.
- MAN.PM.T2.S3: Check received documentation (ISVV Supplier)
  Any documentation and code received from the ISVV Customer or other parties of the ISVV should be registered and checked by the ISVV Supplier.
- MAN.PM.T2.S4: Familiarisation with software and system product under ISVV (ISVV Supplier)
  The ISVV Supplier shall become familiar with the system in which the SW is to operate, the supplier's development environment, the details of the SW product subject to ISVV, the software validation facility and tools in which it will be validated, etc.
- MAN.PM.T2.S5: Submit the verification and validation testing environment (ISVV Customer)
  Both the development environment and the validation testing environment from the SW supplier shall be available to the ISVV Supplier, either delivered by the ISVV Customer or acquired by the ISVV Supplier.
- MAN.PM.T2.S6: Perform verification and validation activities (ISVV Supplier)
  The ISVV Supplier must carry out the verification and validation activities as described in the ISVV plan.
- MAN.PM.T2.S7: Request clarifications (ISVV Supplier)
  The ISVV Supplier may request clarification from the ISVV Customer. See Annex B.2.
- MAN.PM.T2.S8: Respond to Requests for Clarification (ISVV Customer)
  Whenever the ISVV Supplier issues a Request for Clarification, the ISVV Customer should provide feedback in a timely manner (see Annex B.2).
- MAN.PM.T2.S9: Report early ISVV findings (ISVV Supplier)
  The ISVV Supplier may provide early feedback on findings to the ISVV Customer.
- MAN.PM.T2.S10: Review early ISVV Findings (ISVV Customer)
  The ISVV Customer shall review received early ISVV findings for criticality and impact on the software/system, and shall take action as appropriate.
- MAN.PM.T2.S11: Produce ISVV report (ISVV Supplier)
  For each ISVV activity (as defined by the ISVV plan), the ISVV Supplier must produce an ISVV report in which all findings shall be reported and the main findings shall be highlighted. See Annex B.3. The ISVV Supplier shall perform an internal review of the findings before submission.
- MAN.PM.T2.S12: Filter ISVV findings (ISVV Customer)
  The ISVV Customer shall review and filter the ISVV findings in order to optimise their disposition and eventual implementation.
- MAN.PM.T2.S13: Draft disposition of ISVV findings (SW supplier)
  The SW supplier shall reply to the ISVV findings; the replies are later discussed during a review meeting (see below).
- MAN.PM.T2.S14: Conduct Review Meeting (ISVV Customer)
  The ISVV findings and their resolution are discussed during a review meeting (or offline discussion) with participation of all related parties. The meeting is the responsibility of the ISVV Customer.
- MAN.PM.T2.S15: Produce ISVV findings resolution report (ISVV Customer)
  In response to each ISVV report, the ISVV Customer should produce an ISVV findings resolution report, describing how each finding is resolved. The reports should be distributed to the ISVV Supplier and to the end customer (see section 3.3). In case of discrepancy between the ISVV supplier and the software supplier, it is up to the ISVV customer to determine the verdict.
- MAN.PM.T2.S16: Implement resolutions (ISVV Customer)
  The ISVV Customer is responsible for ensuring that the resolutions described in the ISVV findings resolution report are implemented. The ISVV Supplier is not responsible for following up the findings.
- MAN.PM.T2.S17: Update ISVV level definition (ISVV Supplier)
  The ISVV level definition may be updated throughout the project to further limit the scope of subsequent verification and validation activities. This is the responsibility of the ISVV Supplier, although the ISVV Customer may also be involved. See sections 4.5.3 and 4.5.4.
Outputs:
- Requests for Clarification (ISVV Supplier) (optional)
- ISVV Plan (ISVV Supplier update)
- ISVV Report (with ISVV Findings) (ISVV Supplier)
- ISVV findings resolution report (ISVV Customer)
- Progress Reports (ISVV Supplier)
3.6 Methods
Methods used for ISVV Process Management are not different from project management methods in general and will not be further described in this document.
4.0 ISVV level definition
4.1 Activity Overview
The objective of the ISVV level definition task (5) is to limit the scope of, and guide, subsequent verification and validation activities, as well as the methods proposed to perform them. As can be seen from Figure 6, it is a management activity of the ISVV process.
Figure 6 ISVV level definition in context
ISVV level definition (MAN.VV) consists of four tasks, as shown in Figure 7 below:
- System level ISVV level definition
- Software technical specification ISVV level definition
- Software design ISVV level definition
- Software code ISVV level definition
The figure also shows the inputs and outputs of each task.
5 This activity was formerly called Criticality analysis, but has been renamed for clarity to avoid confusion with the Safety and Dependability analysis activities.
Figure 7 ISVV level definition tasks (note that the figure shows only the most important inputs and outputs)
The ISVV Level is a number on an ordinal scale assigned to a system function, software requirement, component or unit to designate the required level of verification and validation. The following ISVV Levels are defined:

Level    Description
ISVVL 0  No ISVV activities are required.
ISVVL 1  Basic ISVV is required.
ISVVL 2  Full ISVV is required.
Table 2: ISVV levels

If ISVV is to be limited to only a subset of the verification and validation activities, some of the ISVV level definition tasks may not be included. For example, for the verification activities, if Code Analysis is not to be carried out, there is no need to carry out Software Code ISVV level definition. If both Design Analysis and Code Analysis are to be excluded, both corresponding ISVV level definition activities may be excluded as well. The System Level ISVV level definition and the Software Technical Specification ISVV level definition will always have to be carried out, also in the case where ISVV consists only of Independent Validation. Where earlier verification activities have been left out (e.g. there is no Technical Specification Analysis or Design Analysis) but verification activities are still included in ISVV, the ISVV level definition of the omitted verification activities may still have to be carried out to ensure that the prerequisites for the remaining ISVV activity or activities are fulfilled. The ISVV level definition requires both the Software Criticality Analysis and the analysis of other error-prone factors. It is important for the ISVV supplier to ensure that the scope of the ISVV activities is proportionate to the planned time and budget. The Software Criticality Analysis is carried out using (Software) Failure Modes, Effects and Criticality Analysis ((S)FMECA), supported by traceability analysis, control flow/call graph analysis, and complexity measurements.
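The assignment of an ISVV Level from a software criticality category and the error potential can be illustrated in code. The sketch below is purely illustrative: the category names (ECSS-style categories A to D) and the decision rules are assumptions made for the example; the actual criteria are defined by the project's software criticality scheme and the guidance in Annex D.

```python
# Illustrative sketch of ISVV level assignment. The mapping below is a
# hypothetical example, not the guide's actual decision table (see Annex D).

def assign_isvv_level(criticality: str, high_error_potential: bool) -> int:
    """Return an ISVV Level (ISVVL 0-2, see Table 2) for one item.

    criticality: assumed ECSS-style category 'A' (highest) to 'D' (lowest).
    high_error_potential: result of the error potential questionnaire.
    """
    if criticality == "A":
        return 2                      # full ISVV for the most critical items
    if criticality == "B":
        return 2 if high_error_potential else 1
    if criticality == "C":
        return 1 if high_error_potential else 0
    return 0                          # category D: no ISVV activities required

# Example: a category-B item with high error potential gets full ISVV.
print(assign_isvv_level("B", True))   # -> 2
```

The point of the sketch is only that the assignment is a deterministic function of two inputs per item, so it can be applied uniformly across the critical items lists.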
It is important to emphasise that the use of these methods would not be as rigorous for the purpose of defining the ISVV level as for the Safety and Dependability Analyses carried out as part of the development activities. The purpose here is to scope the verification and validation activities, not to find all potential problems (hazards, failures, etc.). Also, the performance of these specific analyses depends to some degree on which analyses are already available from the software developer or system integrator. The main outputs of the ISVV level definition activity are:
- Critical system functions list
- Software criticality scheme
- Critical software requirements list
- Critical software components list
- Critical software units list
- Error potential questionnaire
- ISVV level definition
Items of the lists will usually be grouped by software product. For a given software product, the list will include all items included in the product, with a software criticality category and an ISVV level assigned to each of them. Annex D discusses important concepts of ISVV level definition. Each project shall tailor the applicability of the different tasks and activities to the different software products, components, etc., following Annex D for the ISVV level definition. The intention is to apply full and rigorous ISVV to software parts assigned ISVV Level 2, with the number of activities and the rigour decreasing as the ISVV Level decreases. All independent verification and validation activities presented in chapters 5.0 to 8.0 are already pre-tailored for the different ISVV levels (ISVV level 1 and/or ISVV level 2). This pre-tailoring is presented only as guidance material. In particular, focus shall be put on identifying which technique and method is required for the different activities (often several methods are listed for each activity and not all of them need be used). The more rigorous the method, the more rigorous the ISVV findings, but also the greater the effort needed.
4.2 Activity Inputs and Prerequisites
The input work products are shown in Figure 7 above, with the ISVV level definition tasks. The ISVV level definition activity is split into four tasks with different starting points in time. A prerequisite for starting any of these activities is the availability of the required input at a satisfactory level of maturity. Please refer to the individual tasks for a more detailed view.
4.3 Activity Outputs
The output work products are shown in Figure 7 above, with the ISVV level definition tasks.
4.4 Activity Management
4.4.1 Initiating and Terminating Events
The four tasks of ISVV level definition will normally be carried out prior to the corresponding verification or validation activity. The initiating event of each verification/validation activity may thus also be seen as the initiating event of the ISVV level definition that defines the scope of that activity. The initial ISVV level definition (System Level ISVV level definition) will normally be carried out by the ISVV customer (and reviewed by the ISVV supplier during the tendering process), as it is an important input for the cost estimation of the ISVV project.
4.4.2 Completion Criteria
The outputs of each of the ISVV level definition tasks shall be reviewed in a joint review meeting between the ISVV supplier and the ISVV customer to determine whether the output provides a sufficient basis for the execution of subsequent verification and validation activities.
4.4.3 Relations to Other Activities
The primary relation of the ISVV level definition is to the Verification and Validation activities, which use the output of the ISVV level definition to limit the scope and guide the performance of the different analyses. Input to the ISVV level definition activity comes from System and Software Engineering activities as well as from Independent Verification activities previously carried out (ISVV findings). The initial ISVV level definition (System Level ISVV level definition) is also an important input to the cost estimation task of ISVV management. These ISVV level definition activities will normally not provide any feedback to the System or Software Engineering activities; they are only used to scope the ISVV activities.
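The design-level tasks described in the following sections assign a criticality category to each software component from the traceability matrices (each component inherits the highest category of any requirement tracing to it) and then propagate categories along use/call relationships (a component used by a more critical component is raised to that component's category). This can be sketched as follows; the data structures (dictionaries standing in for traceability matrices and the use/call relation) and the category names A to D are assumptions made for the illustration.

```python
# Illustrative sketch of traceability-based criticality assignment and
# dependency propagation. Category names 'A'..'D' and the dictionary-based
# representation of the traceability and use/call relations are assumptions.

RANK = {"A": 0, "B": 1, "C": 2, "D": 3}  # A = most critical

def assign_component_criticality(trace, uses, req_cat):
    """trace:   component -> set of requirement ids tracing to it
       uses:    component -> set of components it depends on (use/call)
       req_cat: requirement id -> criticality category ('A'..'D')"""
    # Step 1: each component gets the highest category of any requirement
    # tracing to it (inspection of traceability matrices).
    cat = {c: min((req_cat[r] for r in reqs), key=RANK.get, default="D")
           for c, reqs in trace.items()}
    # Step 2: a component used by a more critical component inherits that
    # component's category; iterate until stable to handle dependency chains.
    changed = True
    while changed:
        changed = False
        for user, used in uses.items():
            for u in used:
                if RANK[cat[user]] < RANK[cat.get(u, "D")]:
                    cat[u] = cat[user]
                    changed = True
    return cat

# Example: component Y implements no critical requirement itself, but is used
# by the category-A component X, so it inherits category A.
trace = {"X": {"R1"}, "Y": set()}
uses = {"X": {"Y"}}
print(assign_component_criticality(trace, uses, {"R1": "A"}))
# -> {'X': 'A', 'Y': 'A'}
```

The fixed-point iteration in step 2 mirrors the manual analysis by inspection or modelling of the dependency mechanisms of the design language: criticality is propagated until no used component has a lower category than any component depending on it.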
4.5 Task Descriptions
4.5.1 System Level ISVV level definition
TASK DESCRIPTION
Title: System Level ISVV level definition
Task ID: MAN.VV.T1
Activity: MAN.VV ISVV level definition
Start event: SRR System Requirements Review
End event: PDR Preliminary Design Review
Responsible: The System Level ISVV level definition shall be carried out by the ISVV customer. The result of the analysis will be reviewed by the ISVV supplier during the tendering process.
Objectives: Identify the ISVV level
Inputs:
- From ISVV Customer:
  - Software criticality scheme [from Software Engineering]
  - Critical function list with criticality categories assigned [from System Engineering]
  - Mission and system requirements specification [from System Engineering]
  - System architecture [from System Engineering]
  - Requirements Baseline [from System Engineering]
  - System safety/dependability analyses [from System Engineering]
Sub Tasks (Procedure):
- MAN.VV.T1.S1: Identify the software criticality scheme used for the mission.
- MAN.VV.T1.S2: Evaluate whether the defined software criticality scheme is relevant for the ISVV objective. If it is not, then define a new software criticality scheme for ISVV.
- MAN.VV.T1.S3: If there is a Critical Function List and the criticality scheme it is based on is relevant for the ISVV objective, then use this CFL.
- MAN.VV.T1.S4: If there is no Critical Function List, or the ISVV objective does not match the criteria used to derive it, perform a simplified system FMECA along the lines described in Annex E.3.
- MAN.VV.T1.S5: Identify each software product and its supplier. Fill in the error potential questionnaire (see Annex D) for each software product (by the Error potential assessment described in Annex E.2).
- MAN.VV.T1.S6: Assign an ISVV level to each system function based on the software criticality category of the function and the error potential.
Outputs:
- Critical system functions list
- Error potential questionnaires
- Software criticality scheme
- ISVV level definition
4.5.2 Software Technical Specification ISVV level definition
TASK DESCRIPTION
Title: Software Technical Specification ISVV level definition
Task ID: MAN.VV.T2
Activity: MAN.VV ISVV level definition
Start event: PDR Preliminary Design Review
End event: TAR Technical Specification Analysis Review
Responsible: This task will be performed by the ISVV customer or the ISVV supplier, as determined by the project contract. If carried out by the ISVV supplier, the results shall be reviewed and approved by the ISVV customer.
Objectives: Define ISVV level
Inputs:
- From ISVV Customer:
  - Technical Specification including Interface Control Documents [from Software Engineering]
  - Software safety/dependability analyses based on the Technical Specification (if existent) [from Software PA]
  - Critical system functions list [from System Level ISVV level definition MAN.VV.T1]
  - Error potential questionnaires [from System Level ISVV level definition MAN.VV.T1]
  - Software criticality scheme [from System Level ISVV level definition MAN.VV.T1]
Sub Tasks (Procedure):
- MAN.VV.T2.S1: For each software product implementing critical system functions, identify any available SFMECA based on the Technical Specification.
- MAN.VV.T2.S2: If an SFMECA exists and the criticality scheme used as its basis is relevant for the ISVV objective, it may be used as a basis for deriving the critical software requirements list.
- MAN.VV.T2.S3: If no such analyses have been carried out, if their quality is too poor, or if the ISVV objective differs from the presumptions of the SFMECA, perform a simplified SFMECA based on the Technical Specification including Interface Control Documents. Another simplified way of doing this step is described in Annex E.3 (by Simplified SFMECA).
- MAN.VV.T2.S4: Verify the consistency of the SFMECA with the Critical system functions list. If discrepancies are found, notify the ISVV customer, who will have to consider the consequences in terms of re-analysis.
- MAN.VV.T2.S5: For each software requirement, derive the software criticality category by identifying the highest criticality category of any failure mode associated with it.
- MAN.VV.T2.S6: Assign an ISVV level to each software requirement based on the software criticality category of the requirement and the error potential (there is no need to reassess the error potential unless different answers to the error potential questionnaire are expected at this level).
Outputs:
- Critical system functions list (update)
- Critical software requirements list
- Error potential questionnaire
- ISVV level definition
4.5.3 Software Design ISVV level definition
TASK DESCRIPTION
Title: Software Design ISVV level definition
Task ID: MAN.VV.T3
Activity: MAN.VV ISVV level definition
Start event: PDR Preliminary Design Review
End event: DAR Design Analysis Review
Responsible: This task will be performed by the ISVV supplier and the result reviewed and approved by the ISVV customer.
Objectives: Define ISVV level
Inputs:
- From ISVV Customer:
  - Technical Specification including Interface Control Documents [from Software Engineering]
  - Design Definition File: Software architectural design and traceability matrices [from Software Engineering]
  - Design Definition File: Software detailed design and traceability matrices (optional) [from Software Engineering]
  - Software safety/dependability analyses based on the software architectural design or software detailed design (if existent) [from Software PA]
- From ISVV Supplier:
  - Critical system functions list [from System Level ISVV level definition MAN.VV.T1]
  - Error potential questionnaires [from Software Technical Specification ISVV level definition MAN.VV.T2]
  - Software criticality scheme [from System Level ISVV level definition MAN.VV.T1]
  - Critical software requirements list [from Software Technical Specification ISVV level definition MAN.VV.T2]
  - Software safety/dependability analysis based on the Technical Specification (if existent) [from Technical Specification ISVV level definition MAN.VV.T2]
  - ISVV Findings [from Technical Specification Analysis IVE.TA]
Sub Tasks (Procedure):
- MAN.VV.T3.S1: Review the findings of, and the safety/dependability analysis performed as part of, the Technical Specification Analysis. Evaluate the consistency with the critical function list and the critical software requirements list produced by the preceding criticality analyses. If discrepancies are found, notify the ISVV customer, who will have to consider the consequences in terms of re-analysis.
- MAN.VV.T3.S2: If design-level safety and dependability analyses exist from the developer, investigate whether these may be used to assign software criticality categories to design components. The software criticality scheme should be relevant for ISVV, the analysis should be based on the same versions of the documents as ISVV (or else a delta analysis must be carried out), and the results of any higher-level analyses it is based on should not be in conflict with the results of the Technical Specification Analysis.
- MAN.VV.T3.S3: If not, trace the software requirements to software architectural design components. Assign to each software component the highest software criticality category of any requirement tracing to it (by inspection of traceability matrices).
- MAN.VV.T3.S4: Alternatively, extend the SFMECA carried out at software requirements level by identifying software components as causes of requirements failure modes. This creates an alternative trace from requirements to design components. Assign to each software component the highest software criticality category of any failure mode to which it may contribute (by Simplified SFMECA).
- MAN.VV.T3.S5: Identify any dependency mechanisms for the design language used (e.g. use or call relationships).
- MAN.VV.T3.S6: Analyse the dependency of critical components on other components and adjust the software criticality category of those components to be the same as that of the critical component depending on them. Some components may be used by several critical components; for these, assign the highest criticality category of any dependent component (by Inspection or Modelling 7).
- MAN.VV.T3.S7: Assign an ISVV level to each software component based on the software criticality category of the component and the error potential (there is no need to reassess the error potential unless different answers to the error potential questionnaire are expected at this level).
- MAN.VV.T3.S8: Software criticality categories and ISVV levels may also be assigned to detailed design software components. The benefit of going to this level of detail for the ISVV level definition should be balanced against the costs induced.

7 Note that the objective here is not necessarily to build a separate representation of the software design, but to use existing models to understand dependencies between software components.

Outputs:
- Critical system functions list (update)
- Critical software requirements list (update)
- Critical software component list
- Error potential questionnaire
- ISVV level definition

4.5.4 Software Code ISVV level definition

TASK DESCRIPTION
Title: Software Code ISVV level definition
Task ID: MAN.VV.T4
Activity: MAN.VV - ISVV level definition
Start event: CDR - Critical Design Review
End event: CAR - Code Analysis Review
Responsible: This task will be performed by the ISVV supplier and the result reviewed and approved by the ISVV customer.
Objectives:
- Define ISVV level
Inputs:
- From ISVV Customer:
  - Design Definition File: Software architectural design and traceability matrices [from Software Engineering]
  - Design Definition File: Software detailed design and traceability matrices (optional) [from Software Engineering]
  - Design Definition File: Software code [from Software Engineering]
  - Software safety/dependability analyses based on code (if existent) [from Software PA]
- From ISVV Supplier:
  - Critical system functions list [from System Level ISVV level definition MAN.VV.T1]
  - Error potential questionnaires [from Software Design ISVV level definition MAN.VV.T3]
  - Software criticality scheme [from System Level ISVV level definition MAN.VV.T1]
  - Critical software requirements list [from Software Technical Specification ISVV level definition MAN.VV.T2]
  - Critical software components list [from Software Design ISVV level definition MAN.VV.T3]
  - Software safety/dependability analysis based on the Design Definition (if existent) [from Software Design ISVV level definition MAN.VV.T3]
  - ISVV Findings [from Design Analysis IVE.DA]

Sub Tasks (Procedure):
- MAN.VV.T4.S1: Review the findings and the safety/dependability analysis produced as part of the Design Analysis. Evaluate their consistency with the critical system functions list, the critical software requirements list and the critical software component list produced by earlier criticality analyses. If discrepancies are found, notify the ISVV customer, who will have to consider the consequences in terms of re-analysis.
- MAN.VV.T4.S2: If code-level safety and dependability analyses exist from the developer, investigate whether these may be used to assign software criticality categories to software units. The software criticality scheme should be relevant for ISVV, the analysis should be based on the same versions of the code as ISVV (or else a delta analysis must be carried out), and the results of any higher-level analyses it is based on should not conflict with the results of the Design Analysis.
- MAN.VV.T4.S3: If not, identify mapping rules from software design components to software units.
For each software component (either architectural design component or detailed design component), trace the software component to the source code. Assign to each software unit the software criticality category of the software component it implements (by Inspection of traceability matrices).
- MAN.VV.T4.S4: Where software complexity poses a risk, define a complexity measure for the software units and calculate the complexity measurement (Software metrics analysis). The complexity measure could, for example, be based on the cyclomatic complexity of the procedures contained in the unit as well as the number of other units using the unit. Define a threshold to distinguish non-complex from complex units. Note: the metric and threshold are to be agreed between the customer and the ISVV supplier.
- MAN.VV.T4.S5: Fill in the error potential questionnaire (see Annex D) for each software unit, taking into account the complexity measures where applicable.
- MAN.VV.T4.S6: Assign an ISVV level to each software unit based on the software criticality category of the software unit and the error potential (by Error potential assessment).

Outputs:
- Critical system functions list (update)
- Critical software requirements list (update)
- Critical software component list (update)
- Critical software unit list
- Error potential questionnaire
- ISVV level definition
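As an illustration of subtasks MAN.VV.T4.S4 and S6 above, the complexity classification and the per-unit ISVV level assignment can be sketched as follows. This is a minimal sketch, not part of the guide: the composite metric (cyclomatic complexity plus fan-in, as suggested in S4), the threshold value, and the criticality-to-level mapping are all hypothetical examples of what the customer and the ISVV supplier might agree on.

```python
# Hypothetical sketch of MAN.VV.T4.S4/S6: classify software units as
# complex or non-complex and derive an ISVV level per unit.
# The metric, threshold and mapping are illustrative assumptions only.

def is_complex(cyclomatic, fan_in, threshold=15):
    """Composite measure: cyclomatic complexity of the unit's procedures
    plus the number of other units using this unit (fan-in)."""
    return cyclomatic + fan_in > threshold

def isvv_level(criticality, high_error_potential):
    """Map a software criticality category (A..D, A most critical) and
    the error potential assessment to a hypothetical ISVV level (2, 1, 0)."""
    if criticality == "A":
        return 2                      # full ISVV regardless of error potential
    if criticality == "B":
        return 2 if high_error_potential else 1
    if criticality == "C":
        return 1 if high_error_potential else 0
    return 0                          # category D: no ISVV tasks selected

# Illustrative units with made-up measurements and categories.
units = {
    "tm_handler": {"cyclomatic": 18, "fan_in": 4, "criticality": "B"},
    "log_format": {"cyclomatic": 3,  "fan_in": 1, "criticality": "D"},
}
for name, u in units.items():
    complex_unit = is_complex(u["cyclomatic"], u["fan_in"])
    level = isvv_level(u["criticality"], high_error_potential=complex_unit)
    print(name, "complex" if complex_unit else "non-complex", "ISVV level", level)
```

In practice the complexity measurement would come from a metrics tool and the error potential from the Annex D questionnaire; the sketch only shows how the two feed the level assignment.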
5.0 Technical Specification Analysis

5.1 Activity Overview

Technical Specification Analysis is one of the verification activities of the ISVV process (Figure 8). The Technical Specification Analysis is generally the first verification activity to be performed after the ISVV level definition at the software requirements level.

(Figure content: the ISVV process breakdown into MAN. Management (MAN.PM ISVV Process Management; MAN.VV ISVV level definition), IVE. Independent Verification (IVE.TA Technical Specification Analysis; IVE.DA Design Analysis; IVE.CA Code Analysis) and IVA. Independent Validation.)
Figure 8 Technical Specification Analysis in context

The Technical Specification Analysis activity aims to verify the software requirements against the following criteria:
- the software requirements are correct and complete
- the software requirements are externally consistent with the system partitioning and the system requirements
- the software requirements are externally and internally consistent (not implying a formal proof of consistency)
- the software requirements are unambiguous and verifiable
- the software requirements related to safety and dependability are correct (as shown by suitably rigorous methods)

The activity also aims to identify potential test cases which may be given special attention during subsequent activities of the independent software verification and validation processes.
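To make the "unambiguous and verifiable" criterion concrete: part of this check can be mechanised by scanning requirement statements for weak or vague phrasing before the manual inspection. The sketch below is purely illustrative and is not part of the guide; the weak-word list and the requirement IDs are hypothetical starting points that an ISVV supplier would tailor to the project.

```python
# Illustrative sketch: flag requirement statements containing weak or
# ambiguous wording that undermines verifiability. The weak-word list
# is a hypothetical example, not mandated by this guide.
import re

WEAK_WORDS = [
    "as appropriate", "adequate", "user-friendly", "fast",
    "if possible", "etc", "and/or", "TBD", "TBC",
]

def ambiguity_findings(requirements):
    """requirements: mapping of requirement ID to requirement text.
    Returns (req_id, weak_word) pairs for each suspicious phrase found."""
    findings = []
    for req_id, text in requirements.items():
        for word in WEAK_WORDS:
            if re.search(r"\b" + re.escape(word) + r"\b", text, re.IGNORECASE):
                findings.append((req_id, word))
    return findings

# Hypothetical requirements: the first would be flagged, the second not.
reqs = {
    "SW-020": "The software shall log events as appropriate.",
    "SW-021": "The software shall sample the sensor at 10 Hz.",
}
for req_id, word in ambiguity_findings(reqs):
    print(f"{req_id}: contains weak phrase '{word}'")
```

Such a scan only raises candidates; whether a flagged requirement is actually unverifiable remains an inspection judgement recorded as an ISVV finding.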
(Figure content: inputs System Requirements allocated to Software [RB], Software Requirements Specification [TS], Software-Hardware interface requirements [RB], Interface Control Documents [TS], Software Logical Model [TS], Software Criticality Analysis Report [PAF] and the Critical Software Requirements List (ISVV) feed the Software Requirements Verification task, which produces the Technical Specification Verification Report and a Contribution to Independent Validation.)
Figure 9 Technical Specification Analysis activity 8

5.1.1 Software requirements verification

The Technical Specification contains the software requirements which have been defined by the software supplier to represent the system requirements allocated to software in the Requirements Baseline. It is necessary to verify that this representation in terms of function, capability, performance, safety, dependability, qualification, human factors, data definitions, documentation, installation and acceptance, and operation and maintenance is complete, correct, consistent, accurate, readable, and testable. This verification is shown in Figure 10 below.

(Figure content: from the System Requirements allocated to Software (SRR), the SW-HW Interface Requirements (SRR), the SW Requirements Specification (PDR) and the Interface Control Document (PDR), the subtasks are: verify SW requirements correctness with respect to system requirements and interfaces; verify consistent documentation of SW requirements; verify dependability and safety of requirements; verify readability of SW requirements; verify timing and sizing budgets of SW requirements; verify that SW requirements are testable; verify the feasibility of producing an architectural design; verify SW requirements conformance with applicable standards.)
Figure 10 Software Requirements Verification

8 Note that the figure shows only the most important inputs and outputs.
5.2 Activity Inputs and Prerequisites

The input work products are listed in Figure 9, which shows the Technical Specification Analysis activity. The inputs to the Technical Specification Analysis activity should comprise a mature, stable, and self-consistent set to ensure that the analysis conducted on them is useful. A set of inputs which meets these criteria is available for the customer's Preliminary Design Review.

5.3 Activity Outputs

The output work products are listed in Figure 9, which shows the Technical Specification Analysis activity. Verification reports include at least an overall analysis of the work products analysed, findings, a list of open issues to probe further in subsequent analyses, suggested modifications (if any), and inputs for the independent validation test cases specification. Traceability matrices might be provided as annexes of the verification reports or as separate documents.

5.4 Activity Management

5.4.1 Initiating and Terminating Events

The activity will be initiated on receipt of the required inputs. A suitable set of input documents will be contained in the data pack submitted by the software supplier for the customer's Preliminary Design Review. The activity will be terminated on completion of the verification tasks which have been selected by the customer during verification process implementation, as identified in the ISVV Plan. The final outputs will be submitted to the customer's ISVV Technical Specification Analysis Review Meeting.

5.4.2 Completion Criteria

The completion of the Requirements Traceability Matrices and the Requirements Verification Report, and their submission to the ISVV customer, contribute to the completion of the activity. The customer's ISVV Technical Specification Analysis Review Meeting, with the participation of all involved parties, will allocate final dispositions to the findings of the activity.
5.4.3 Relations to other Activities

Safety-critical and mission-critical design drivers may be identified for further analysis in the Design Analysis activity. Potential test cases may be identified for the Validation activity.
5.5 Task Descriptions

5.5.1 Software Requirements Verification

TASK DESCRIPTION
Title: Software Requirements Verification
Task ID: IVE.TA.T1
Activity: IVE.TA - Technical Specification Analysis
Start event: PDR - Preliminary Design Review
End event: TAR - Technical Specification Analysis Review
Responsible: ISVV Supplier
Objectives:
- Verify that the representation by the software requirements of the system requirements allocated to software is complete, correct, consistent, accurate, readable, and testable.
Inputs:
- From the ISVV Customer:
  - System Requirements allocated to Software [RB; SRR]
  - Software Requirements Specification [TS; PDR]
  - Software Logical Model [TS; PDR]
  - Interface Control Document [ICD(TS); PDR]
  - Hardware-Software Interface Requirements [RB; SRR]
  - Software Criticality Analysis Report [PAF; SRR]
- From the ISVV Supplier:
  - ISVV Level 1 and Level 2:
    - Critical Software Requirements List (refer to [MAN.VV.T2])
    - ISVV level definition (refer to MAN.VV.T2)
Sub Tasks (per ISVV Level):
- IVE.TA.T1.S1: Verify the software requirements' external consistency with the system requirements (by Inspection - reviewing the traceability matrices produced by the software supplier):
  - Ensure that all system requirements allocated to software are traceable to software requirements (forward traceability).
  - Ensure that every software requirement is traceable to a system requirement (backward traceability).
  - Ensure that the relationships between the software requirements and their originating system requirements are specified in a uniform manner (in terms of level of detail and format).
  - Ensure that the characteristics specified in the system requirements allocated to software are accurately specified by the traced software requirements.
- IVE.TA.T1.S2: Verify the interface requirements' external consistency with the system requirements (by Inspection - reviewing the traceability matrices produced by the software supplier and, in case they do not exist, by producing them):
  - Ensure that all system requirements referring to interfaces and all interface requirements are traceable to interface specifications (forward traceability).
  - Ensure that every interface specification is traceable to a system or interface requirement (backward traceability).
  - Ensure that the interface specifications correctly represent the system interface requirements allocated to software and the interface requirements.
  - Ensure that data and control flows, data usage and format, and performance are considered.
  - Ensure that the relationships between the interface specifications and their originating system or interface requirements are specified to a consistent level of detail.
  - Ensure that the characteristics specified in the system requirements referring to interfaces and the interface requirements are accurately specified by the traced interface specifications.
- IVE.TA.T1.S3: Verify software requirements correctness (by Inspection or Modelling 9 or Formal Methods 9):
  - Ensure that the relationship between each software requirement and its originating system requirement is correct.
  - Ensure that the precision specified for interfaces and calculations represents the requirements of the system.
  - Ensure that the modelled physical phenomena agree with system accuracy requirements and physical laws.
- IVE.TA.T1.S4: Verify the consistent documentation of the software requirements (by Inspection or Modelling 9 or Formal Methods 9):

9 Applicable only to a range of software requirements.
  - Ensure that the software requirements are documented to a consistent level of detail.
  - Ensure that the interface specifications are documented to a consistent level of detail.
  - Ensure that interactions between software requirements, and the assumptions embedded in them, are consistent and represent the system requirements.
- IVE.TA.T1.S5: Verify software requirements completeness (by Inspection or Modelling 9):
  - Ensure that the software requirements, within the assumptions and constraints of the system, represent all the characteristics of the system designated to the software, including functional and performance specifications, software product quality requirements, security specifications, human factors engineering (ergonomics) specifications, and data definition and database requirements.
  - Ensure that the software requirements also include the specification of the interfaces external to the software item.
  - When in-flight modification is specified for flight software, ensure also that the software requirements include specifications for in-flight modification.
- IVE.TA.T1.S6: Verify the dependability and safety requirements (by Inspection or SFMECA 10 or Modelling):
  - Ensure that the software requirements and interface specifications correctly represent the system requirements relating to safety and dependability allocated to software.
  - Ensure that the software requirements and interface specifications address all the safety and dependability aspects introduced by the system requirements allocated to software.
  - Ensure that requirements are defined to control the software's contribution to system hazardous events by analysing software failure modes and their possible propagation to system level (e.g. ensure that robustness-related mechanisms, such as exception handling, are properly defined).
  - Ensure that the software requirements describe proper features for Fault Detection, Isolation and Recovery (FDIR) in accordance with the system requirements allocated to software.
  - Ensure that the implemented FDIR mechanisms are independent of the faults that they are supposed to deal with.
- IVE.TA.T1.S7: Verify the readability of the software requirements (by Inspection):
  - Ensure that the software requirements documentation has a clear and consistent structure and that the software requirements are free from ambiguous terms.
  - Ensure that the documentation is intelligible for its target readers and that all the elements required for its understanding are provided (e.g. definitions of acronyms, terms, and conventions).
- IVE.TA.T1.S8: Verify the timing and sizing budgets of the software requirements (by Inspection):
  - Ensure that the software requirements for timing and sizing budgets (e.g. memory usage, CPU utilization) correctly represent the system performance requirements allocated to software.
  - Ensure that the software requirements for timing and sizing budgets are specified with the accuracy required by the system performance requirements allocated to software.
  - Ensure that the acceptance criteria for validating the software timing and sizing budget requirements are objective and quantified.
- IVE.TA.T1.S9: Identify test areas and test cases for Independent Validation (from IVE.TA.T1 tasks or Modelling):
  - Identify software requirements which cannot be analysed adequately by independent verification and which therefore require the execution of independent validation tests.
  - Annotate this information (e.g. requirements, test cases) as a contribution to the Independent Validation activities.
- IVE.TA.T1.S10: Verify that the software requirements are testable (by Inspection):
  - Ensure that the acceptance criteria for validating the software requirements are objective and quantified.
  - Ensure that each software requirement is testable against objective acceptance criteria.
  - Ensure that the software requirements are unambiguous.
- IVE.TA.T1.S11: Verify software requirements conformance with applicable standards (by Inspection):
  - Ensure that the software requirements are compliant with the applicable standards, references, regulations, policies, physical laws, and business rules.

Outputs:
- Technical Specification Verification Report (including the ISVV findings)
- Contribution to Independent Validation

10 In case the SFMECA was applied in the criticality analysis performed at requirements level for the ISVV level definition (MAN.VV activity), any RIDs generated then should be added to this Technical Specification Verification Report.
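Subtasks IVE.TA.T1.S1 and S2 above amount, in part, to mechanical checks on the supplier's traceability matrices. A minimal sketch of such a forward/backward traceability check is shown below; the data layout (a simple mapping from system requirement IDs to the software requirement IDs tracing to them) and all identifiers are hypothetical illustrations, not prescribed by this guide.

```python
# Hypothetical sketch of the forward/backward traceability checks of
# IVE.TA.T1.S1. The matrix layout and requirement IDs are illustrative.

def check_traceability(system_reqs, software_reqs, trace):
    """trace maps a system requirement ID to the set of software
    requirement IDs implementing it. Returns findings as strings."""
    findings = []
    # Forward traceability: every system requirement allocated to
    # software must trace to at least one software requirement.
    for sys_id in sorted(system_reqs):
        if not trace.get(sys_id):
            findings.append(f"{sys_id}: no software requirement traces to it")
    # Backward traceability: every software requirement must be
    # traceable to some system requirement.
    traced_sw = {sw for sws in trace.values() for sw in sws}
    for sw_id in sorted(software_reqs):
        if sw_id not in traced_sw:
            findings.append(f"{sw_id}: not traceable to any system requirement")
    return findings

# SW-012 has no originating system requirement, so one backward finding results.
findings = check_traceability(
    system_reqs={"SYS-001", "SYS-002"},
    software_reqs={"SW-010", "SW-011", "SW-012"},
    trace={"SYS-001": {"SW-010"}, "SYS-002": {"SW-011"}},
)
```

The qualitative checks of S1 (uniform specification of the relationships, accurate representation of characteristics) remain inspection work; only the existence of traces lends itself to this kind of automation.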
6.0 Design Analysis

6.1 Activity Overview

Design Analysis is one of the verification activities of the ISVV process (Figure 11). The Design Analysis is in general performed after the Technical Specification Analysis, and after the Criticality Analysis at the component/software unit level has been performed.

(Figure content: Design Analysis in the ISVV process breakdown: MAN. Management (MAN.PM ISVV Process Management; MAN.VV ISVV level definition), IVE. Independent Verification (IVE.TA Technical Specification Analysis; IVE.DA Design Analysis; IVE.CA Code Analysis) and IVA. Independent Validation.)
Figure 11 Design Analysis in context

The Design Analysis consists of the evaluation of the design of each software product, i.e. the analysis of the Design Definition File (DDF) and the Design Justification File (DJF), focusing on aspects such as:
- reliability, availability and safety, ensuring that sufficient and effective fault detection, isolation and recovery mechanisms are included
- error handling mechanisms
- initialisation/termination of software components
- interfaces between software components and between software and hardware components
- threads/processes synchronisation and resource sharing
- budget analysis, including schedulability analysis

Design Analysis focuses on two main products, the Software Architectural Design and the Detailed Design, corresponding to the two main phases of the analysis. In addition, Design Analysis should analyse the software user manual.
(Figure content: the Software Requirements Specification, Interface Control Documents, Design Justification File, Software Architectural Design, Software Detailed Design, Software Item (Application), SW User Manual, traceability matrices, software safety/dependability analysis reports, technical budgets/schedulability analysis, software model simulations, the ISVV level definition and the Requirements Verification Report feed the Architectural Design Verification, Detailed Design Verification and Software User Manual Verification tasks, which produce the Software Architectural Design Verification Report, the Software Detailed Design Verification Report, the Software User Manual Verification Report and contributions to IVA.)
Figure 12 Design analysis

6.1.1 Architectural Design Independent Verification

The architectural design expresses the high-level organisation of the software (i.e. how the software will be arranged in terms of components). The architectural design is derived from the software requirements expressed in the Technical Specification and the interfaces described in the applicable Interface Control Documents. It is necessary to verify that the architectural design fulfils the requirements and presents an adequate quality level, and that the architectural design has been completely, correctly, consistently and accurately derived from the Technical Specification and the applicable ICDs. The architectural design itself shall also be verified in order to check whether it is consistent, correct, complete and readable such that it can be effectively tested, and whether it is in conformance with the applicable standards.
If autocode is to be produced from models, this task also includes activities to verify these models and the tools, methods, languages and environment used to produce both the models and the autocode. Figure 13 below illustrates the verification subtasks to be performed as part of the software architectural design independent verification.

(Figure content: from the Technical Specification (PDR), the SW Architectural Design (PDR) and the Interface Control Documents (PDR), the subtasks are: verify architectural design correctness with respect to the Technical Specification; verify interfaces consistency between different SW components; verify architectural design completeness; verify the dependability & safety of the design; verify the readability of the architectural design; verify that the software architectural design components are testable; verify architectural design conformance with applicable standards.)
Figure 13 Software Architectural Design Verification

6.1.2 Software Detailed Design Independent Verification

The detailed design expresses the low-level organisation of the software (i.e. how the software will be organised in terms of units). The detailed design is primarily derived from the architectural design; however, it also derives from the Technical Specification and the applicable ICDs. To verify the detailed design, one shall verify the external consistency by inspecting the detailed design against the Architectural Design, the Technical Specification and the Interface Control Documents traceability matrices, in order to assert whether the detailed design has been correctly derived. One shall also verify the detailed design itself in order to check whether it is consistent, correct, complete and readable such that it can be effectively tested, and whether it is in conformance with the applicable standards. The independent verification of the detailed design can be reconsidered where a high level of automation was applied by the software supplier when creating the detailed design documents.
If autocode is to be produced from models, this task also includes activities to verify these models and the tools, methods, languages and environment used to produce both the models and the autocode. Figure 14 below illustrates the verification tasks to be performed as part of the software detailed design independent verification.
(Figure content: from the Technical Specification (DDR), the SW Architectural Design (DDR), the Detailed Design (DDR) and the Interface Control Documents (PDR, DDR), the subtasks are: verify detailed design correctness with respect to the Technical Specification; verify interfaces consistency between different SW units; verify detailed design completeness; verify the dependability & safety of the design; verify the readability of the detailed design; verify the timing and sizing budgets of the software; verify that the software units are testable; verify detailed design conformance with applicable standards.)
Figure 14 Software Detailed Design Verification

6.1.3 Software User Manual Analysis

The software user manual describes the aspects of the software that are relevant for the end user. It is essential for the operation of the software. The software user manual mainly derives from the user requirements, but it is also affected by all the other software project lifecycle phases. It is therefore necessary to verify the software user manual in terms of completeness, correctness and readability, from the user's perspective and from the system point of view. Figure 15 below illustrates the verification tasks to be performed as part of the software user manual verification.

(Figure content: from the Technical Specification (DDR), the SW Architectural Design (DDR), the Detailed Design (DDR), the Software User Manual (DDR) and the SW Item (Application) (CDR), the subtasks are: verify the readability of the User Manual; verify the completeness of the User Manual; verify the correctness of the User Manual.)
Figure 15 Software User Manual Verification

6.2 Activity Inputs and Prerequisites

The input work products are listed in Figure 12 above, which shows the Design Analysis activity. The prerequisite for starting the Design Analysis activity is the availability of the listed inputs. Moreover, the design artefacts shall present a satisfactory maturity level.
6.3 Activity Outputs

The output work products are listed in Figure 12 above, which shows the Design Analysis activity. Verification reports include at least an overall analysis of the work products analysed, findings, a list of open issues to probe further in subsequent analyses, suggested modifications (if any), and inputs for the independent validation test cases specification. Traceability matrices might be provided as annexes of the verification reports or as separate documents.

6.4 Activity Management

6.4.1 Initiating and Terminating Events

The Design Analysis activities may be initiated as soon as mature design artefacts are available. In general this coincides with the approval of the Software Architectural Design at the PDR. Although several iterations of the design activity may be performed, thus potentially extending it until the end of the development project, the Design Analysis activity ends with the Design Analysis Review (DAR) (as defined in section 2.2 above).

6.4.2 Completion Criteria

Design Analysis becomes complete after the Architectural Design, the Detailed Design, and the Software User Manual have been verified in accordance with tasks IVE.DA.T1 to IVE.DA.T3 (refer to section 6.5).

6.4.3 Relations to other Activities

This section identifies the relations between this activity and the remaining ISVV activities. The tailoring of the Design Analysis activity is performed as part of the ISVV level definition activity. The Criticality Analysis may also provide useful inputs to the Design Analysis activity, namely to the subtask "Verify the dependability & safety of the design" (refer to section 6.5). Strong relations exist between the Technical Specification Analysis and the Software Design Analysis. The outputs of the Technical Specification Analysis are applicable inputs to the Design Analysis. In addition, the Technical Specification Analysis may raise issues to be closed during the Design Analysis.
Design Analysis is also likely to provide inputs to the independent validation test cases specification.

6.5 Task Descriptions

6.5.1 Architectural Design Verification

TASK DESCRIPTION
Title: Architectural Design Verification
Task ID: IVE.DA.T1
Activity: IVE.DA - Design Analysis
Start event: PDR - Preliminary Design Review
End event: DAR - Design Analysis Review
Responsible: ISVV Supplier
Objectives:
- Evaluate the software architectural design for external and internal consistency, correctness, completeness, testability, feasibility of detailed design, readability, timing & sizing budgets, and dependability & safety.
Inputs:
- From ISVV Customer:
  - Software Requirements Specification [TS; PDR]
  - Interface Control Documents [ICD(TS); PDR]
  - Software Architectural Design, including software models if produced [DDF; PDR]
  - Software Dependability and Safety Analysis Reports [PAF; PDR]
  - Schedulability Analysis (including WCET) [DJF; PDR]
  - Technical Budgets [DJF; PDR]
  - Applicable standards (including modelling standards if models are produced)
  - Software model simulations (including relevant test plans, if any) [DJF; PDR]
  - Design Justification File, including:
    - the choice of model type(s), modelling language and tool suite(s)
    - the modelling environment version, configuration, and options
    - the choice of code generator and compiler and the options intended to be used
    - modelling language reference manuals
- From ISVV Supplier:
  - ISVV Level 1 and Level 2:
    - Technical Specification Verification Report [IVE.TA.T1]
    - ISVV level definition (from the Technical Specification ISVV level definition) [MAN.VV.T2]
    - Contribution to Independent Validation (from TS Analysis, IVE.TA.T1)
Sub Tasks (per ISVV Level):
- IVE.DA.T1.S1: Verify the SW architectural design external consistency with the Technical Specification (by Inspection - reviewing the traceability matrices produced by the software developer):
  - Ensure that all software item requirements (whether in text or defined in models) are traceable to a software component (including software component models) and that the functionality described in the requirement is implemented by the software component (or software component model) (forward traceability).
  - Ensure that all software components (or model components) incorporated in the architectural design have allocated requirements (and a corresponding model in the TS, if applicable), that each software component (or component model) does not implement more functionalities than the ones described in the requirements allocated to it (backward traceability), and that the functionality is clear and unambiguous. (NOTE: Ensure that any model component in the architectural design which is not 100% complete adequately carries over the software requirements allocated to it in an unchanged form. Eventually, the IVE.DA.T2 task shall be carried out for such models.)
  - For each requirement, whether expressed using a model or text, traced to more than one component, ensure that the implementation of functionalities is not repeated.
  - Ensure that all the relationships between the architectural design elements and the Technical Specification are specified in a uniform manner (in terms of level of detail and format).
  - Ensure that both the static architecture (e.g. software decomposition into software elements such as packages, and classes or modules) and the dynamic architecture (e.g. specification of the software active objects such as threads/tasks and processes), described in the software architectural design, adequately implement the software requirements.
  - When software component models are produced by the SW supplier, also ensure that these requirements are not in conflict with, or adversely affected by, a) the limitations and constraints of the software models incorporated in the architectural design or b) the modelling languages used to express them.
- IVE.DA.T1.S2: Verify the SW architectural design external consistency with the Interface Control Documents (by Inspection - reviewing the traceability matrices produced by the software developer):
  - Ensure that the interface design (with other software components, hardware, the user, etc.)
is consistent with the applicable Interface Control Documents. (NOTE: Ensure that any traces between hardware interfaces and any software model are adequately justified, since modelling languages are in general not able to work at the hardware interface level.)
  - Ensure that interfaces are designed in a uniform way.
  - Ensure that each interface provides all the required information from the underlying component.
- IVE.DA.T1.S3: Verify interfaces consistency between different SW components (by Inspection and/or Modelling 11):
  - Ensure that software item internal interfaces (e.g. interfaces between software components) are consistent. Consider both data and control flows. Include verification of data format, accuracy, and timing/performance.
  - Ensure that all inputs of one software component are produced by some other component and that all outputs of a component are consumed by some other component.

11 UML: component, activity, communication, interaction, sequence, timing
Issue 2 Revision 0 Page 36 When modelling is used by the SW suppliers, then also: - Ensure that each model component is integrated with the rest of the software in a coherent manner with low coupling and that the wrapper implements all parts of the interface and nothing more. - Ensure that all model component input parameters are provided by the wrapper software component and that all model component outputs are consumed by the wrapper software. - Ensure that all model components are integrated into their wrapper software components according to any hypotheses, assumptions, and principles applicable to the corresponding modelling languages. - Ensure that the interfaces between the model and imported functions and libraries, if any, are correctly and consistently defined. - Ensure that imported functions can be regarded as terminal objects, i.e. the imported function should not call any other elements from the model. - Ensure that the use of Modelling Bridges has been carefully investigated and justified with respect to semantic incompatibility between the languages, tool weaknesses, or any limitations and shortcomings. - IVE.DA.T1.S4: Verify architectural design correctness (by Inspection or Modelling 12 or Simulation 13 ) Ensure that architectural design complexity and modularity are in accordance with quality requirements. Ensure that the software architectural design implements a proper sequence of events, inputs, outputs and interfaces logic flow. For real-time software ensure the correctness and the consistency of the computational model (when provided), and when software component models are produced by the SW suppliers, then also ensure that the computational model is not in conflict with or adversely affected by the limitations and constraints of the software models incorporated in the architectural design or the modelling languages used to express them. 
When software models are produced by the SW suppliers, if the software model includes state diagrams at this point, ensure that they have been correctly designed (e.g. decisions on whether to use simple/flat state machines or extended/safe state machines). Ensure that a well-documented justification of the design solution is included in the design document: it is important that the operations team later knows why the solution was chosen and what its dynamics and constraints are (at the maintenance stage it is otherwise difficult to understand how things are done, why, and the impact of any change). - IVE.DA.T1.S5: Verify architectural design completeness (by Inspection) Ensure that the software architectural description includes: hierarchy, dependency and interfaces of software components; process, data and control aspects of the software components; static and dynamic architecture of the software and the mapping between them; and, when modelling is used by the SW suppliers, a description of how the model components fit into the static and dynamic architecture of the software. For real-time software ensure also that a computational model is provided as part of the software architectural design and, when modelling is used by the SW suppliers, a description of how the model components fit into the computational model. When modelling is used by the SW suppliers, also: - Ensure that the design is clear about which parts of the software shall be developed with manual coding, code reuse and auto-code. - Ensure that the justification for selecting the software components targeted for model-based development is provided. - IVE.DA.T1.S6: Verify the dependability & safety of the design (by Inspection, Modelling, Simulations, SFMECA, Formal methods) Ensure that the software architectural design minimises the number of critical software components without introducing undesirable software complexity. 
Ensure that the software architectural design is defined to control the software's contribution to system hazardous events by analysing software failure modes and their possible propagation to system level (e.g. ensure that robustness-related mechanisms, such as exception handling, are properly defined, and ensure that any new critical parts of the software model introduced by the design are identified). Ensure that the software architectural design (including its model components if any) implements proper features for Fault Detection Isolation and Recovery (FDIR) in accordance with the allocated technical specification requirements. Ensure that the implemented FDIR mechanisms are independent of the faults that they are supposed to deal with. Ensure that the integration of the FDIR mechanisms into the complete software is properly designed, with particular attention to the management of irregular fault events in synchronous or cyclic processes. If model components are produced by the SW suppliers, ensure that proper simulations have been performed in order to verify the architectural design of the FDIR mechanisms, and that these simulations are as complete as possible for the available software models. - IVE.DA.T1.S7: Verify the readability of the architectural design (by Inspection) 12 UML: component, composite, deployment, package, activity, sequence, state machine 13 Simulation may be used to validate high level algorithms
Ensure that the architectural design documentation has a clear and consistent structure. Ensure that the documentation is intelligible for the target readers and that all the required elements for its understanding are provided (i.e. acronyms, terms, conventions used, etc.), including an overview of the system context in which the model components (if produced) appear and an overview of each model component. - IVE.DA.T1.S8: Verify the timing and sizing budgets of the software (by Inspection or Schedulability analysis including WCET) Ensure that the software architectural design implements a proper allocation of timing and sizing budgets (e.g. memory usage, CPU utilization, etc.) by reviewing the analysis and simulations (when simulations are produced by the SW suppliers) performed by the software developer. For real-time software verify the developer's schedulability analysis and simulations (when simulations are produced by the SW suppliers). - IVE.DA.T1.S9: Identify test areas and test cases for independent Validation (from other IVE.DA.T1 tasks) Identify areas and items that cannot be sufficiently analysed by means of Independent Verification only and that therefore require execution of validation tests. Annotate this information (test areas/items, test cases, etc.) as a contribution to the Independent Validation activities. NOTE: This subtask shall receive, refine and update the contribution to Independent Validation from the TS Analysis (IVE.TA.T1.S9). - IVE.DA.T1.S10: Verify architectural design conformance with applicable standards (by Inspection) Ensure that the design is compliant with applicable standards, references, regulations, policies, physical laws, and business rules (including model component naming; complexity and modularity standards shall also be followed when models are produced by the SW suppliers). 
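As an illustration of the kind of schedulability cross-check that IVE.DA.T1.S8 asks the ISVV supplier to verify, the sketch below applies classic fixed-priority (rate-monotonic) response-time analysis: the worst-case response R of a task equals its own WCET plus the interference of all higher-priority tasks, R = C + sum over higher-priority j of ceil(R/Tj) * Cj, iterated to a fixed point. The task set and WCET budgets are hypothetical; a real verification would use the developer's figures.

```python
# Hedged sketch of rate-monotonic response-time analysis for IVE.DA.T1.S8.
# Tasks are (WCET, period) pairs; deadline is assumed equal to the period.
import math

def response_time(tasks, i):
    """tasks: list of (wcet, period), sorted by priority (shortest period first).
    Returns the worst-case response time of task i, or None if it misses
    its deadline (taken here as equal to the period)."""
    c_i, t_i = tasks[i]
    r = c_i
    while True:
        # Interference from each higher-priority task j: ceil(r / T_j) releases.
        r_next = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        if r_next == r:
            return r if r <= t_i else None
        if r_next > t_i:
            return None
        r = r_next

# Hypothetical task set: (WCET, period) in milliseconds.
tasks = [(1, 4), (2, 6), (3, 12)]
for i, (c, t) in enumerate(tasks):
    print(f"task {i}: worst-case response {response_time(tasks, i)} ms (period {t} ms)")
```

A divergence between such an independent computation and the developer's schedulability analysis is exactly the kind of finding this subtask is meant to surface.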
- IVE.DA.T1.S11: Verify the tests performed on the high level model (if models are produced by the SW suppliers) (by Inspection, Modelling, or Simulations) Verify the tests and simulations performed by the developers. Check the model coverage and ensure that all components have been activated; if not, ensure that there is adequate justification. Identify and execute additional tests that complement the tests performed by the developers. - IVE.DA.T1.S12: Verify the development, verification and testing methods and environment (if models are produced by the SW suppliers) (by Inspection) The qualification of development and verification/validation tools, plus the tools' configuration (e.g. compilation options, autocoding options, etc.), shall be checked. Ensure that adequate modelling methods are chosen (documented and justified) to model the various aspects of the software: - Infrastructure of concurrent processes (for real-time software) - Communication/Message exchange - Behaviour - Performance - Pure functional software Ensure that adequate modelling language, modelling tools, code generator, modelling standards and coding standards are chosen (documented and justified) by the developers, and that the choices have been made according to the application category of the software to be modelled. Ensure that the justifications correctly identify and adequately address the limitations and constraints of the model components, the modelling languages, and the tools used to express them. 
Ensure that non-supported modelling features are clearly identified and that corresponding workarounds are devised in advance to avoid problems during later completion of models. Ensure that the available code generator for the chosen modelling method is demonstrated in advance to meet quality requirements in terms of performance and coding standards, to avoid problems during later code generation. Ensure that standard interfaces to modelling tools are demonstrated to be well supported (if relevant) in advance, to avoid problems during later model transformations. Ensure that known quality problems with the code generator are avoided by verifying that no problematic modelling language constructs are used. Ensure that all tool customisations are optimal and appropriate (documented and justified). Ensure that the code generator and compiler options that will be used to generate the target code are optimal and appropriate (documented and justified). Ensure that the code generator and compiler versions are documented. - IVE.DA.T1.S13: If models are produced by the SW suppliers, then construct model test cases Evaluate Task Input: Evaluate the results from the other verification activities. Achieve basic knowledge about the software, either during the other verification activities or during this evaluation subtask.
Achieve detailed knowledge about the subjects from the critical function list. Perform Analysis, e.g.: Check that events happening too early/late are handled correctly. Analyze FDIR requirements. Check that injections outside or at the specified boundaries are handled correctly. Check that potential runtime errors and overflow/underflow are taken care of. Investigate dataflow conflicts. Check that worst case situations are handled correctly. Inspect the Software User Manual focusing on operator failure. Analyze the interaction with external software with focus on degraded functionality of the external software products. Investigate worst case load scenarios, covering robustness of the model with respect to deviations from timing requirements, e.g. events happening too early/late. Investigate worst case load scenarios, including robustness of the model with respect to injections outside or at the boundary. Investigate potential runtime errors, overflow/underflow, and dataflow conflicts. Identify properties of the model which are related to safety and dependability. Design and execute destructive tests which attempt to falsify these properties. Ensure that the final code behaves as expected: even if the model is compliant with the textual specification and requirements, it does not follow that the behaviour of the model is compliant with the system engineer's expectations. Extensive feedback from functional and behavioural tests of the model is likely to be necessary; the ISVV supplier can propose additional testing and obtain feedback from system engineering. Test/Ensure that the model is robust, i.e. that it does not break and that it keeps functioning and performing even under the most extreme conditions. Probe the limitations and restrictions of the modelling tools. Independently identify the parts of the model which are dependent on the computational precision. 
Design and execute tests which attempt to make the model produce inaccurate results. If possible, independently identify required properties (software requirements) which have not been adequately demonstrated by the developers at model level. Design tests aiming at demonstrating that these properties are not implemented correctly. Write the Independent Model Validation Test Plan. Describe each identified model test case in terms of: test rationale, test overview, test environment, input data, output data, starting conditions, and detailed test step descriptions including step description, expected results and pass/fail criteria. - IVE.DA.T1.S14: If models are produced by the SW suppliers, then construct model test procedures Achieve knowledge about the verification and simulation capabilities of the modelling tool suite. Achieve knowledge about the test language used to implement the test cases, the method to store test cases for subsequent execution, and the method for capturing and saving test results. Implement Model Test Cases into Model Test Procedures. Express the test cases in the test language provided by the modelling tool suite: relate test case parameters with software parameters, relate test case actions with software functions, set up conditions for test procedure failure/success, and include test report generation commands. Update the Independent Model Validation Test Plan with any additional test cases. - IVE.DA.T1.S15: If models are produced by the SW suppliers, then execute the model test procedures Execute all the implemented Model Test Procedures and generate the test report. Investigation of failed tests: check that the failure is due to the model under test. Produce Test Report: describe all tests and observations and attach all test procedures (scripts) and test logs to the report. 
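The test plan entries prescribed by IVE.DA.T1.S13/S14 (test rationale, input data, expected results, pass/fail criteria) can be captured in a simple record and executed mechanically against the model. The sketch below is only an illustration of that structure: the model stub, the field values and the test ID are all hypothetical, and a real campaign would use the test language of the modelling tool suite.

```python
# Sketch of an Independent Model Validation Test Plan entry and its execution.
from dataclasses import dataclass, field

@dataclass
class ModelTestCase:
    test_id: str
    rationale: str       # why the test exists (e.g. boundary injection, FDIR)
    input_data: dict     # stimulus applied to the model
    expected: dict       # expected outputs = pass/fail criterion
    steps: list = field(default_factory=list)  # detailed step descriptions

def run(test, model):
    """Execute one test case against a callable model; return a report record."""
    actual = model(test.input_data)
    passed = all(actual.get(k) == v for k, v in test.expected.items())
    return {"test_id": test.test_id, "passed": passed, "actual": actual}

# Hypothetical model under test: clamps a commanded value to a safe range.
def clamp_model(inp):
    return {"cmd_out": max(-10, min(10, inp["cmd_in"]))}

tc = ModelTestCase(
    test_id="IVT-001",
    rationale="Injection outside the specified boundary must be clamped",
    input_data={"cmd_in": 250},
    expected={"cmd_out": 10},
)
report = run(tc, clamp_model)
print(report["test_id"], "PASS" if report["passed"] else "FAIL")
```

Collecting such records into one plan gives the traceable test report that IVE.DA.T1.S15 asks to be produced and attached.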
Outputs:
- Software Architectural Design Verification Report (including ISVV findings)
- Contribution to Independent Validation (updated with Design Analysis findings)
6.5.2 Detailed Design Verification
TASK DESCRIPTION
Title: Detailed Design Verification
Task ID: IVE.DA.T2
Activity: IVE.DA - Design Analysis
Start event: DDR - Detailed Design Review
End event: DAR - Design Analysis Review
Responsible: ISVV Supplier
Objectives: Evaluate the software detailed design for internal consistency, correctness, completeness, accuracy, readability and consistency with the software detailed design of other software items.
Inputs:
- From ISVV Customer:
- Software Requirements Specification [TS; DDR]
- Interface Control Documents [ICD(TS); DDR]
- Software Architectural Design including software models if produced [DDF; DDR]
- Software Detailed Design including software models if produced [DDF; DDR]
- Software Dependability and Safety Analysis Reports [PAF; DDR]
- Traceability Between TS and SW Detailed Design including software models if produced [DJF; CDR]
- Traceability Between SW Architectural Design and SW Detailed Design including software models if produced [DJF; CDR]
- Traceability Between ICD and SW Detailed Design including software models if produced [DJF; CDR]
- From ISVV Supplier:
- Technical Specification Verification Report [IVE.TA.T1]
- Software Architectural Design Verification Report [IVE.DA.T1]
- ISVV level definition (from Technical Specification Analysis [MSN.VV.T2])
- ISVV level definition (from Architectural Design Analysis [MAN.VV.T3])
- Contribution to Independent Validation (from Architectural Design Verification) [IVE.DA.T1]
Sub Tasks (per ISVV Level):
- ISVV Level 1 and Level 2:
- IVE.DA.T2.S1: Verify the detailed design external consistency with the Technical Specification (by Inspection - reviewing the traceability matrices produced by the software developer): Ensure that all software requirements allocated to a software component (or component model if produced) are traceable to its software units (or model elements) and that the functionality described in the requirements is correctly implemented by the corresponding software component (or model element) (forward traceability). Ensure that all software components (or model elements) have allocated requirements and that each software component (or model element) is not implementing more 
functionalities than the ones described in the requirements allocated to it (backward traceability). For each requirement traced to more than one software component (or model element), ensure that the implementation of functionalities is complete and not repeated. Ensure that the relationships between the software components (or model elements) and the software requirements are specified in a uniform manner (in terms of level of detail and format). - IVE.DA.T2.S2: Verify the detailed design external consistency with the Interface Control Documents (by Inspection - reviewing the traceability matrices produced by the software developer): Ensure that the interface detailed design (with other software components, hardware, the user, etc.), or model interfaces if models are produced by the SW suppliers, is consistent with the applicable Interface Control Documents (NOTE: ensure that any traces between hardware interfaces and any software model are adequately justified, since modelling languages are in general not able to work at the hardware interface level). - IVE.DA.T2.S3: Verify the detailed design external consistency with the Architectural Design 14 (by Inspection - reviewing the traceability matrices produced by the software developer): Ensure that the static architecture (e.g. software decomposition into software elements such as packages, classes or modules) and the dynamic architecture (e.g. specification of the software active objects such as threads / tasks and processes) described in the software detailed design adequately implement the software requirements. In case models are produced by the SW supplier, this also means to: a) ensure that each software model correctly addresses all functionality assigned to it and that the static and dynamic structures of the model adequately implement the design. NOTE: The verification concerns the pure functionality as well as the compliance of the model with the tasking/process/thread model, the synchronous vs. 
asynchronous aspects, and the event-driven vs. data-driven control aspects; b) if there is a logical model developed by system engineers, ensure that there is a corresponding software engineering model developed by the software engineers (NOTE: the logical and software engineering models should be functionally equivalent); c) ensure that all model components identified in the architectural design are traceable to a software model and that the functionality described for the model component is implemented by the software model (forward traceability); d) ensure that all software models, traceable to a model component identified in the architectural design, do not implement more functionality than what is described for that model component (backward traceability); e) for model components identified in the architectural design that are traced to more than one software model, ensure that the implementation of the functionality described for the model component is complete and not repeated; f) ensure that all software components of foreign code designated in the architectural design to be imported in model components are traceable to placeholders in the software models and that the placeholders correctly implement the interfaces of the software units. Ensure that the software components (and model elements) correctly implement the internal interfaces described in the software architectural design (and model components). Ensure that the software design method used for the detailed design is consistent with the one used for the software architectural design (when models are produced by the SW suppliers, this also means ensuring that the modelling method used for software models is consistent with the one used for model components in the architectural design). For real-time systems ensure the consistency of the detailed design (and model elements) with the computational model defined in the software architectural design (e.g. the SW components implementing a given component are consistent with the computational model of that component). - IVE.DA.T2.S4: Verify interfaces consistency between different SW components (by Inspection and/or Modelling 11 ) Ensure that software item internal interfaces (e.g. interfaces between software components) are consistent. Consider both data and control flows. Include verification of data format, accuracy, and timing/performance. Ensure that interfaces (and model interfaces if any) are designed in a uniform way. 14 The verification of the Detailed Design traceability to the Architectural Design is only considered under the condition that traceability matrices are available at the beginning of Detailed Design Traceability Verification.
Ensure that each interface (or model interface) provides all the required information from the underlying component (or component model respectively). Ensure that all inputs of one software unit are produced by some other unit and that all outputs of a unit are consumed by some other unit. Ensure that all function inputs are set before they are used (no undefined variables). Ensure that for each function called, the output of the function is consumed. In case models are produced by the SW supplier, then also: - Ensure that all static rules imposed by the modelling language are adhered to. Note that modelling environments usually provide automated checking for conformance to the static semantics of the modelling language, including data formats. - Ensure that the specified input/output profile (signature) of an imported entity defined in a foreign language is consistent with the signature of the placeholder in the model. This check can be skipped if it has been performed in the scope of IVE.DA.T2.S1 and no new external entities have been imported since then. - IVE.DA.T2.S5: Verify detailed design correctness (by Inspection or Modelling 15 or Simulation 16 ) Ensure that detailed design (and any model element) complexity and modularity are in accordance with quality requirements. Ensure that the software detailed design (and model elements) implements a proper sequence of events, inputs, outputs, interfaces logic flow, allocation of timing and sizing budgets, FDIR and error handling. Ensure that the detailed design (and model elements) is compatible with the target platform (i.e. ensure that platform-dependent issues are compatible with the target hardware). 
If element models are produced by the SW suppliers, also: - Ensure that the underlying assumptions, basic mechanics, and inherent limitations for the selected type of model and its particular application are correctly identified, documented, and justified with reference to the requirements. - Ensure that the retained data model is as required (this refers to the part of the state vector which is internal to the model). - Ensure that the basic nodes (i.e. model atoms) have clear, unambiguous, and unique functionality. - Ensure that the model behaves as expected: even if the model is compliant with the requirements it may still not behave as the customer or developers expected. This check may be based on model behavioural tests via simulations. Note that the ISVV supplier can provide a complementary probing of the customer's satisfaction with the functionality and behaviour of the model. - Ensure that assertions expressed in the model are consistent with the requirements and are used correctly and as intended for the particular modelling language and tools. 15 UML: class, component, package, activity, interaction, sequence, state machine 16 Simulation can be used to verify sporadic algorithms (e.g. communication protocols) 17 The method used here aims at verifying the independence of the FDIR mechanisms from the faults they are supposed to handle
- Ensure that there are no assertions on input and no assertions on output unless they come from system level or are otherwise justified. - Ensure that formal proofs provided by the developers are considered as complementary indications of absence of faults and that they do not replace simulations, i.e. they are never used as a justification to omit simulations, except where it is obvious that there is no risk of numerical surprise (e.g. pure Boolean logic). - IVE.DA.T2.S6: Verify detailed design completeness (by Inspection) Ensure that the detailed design description includes the decomposition of the software into software low level components, the update of the software item internal interfaces design, and the physical model of the software items described during the software architectural design. For real-time software ensure also that a computational model is provided as part of the software detailed design. In case models are produced by the SW suppliers (even if there are no clear formalised requirements on what models shall contain, and although the modelling environment will normally provide mechanisms and tools for automatic checking of formal completeness, leaving very little if anything to be verified manually), also: - Ensure that all declared functions in the model are called (this is usually checked automatically). - Ensure that all called functions in the model are declared (this is usually checked automatically). - Ensure that no declared function in the model has an empty body unless it is a placeholder for an imported entity defined in a foreign language. - Ensure that the modelling environment version and configuration are documented. This is particularly important if new versions of the modelling environment are released frequently: there is a risk of incompatibility between previously developed libraries of reusable model elements and newly developed model elements. 
There is also a risk of confusion if the system engineers and the software engineers are not using the same versions of the modelling tools. - IVE.DA.T2.S7: Verify the dependability & safety of the design (by Inspection or Modelling or Simulations or SFMECA 17 or Formal Methods) Ensure that the software detailed design (and the model) minimises the number of critical software components without introducing undesirable software complexity. In case models are produced by the SW suppliers, ensure that the properties of the model which are related to safety and dependability are identified and either proven to hold or have withstood any attempts of falsification at the model level. Ensure that the software design (and model) is defined to control the software's contribution to system hazardous events by analysing software failure modes and their possible propagation to system level (e.g. robustness-related mechanisms, such as exception handling where relevant, are properly defined). Ensure that the software detailed design (and model) implements proper features for Fault Detection Isolation and Recovery (FDIR) in accordance with the technical specification. Ensure that the implemented FDIR mechanisms are independent of the faults that they are supposed to deal with. Ensure that the software design (and the model) correctly handles hardware faults and that the implemented software logic is not harming the hardware in any way (NOTE: in case models are produced by the SW suppliers, modelling languages are usually not very good at handling hardware (low level) interfaces. As a consequence, the parts of the software that handle the low-level interface and the parts of the software logic that process the low-level data (i.e. the model) might have to be separated into different software and model elements. This increases the risk of faults). 
Ensure that no run-time errors can happen and that defensive programming techniques (and, if models are produced, also modelling techniques) are used, e.g. implementation of proper numerical protection at the model level: no divide by zero, no logarithm of zero, no tan(PI/2 + n*PI), etc. Ensure that the detailed design (and the model) includes proper verification of inputs and consistency checking. Ensure that the software detailed design (and the model) implements proper error handling mechanisms. If models are produced by the SW suppliers, also: - Ensure that relevant events are reported by the model using the appropriate channels. - Ensure that the model does not include any hazardous modelling language constructs. - Ensure that no dead or deactivated model elements exist. If deactivated model elements exist, ensure that their activation will not lead to hazardous conditions. Note that model coverage measurements can be used to prove that there are no dead model elements. - For concurrent systems ensure that the model does not contribute to any deadlocks or race conditions. - IVE.DA.T2.S8: Verify the readability of the detailed design (by Inspection) Ensure that the detailed design documentation has a clear and consistent structure. Ensure that the documentation (and the models) is intelligible for the target readers and that all the required elements for its understanding are provided (i.e. acronyms, terms, conventions used, etc.), including an overview of the system context in which the model elements (if produced) appear, and that the purpose of each model element is described. In case models are produced by the SW suppliers, ensure that no model component is over-complex due to an inadequate modelling language; such a component should rather be replaced with a simple unit written in a foreign language, preferably from a library of reusable components. - IVE.DA.T2.S9: Verify the timing and sizing budgets of the software (by Inspection or schedulability analysis (including WCET)) Ensure that the software detailed design implements a proper allocation of timing and sizing budgets (e.g. memory usage, CPU utilization, etc.) by reviewing the analysis performed by the software developer. For real-time software verify the developer's schedulability analysis. - IVE.DA.T2.S10: Verify the accuracy of the model (in case models are produced by the SW suppliers) (by Inspection) Ensure that information necessary to evaluate computational precision, units (kilograms, meters, seconds, etc.), and error reporting is present at the model level. Ensure that the model implements the required computational precision (e.g. rounding vs. truncation, single vs. double precision, etc.). Note that the verification of required computational precision by analysis at the model level can only be indicative unless the code generator and the compiler have been qualified and can be trusted. It is therefore recommended to consider the verification at model level to be indicative and valuable to prevent gross errors at an early stage. The ISVV supplier can perform an independent and complementary verification of the computational precision of the model by simulation. Due to the difference between the simulation platform and the target platform, this verification by simulation can only be indicative and should be used to prevent gross errors in the model at an early stage. Ensure that the numerical types (int, float, double, etc.) used in the model are portable to the target platform. Ensure that the granularity of the reported error information is sufficient to trigger the necessary corrective actions. 
Ensure that the parameter values and computations conform to the required units (kilograms, meters, seconds, etc.). - IVE.DA.T2.S11: Identify test areas and test cases for independent Validation (from other IVE.DA.T2 tasks) Identify areas and items that cannot be sufficiently analysed by means of Independent Verification only and that therefore require execution of validation tests. Annotate this information (test areas/items, test cases, etc.) as a contribution to the Independent Validation activities. NOTE: This subtask shall receive, refine and update the contribution to Independent Validation from the Architectural Design Verification (IVE.DA.T1.S9). - IVE.DA.T2.S12: Verify detailed design conformance with applicable standards (by Inspection) Ensure that the detailed design (and models if produced) is compliant with applicable standards, references, regulations, policies, physical laws, and business rules (including naming standards and typing rules, e.g. strong typing). When modelling is used by the SW suppliers, ensure conformance to modelling standards (see questionnaire in Annex G.8).
Outputs:
- Software Detailed Design Verification Report
- Contribution to Independent Validation (updated)
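The single- vs. double-precision concern raised in IVE.DA.T2.S10 is easy to demonstrate by simulation: repeatedly accumulating a value that is not exactly representable in binary drifts measurably faster in 32-bit arithmetic. The sketch below uses the standard library only; the step size and iteration count are arbitrary illustrations, not values from any real model.

```python
# Illustration of single- vs. double-precision accumulation drift
# (relevant to the computational precision checks of IVE.DA.T2.S10).
import struct

def f32(x):
    """Round a Python double to the nearest IEEE-754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 (not exactly representable in binary) 100,000 times.
step, n = 0.1, 100_000
acc32 = acc64 = 0.0
for _ in range(n):
    acc32 = f32(acc32 + f32(step))  # every operation rounded to single precision
    acc64 = acc64 + step            # ordinary double-precision arithmetic

print(f"float32 accumulated error: {abs(acc32 - 10000.0):.4f}")
print(f"float64 accumulated error: {abs(acc64 - 10000.0):.2e}")
# The float32 error is many orders of magnitude larger than the float64 error.
```

Such a back-of-the-envelope simulation is indicative only, as the section notes, but it is exactly the kind of cheap early check that can prevent gross precision errors before code generation.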
6.5.3 Software User Manual Verification
Although Software User Manual Analysis is described as a single verification task, it can be executed in two iterations. The first iteration will start at DDR (see footnote 18) and end at DAR. The second iteration will start at CDR and end at CAR. The ISVV supplier shall verify the SUM as being the user/operator of the software product. The verification focus shall be on the FDIR mechanisms, stressing the use of the software with the system view, etc.
TASK DESCRIPTION
Title: Software User Manual Verification
Task ID: IVE.DA.T3
Activity: IVE.DA - Design Analysis
Start event: DDR - Detailed Design Review
End event: DAR - Design Analysis Review
Responsible: ISVV Supplier
Objectives: Ensure the User Manual readability, completeness and correctness
Inputs:
- From ISVV Customer:
- Software User Manual [DDF; DDR]
- Software Technical Specification [TS; DDR]
- Software Architectural Design [DDF; DDR]
- Software Detailed Design [DDF; DDR]
- Software Item (application) [DDF; CDR]
Sub Tasks (per ISVV Level):
- ISVV Level 1 and 2:
- IVE.DA.T3.S1: Verify the timing and sizing budgets of the software (by Inspection or by independent validation tests)
Ensure that the SUM contains information about the timing and sizing aspects of the software.
- IVE.DA.T3.S2: Verify that the dependability & safety aspects of the product are specified in the SUM (by Inspection or by independent validation tests)
Ensure that the SUM contains correct and complete information on how the software's contribution to system hazardous events is documented. This may be done by analysing software failure modes and their possible propagation to system level (e.g. robustness-related mechanisms are properly defined). Ensure that the SUM contains correct and complete information about the features for Fault Detection Isolation And Recovery (FDIR), in accordance with the technical specification, and about how they deal with the faults that they are supposed to deal with.
Ensure that the SUM contains correct and complete information about the handling of hardware faults and that the implemented software logic is not harming the hardware in any way.
- IVE.DA.T3.S3: Verify the readability of the User Manual (by Inspection)
Ensure that the user manual has a clear and consistent structure. Ensure that the user manual is intelligible for the target software users and that all the elements required for its understanding are provided (i.e. acronyms, terms, conventions used, etc.).
- IVE.DA.T3.S4: Verify the completeness of the User Manual (by Inspection)
Ensure that the User Manual describes all the functionalities implemented by the software. Check that all the information necessary for performing the required operations is provided.
- IVE.DA.T3.S5: Verify the correctness of the User Manual (by Inspection)
Ensure that the information provided in the User Manual is consistent with the software implementation, i.e. the software behaves as described.
Outputs:
- Software User Manual Verification Report
Footnote 18: Please note that one of the prerequisites for the verification tasks is that the documentation should be mature. Usually this is not yet the case for the Software User Manual at DDR.
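The completeness check of IVE.DA.T3.S4 can be sketched as a set comparison between the functionalities stated in the technical specification and those described in the SUM. This is an illustration only; the function names below are hypothetical, and a real check would use the project's actual requirement and SUM content.

```python
# Illustrative sketch only: cross-check SUM completeness against the
# technical specification. All functionality names are hypothetical.
spec_functions = {"TM acquisition", "TC execution", "FDIR reconfiguration", "Memory patch/dump"}
sum_functions = {"TM acquisition", "TC execution", "Memory patch/dump"}

# Functionalities implemented but not described in the SUM
# (completeness findings):
undocumented = sorted(spec_functions - sum_functions)

# Functionalities described in the SUM but unknown to the specification
# (candidate correctness findings):
unknown = sorted(sum_functions - spec_functions)
```

In practice such a comparison is performed by inspection; the sketch merely shows the two directions of the check (specification to SUM, and SUM back to specification).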
7.0 Code Analysis
7.1 Activity Overview
Code Analysis is one of the verification activities of the ISVV process (see Figure 16 below). Code Analysis is in general performed after the Design Analysis, and after the Criticality Analysis has been performed at the software unit level.
[Figure 16: Code Analysis in context. Elements: MAN. Management (MAN.PM ISVV Process Management; MAN.VV ISVV level definition); IVE. Independent Verification (IVE.TA Technical Specification Analysis; IVE.DA Design Analysis; IVE.CA Code Analysis); IVA. Independent Validation (IVA Validation).]
Code Analysis consists of the evaluation of the source code of each selected software product, focusing on aspects such as:
- reliability, availability and safety, ensuring that sufficient and effective fault detection, isolation and recovery mechanisms are included;
- error handling mechanisms;
- initialisation / termination of software components;
- interfaces between software components and between software and hardware components;
- threads / processes synchronisation and resource sharing;
- budget analysis, including schedulability analysis;
- detection of any programming error causing the software to behave in a way that violates any of the applicable specifications.
If auto-code is produced from models, this task also includes activities to verify this code and its integration with any other code that is part of the same SW product.
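For the budget analysis aspect above, a minimal sketch of a rate-monotonic schedulability check using the Liu and Layland utilization bound is given below. This is an illustration only: the task WCETs and periods are hypothetical, an actual analysis would use the developer's computed WCETs, and a utilization above the bound would still call for an exact response-time analysis.

```python
# Illustrative sketch only: rate-monotonic schedulability check based on
# the Liu & Layland utilization bound. WCETs and periods are hypothetical.

def rm_utilization_bound(n: int) -> float:
    # Liu & Layland bound for n tasks: n * (2^(1/n) - 1).
    return n * (2 ** (1.0 / n) - 1)

# (WCET in ms, period in ms) for each periodic task
tasks = [(2.0, 10.0), (4.0, 40.0), (10.0, 100.0)]

utilization = sum(wcet / period for wcet, period in tasks)
bound = rm_utilization_bound(len(tasks))

# At or below the bound the task set is schedulable under rate-monotonic
# scheduling; above it, schedulability is not excluded but must be shown
# by response-time analysis.
schedulable = utilization <= bound
```

The sufficient-but-not-necessary nature of the bound is the reason the guide asks the ISVV supplier to review the developer's full schedulability analysis rather than a utilization figure alone.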
[Figure 17: Code Analysis. Inputs: Software Requirements Specification; Software Architectural Design; Software Detailed Design; Interface Control Documents; Source Code; Software Safety/dependability analysis report; Schedulability analysis/technical budgets; ISVV level definition; Technical Specification Verification report; SW architectural design/detailed design verification report; SW Integration test Plan/Report; SW Unit test Plan/Report; Traceability matrices from ISVV; Design Justification file. Activities: Source Code Verification; Integration Test Specification and Test Data Verification; Unit Test Procedure and Test Data Verification. Outputs: Source Code Verification Report; Integration Test Specification and Test Data Verification Report; Unit Test Procedure and Test Data Verification Report; Contribution to IVA. Note that the figure shows only the most important inputs and outputs.]
7.1.1 Source Code Verification
The software source code is the ultimate product of a software project. Within the software development process it is the expression of the software design. Source code independent verification is one of the most demanding undertakings of an ISVV project. The software source code independent verification encompasses the following tasks: First, one shall verify the source code external consistency by inspecting the traceability matrices against the different elements it derives from, i.e. the Detailed Design, the
Issue 2 Revision 0 Page 46 Architectural Design, the Technical Specification and the applicable Interface Control Documents. Note that traceability matrices inspection to source code level should only be done if detailed design does not specify to the lowest level. Second, one shall verify the source code itself in order to check whether it presents consistency, correctness and accuracy. It shall also be verified whether dependability and safety issues have been correctly addressed, whether the source code is readable and maintainable and whether it can effectively be tested. In the case of real-time software one shall also verity the timing and sizing budgets. Figure 18 below illustrates the verification tasks to be performed as part of the source code verification. Code Analysis Technical Specification (DDR) SW Architectural Design (CDR) SW Units Source Code (CDR) Detailed Design Document (CDR) Interfaces Control Doc (CDR) Verify interfaces consistency between different SW units Verify source code correctness with respect to technical specification, architectural design and detailed design by verifying the traceability matrices respectively Verify the source code readability, maintainability and conformance with the applicable standards Verify the dependability & safety of the source code Verify the accuracy of the source code Verify that the source code is testable Verify the timing and sizing budgets of the software Figure 18 Software Source Code Independent Verification 7.1.2 Integration and Unit Test Specification and Data Verification The integration/unit test plan includes the integration/unit test specification and test data. It is necessary to verify the consistency of the test procedures and test data against the design (architectural and detailed respectively), the technical specification and the ICDs. Furthermore it is necessary to verify their correctness, completeness and feasibility. 
These tasks are not intended to re-execute the unit or integration tests.
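The consistency, correctness and completeness checks on test procedures and test data can be sketched as a simple screening over a test specification: every test case should trace to the specification or design and should define a clear acceptance criterion. This is an illustration only; the test-case records below are hypothetical.

```python
# Illustrative sketch only: screen an integration/unit test specification
# for missing traceability and missing acceptance criteria.
# All identifiers and criteria are hypothetical.
test_cases = [
    {"id": "ITC-01", "traces_to": ["SRS-001"], "acceptance": "telemetry frame matches ICD layout"},
    {"id": "ITC-02", "traces_to": [], "acceptance": "no task overruns observed"},
    {"id": "ITC-03", "traces_to": ["SRS-002"], "acceptance": ""},
]

findings = []
for tc in test_cases:
    if not tc["traces_to"]:
        findings.append((tc["id"], "no traceability to specification/design"))
    if not tc["acceptance"].strip():
        findings.append((tc["id"], "no clear acceptance criterion"))
# 'findings' would feed the verification report as candidate issues.
```

Such screening supports, but does not replace, the inspection-based verification described in the task descriptions below.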
[Figure 19: Integration/Unit Test Specification and Test Data Verification. Inputs: Technical Specification (DDR); SW Architectural/Detailed Design (DDR); Interfaces Control Doc (PDR, DDR); Integration/Unit Test Plan (DDR). Tasks: verify consistency with the Technical Specification; verify consistency with the Software Architectural Design; verify integration test procedures correctness and completeness; verify integration test procedures feasibility.]
7.2 Activity Inputs and Prerequisites
The input work products are listed in Figure 17 defining the Code Analysis task. The prerequisite for starting the code analysis activity is the availability of the listed inputs. Moreover, the listed inputs shall present a satisfactory maturity level.
7.3 Activity Outputs
The output work products are listed in Figure 17 defining the Code Analysis task. Verification reports include at least an overall analysis of the work products analysed, findings, a list of open issues to probe further in subsequent analyses, suggested modifications (if any), and inputs for the independent validation test cases specification. Traceability matrices might be provided as annexes of verification reports or as separate documents.
7.4 Activity Management
7.4.1 Initiating and Terminating Events
The Code Analysis activities may be initiated as soon as mature Source Code is available. In general this coincides with the CDR. Although several iterations of the Code Analysis activity may be performed, thus potentially extending it until the end of the development project, the Code Analysis activity ends with the Code Analysis Review (CAR) (as defined in section 2.2 above), which in general takes place before the QR.
7.4.2 Completion Criteria
Code Analysis becomes complete after the Source Code, Unit and Integration Test Specification and Test Data have been verified in accordance with tasks IVE.CA.T1, IVE.CA.T2 and IVE.CA.T3 (refer to section 7.5).
7.4.3 Relations to other Activities
This section identifies the relations between this activity and the remaining ISVV activities. The tailoring of the Code Analysis activity is performed as part of the Criticality Analysis activity. Criticality Analysis may also provide useful inputs to the Code Analysis activity, namely to the subtask "Verify the dependability & safety of the source code" (refer to section 7.5). Strong relations exist between the Technical Specification Analysis, the Software Design Analysis and the Code Analysis. The outputs of the Technical Specification Analysis and Design Analysis are applicable inputs to the Code Analysis. In addition, Technical Specification Analysis and Design Analysis may raise issues to be closed during Code Analysis. Code Analysis is also likely to provide inputs to the independent validation test cases specification.
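Much of the Code Analysis activity rests on the developer's traceability matrices. A minimal sketch of the forward/backward completeness checks reviewed in IVE.CA.T1.S1 (section 7.5.1) is given below; all requirement and unit identifiers are hypothetical.

```python
# Illustrative sketch only: forward/backward traceability completeness
# check between software requirements and source-code units, as reviewed
# in IVE.CA.T1.S1. All identifiers are hypothetical.

# Developer's traceability matrix: requirement -> implementing units
trace = {
    "SRS-001": ["uart_driver.c"],
    "SRS-002": ["fdir_monitor.c", "fdir_actions.c"],
    "SRS-003": [],  # requirement with no implementing unit
}
all_units = {"uart_driver.c", "fdir_monitor.c", "fdir_actions.c", "debug_hooks.c"}

# Forward traceability: every requirement traces to at least one unit.
untraced_reqs = sorted(req for req, units in trace.items() if not units)

# Backward traceability: every unit has at least one allocated requirement.
traced_units = {u for units in trace.values() for u in units}
unallocated_units = sorted(all_units - traced_units)
```

The completeness check is only the mechanical part; verifying that each traced unit actually implements the allocated functionality, and nothing more, remains an inspection task.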
7.5 Task Descriptions
7.5.1 Source Code Verification
TASK DESCRIPTION
Title: Source Code Verification
Task ID: IVE.CA.T1
Activity: IVE.CA - Code Analysis
Start event: CDR - Critical Design Review
End event: CAR - Code Analysis Review
Responsible: ISVV Supplier
Objectives: Evaluate the software source code for internal consistency, correctness, completeness, accuracy, testability, and readability.
Inputs:
- From ISVV Customer:
- Software Requirements Specification [TS; DDR]
- Interface Control Documents [DDF; CDR]
- Software Architectural Design including software models if produced [DDF; CDR]
- Software Detailed Design including software models if produced [DDF; CDR]
- Software dependability and safety analysis reports [DJF; CDR]
- Schedulability Analysis (including WCET) [DJF; CDR]
- Technical Budgets [DJF; CDR]
- From ISVV Supplier:
- Technical Specification Verification Report [IVE.TA.T1]
- Software Architectural Design Verification Report [IVE.DA.T1]
- Software Detailed Design Verification Report [IVE.DA.T2]
- ISVV level definition (from MAN.VV.T3)
- Contribution to Independent Validation (from Design Analysis IVE.DA.T2)
Implementation:
- ISVV Level 1 and 2:
- IVE.CA.T1.S1: Verify source code external consistency with Technical Specification (by Inspection - reviewing the traceability matrices produced by the software developer):
Ensure that all software item requirements are traceable to a software unit (source code) and that the functionality described in the requirement is implemented by the source code unit (forward traceability). Ensure that all software units (source code) have allocated requirements and that each software unit (source code) does not implement more functionalities than those described in the requirements allocated to it (backward traceability). For each requirement traced to more than one software unit (source code), ensure that the implementation of functionalities is not repeated.
Ensure that the relationship between the software units (source code elements) and the software requirements is specified in a uniform manner (in terms of level of detail and format).
- IVE.CA.T1.S2: Verify source code external consistency with Interface Control Documents (by Inspection - reviewing the traceability matrices produced by the software developer):
Ensure that the implementation of the interfaces (with other software units, hardware, the user, etc.) is consistent with the applicable Interface Control Documents.
- IVE.CA.T1.S3: Verify source code external consistency with Architectural Design and Detailed Design (by Inspection - reviewing the traceability matrices produced by the software developer):
Ensure that the static architecture (e.g. software decomposition into software elements such as packages, and classes or modules) and the dynamic architecture (e.g. specification of the software active objects such as threads / tasks and processes) are implemented according to the design. Ensure that the software units (source code) correctly implement the internal interfaces described in the software
architectural design.
- IVE.CA.T1.S4: Verify interfaces consistency between different SW units (by Inspection and/or Reverse Engineering 20)
Ensure that software component internal interfaces (e.g. interfaces between software units) are consistent. Consider both data and control flows. Include verification of data format, accuracy, and timing/performance. Ensure that all inputs of one software unit are produced by some other unit and that all outputs of a unit are consumed by some other unit. Ensure that interfaces are designed in a uniform way. Ensure that each interface provides all the required information from the underlying component.
If auto-code is generated from models, then also:
- Ensure that the interfaces between the software units generated automatically from the functional software models and the manually written code are consistent. Consider data-flow (input/output vectors). Consider control-flow (model initialisation and compute function integration patterns). Include verification of data format and accuracy (e.g. integer vs. float, 32 bit vs. 64 bit, signed vs. unsigned, scalar vs. composite, alignment; in other words, all components, whether manual or auto-coded, should use a common types.h). Include verification of variable initialisation, input parameter initialisation, absence of unused variables, and absence of unused output parameters.
- Ensure that wrapper software (or other software) does not use or modify variables that are supposed to be internal to the auto-code.
- Ensure that the interfaces between automatically generated code and imported foreign code are consistent. Consider data-flow (input/output vectors). Include verification of data format and accuracy (e.g. integer vs. float, 32 bit vs. 64 bit, signed vs. unsigned, scalar vs. composite, alignment).
Include verification of variable initialisation, input parameter initialisation, absence of unused variables, and absence of unused output parameters.
- IVE.CA.T1.S5: Verify source code correctness with respect to technical specification, architectural design and detailed design (by Inspection and/or Reverse Engineering 20, Bug pattern identification)
Ensure that the static architecture (e.g. software decomposition into software elements such as packages, and classes or modules) and the dynamic architecture (e.g. specification of the software active objects such as threads / tasks and processes) adequately implement the software design. Ensure that the source code complexity and modularity are in accordance with the quality requirements. Ensure that the software source code implements the proper sequence of events, inputs, outputs and interfaces logic flow. Ensure that correct use is made of the programming language, libraries, system calls, etc.
- IVE.CA.T1.S6: Verify the source code readability, maintainability and conformance with the applicable standards (by Inspection, software static analysis, metrics, coding standard conformance checking)
Ensure that the source code is written in a clear way and that it is properly documented. Ensure that all source code files adhere to the same coding style and that the applicable coding conventions, if any, are followed. Ensure that the applicable coding standards are followed (e.g. Ada RAVENSCAR, MISRA C, etc.). Note: the ISVV supplier should use a different tool from the one possibly used by the SW supplier to check for coding standards. Note: detailed RIDs shall be raised against coding issues where there is a problem. Ensure that a description is provided for every single subprogram. Ensure that the basic types used are defined in an implementation independent way; e.g. int could mean 32 bit or 64 bit integers depending on the target compiler.
Look for implementation independent definitions of types with specific characteristics (size, precision, etc.).
If auto-code is produced from models:
- Ensure that the applicable coding standards and quality metrics, if any, are followed and fulfilled by all imported code, including code from libraries of reusable model elements/reusable code (which may have been verified according to a different standard for previous uses).
- Ensure that the applicable coding standards, if any, are followed by the automatically generated code (e.g. Ada RAVENSCAR, MISRA C).
- Ensure that foreign code that is integrated with the automatically generated code is constructed according to any hypotheses, assumptions, and principles applicable to the corresponding modelling languages. NOTE: this subtask concerns foreign code that is imported into the auto-code at source level, not at model level. Importing code at model level is in the scope of IVE.DA.T1.S3 and/or IVE.DA.T2.S4 and IVE.DA.T2.S6.
Footnote 20: Reverse engineering will create Control Flow, Data Flow, etc. from the source code. The ISVV supplier should note that it is often difficult to compile the code at the ISVV site when the use of some automatic verification tools is required.
- IVE.CA.T1.S7: Verify the dependability & safety of the source code (by Inspection, Modelling, Simulations, SFMECA, Software Metrics Analysis, Bug pattern identification)
Ensure that the software source code minimises the number of critical software units without introducing undesirable software complexity (e.g. critical software units are not sharing resources with non-critical software units, thus increasing their criticality) and that the assigned criticality level is kept until the end of the development. Ensure that the code is defined to control the software's contribution to system hazardous events by analysing software failure modes and their possible propagation to system level (e.g. robustness-related mechanisms, such as exception handling or a subset of the coding standards, are properly defined and used). Ensure that any new critical parts of the software introduced by the design are identified and controlled. Ensure that the software source code implements proper features for Fault Detection Isolation And Recovery (FDIR) in accordance with the technical specification or the detailed design information. Ensure that the implemented FDIR mechanisms are independent of the faults that they are supposed to deal with. Ensure that the software correctly handles hardware faults and that the implemented software logic is not harming the hardware in any way. Ensure that defensive programming techniques are used. Ensure that the source code includes proper verification of inputs and consistency checking. Ensure that all relevant events are reported by the software using the appropriate channels. Ensure that adequate and proper implementation of numerical protection is in place at the source level: no division by zero, no logarithm of zero, no tan(PI/2 + n*PI), etc. Ensure that restricted code constructs (e.g. known code generator or compiler limitations or defects) have been identified. If there are any, ensure that they are avoided in the source code (via configuration of the code generator, and/or proven absence via static analyser tools, or otherwise). Ensure that the source code does not include any hazardous programming language constructs or library functions (non-deterministic, dynamic, non-portable, implicit/explicit recursion, etc.) and that no inefficient programming language constructs or library function calls are present in the source code. Ensure that no dead or deactivated code exists.
If deactivated code exists, ensure that its activation will not lead to a hazardous condition. For concurrent systems, ensure that no deadlocks or race conditions exist.
- IVE.CA.T1.S8: Verify the accuracy of the source code (by Inspection or Numeric analysis)
Ensure that the source code implements the required computational precision (e.g. rounding vs. truncation, single vs. double precision, etc.). Ensure that the granularity of the reported error information is sufficient to trigger the necessary corrective actions. Ensure that the parameter values and the computations made are conformant with the required units (e.g. meters, inches, volts, etc.).
- IVE.CA.T1.S9: Identify test areas and test cases for independent Validation (from other IVE.CA.T1 tasks)
Identify areas and items that cannot be sufficiently analysed by means of Independent Verification alone and that therefore require the execution of validation tests. Annotate this information (test areas/items, test cases, etc.) as a contribution to the Independent Validation activities. NOTE: This subtask shall receive, refine and update the contribution to Independent Validation from the Design Analysis (IVE.DA.T1.S9, IVE.DA.T2.S11).
- IVE.CA.T1.S10: Verify the timing and sizing budgets of the software (by Inspection or Schedulability analysis (including WCET))
Verify the developer's schedulability analysis of the implemented application (it should be based on computed WCETs). Verify the sizing budgets of the software (e.g. executable image size, stack size, buffers, etc.) with respect to the design and requirements.
Outputs:
- Software Source Code Verification Report
- Contribution to Independent Validation (updated)
7.5.2 Integration Test Specification and Test Data Verification
TASK DESCRIPTION
Title: Integration Test Specification and Test Data Verification
Task ID: IVE.CA.T2
Activity: IVE.CA - Code Analysis
Start event: CDR - Critical Design Review
End event: QR
Responsible: ISVV Supplier
Objectives: Evaluate the Integration Test Specification and Data for consistency with the technical specification and architectural design, and for correctness and completeness.
Inputs:
- From ISVV Customer:
- Software Requirements Specification [TS; DDR]
- Interface Control Documents [ICD; CDR]
- Software Architectural Design including software models if produced [DDF; CDR]
- Integration Test Plan including test procedures and data [DJF; CDR]
- Integration test report (including model test report) [DJF; CDR]
- Traceability of Architectural Design (including the software models if produced) to Integration Tests [DJF; CDR]
- Design Justification file, including:
- The choice of model type(s) and modelling language and tool suite(s)
- The modelling environment version, configuration, and options
- The choice of code generator and compiler and their options intended to be used
- Modelling language reference manuals
- From ISVV Supplier:
- Traceability Between TS and SW Architectural Design [IVE.DA.T1]
- Traceability Between ICD and SW Architectural Design [IVE.DA.T1]
Implementation:
- ISVV Level 2 only:
- IVE.CA.T2.S1: Verify consistency with Technical Specification (by Inspection - reviewing the traceability matrices produced by the software developer)
Ensure that the test specification and data are traceable to the software requirements.
- IVE.CA.T2.S2: Verify consistency with Software Architectural Design (by Inspection - reviewing the traceability matrices produced by the software developer)
Ensure that the test specification and data are traceable to the software architectural design components (and also to both model components and model elements).
- IVE.CA.T2.S3: Verify integration test specification correctness and completeness (by Inspection)
Ensure that the integration test plan is in accordance with the defined test strategy, namely with respect to the types of tests to be performed (e.g. functional, boundary, performance, usability, etc.) and the test coverage goals (such as call graph and parameter passing or, for example, model test coverage goals if models are produced by the SW suppliers).
Verify that the integration test specification and data are in accordance with the integration test plan, namely with respect to: the types of tests to be performed (e.g. functional, robustness, performance, usability, etc.); and the test coverage goals (such as call graph and parameter passing). Verify that there is a clear acceptance criterion for every single test case.
If models are produced by the SW suppliers, then also:
- Ensure that properties of the model which are related to safety and dependability are put on the list to be validated by independent testing at the final code level. Note that a property might hold in theory, yet fail in simulation and in operation due to the limited precision of computer arithmetic operations.
- Ensure that schedulability is demonstrated by the developer by simulation.
- Ensure that the required functionality and behaviour of the model (algorithmic computations and state transitions) are put to demonstration and validation by simulations.
- Ensure that all required properties of the model are identified and demonstrated via proof and/or put to test by simulation.
- Ensure that the properties, functionality and behavioural aspects that for some reason cannot be verified at model level are put on the list to be verified at the code level.
- IVE.CA.T2.S4: If models are produced by the SW suppliers, then evaluate the model verification and validation test results (by Inspection)
Ensure that results are reported for all verification and validation tests performed at the model level. Ensure that the qualitative assessment of simulation runs contains an assessment of the results (passed/failed). Verify that the conclusions drawn from the test results by the developers are correct. Ensure that the test results are correctly interpreted and give a correct and complete picture of the model aspects under test. Ensure that model coverage conforms to the applicable standards.
As an example, ensure that the desired level of test coverage has been achieved at the model level. Pay special attention to model constructs that do not have a unique code implementation and ensure that in such cases the coverage requirements guarantee full coverage in all of the likely implementations. - IVE.CA.T2.S5: Verify integration test reports Ensure that test results are evaluated against reference data when these are available, focusing on problem areas
such as lack of coverage achievement. Note that a comparison with reference data is not always needed (e.g. in the case of robustness testing, where only an anomaly has to be flagged).
Outputs:
- Integration Test Specification and Test Data Verification Report
- If models are produced by the SW suppliers: Independent model validation Plan (including procedures and data)
- If models are produced by the SW suppliers: Independent model validation Report
7.5.3 Unit Test Procedures and Test Data Verification
TASK DESCRIPTION
Title: Unit Test Procedures and Test Data Verification
Task ID: IVE.CA.T3
Activity: IVE.CA - Code Analysis
Start event: CDR - Critical Design Review
End event: QR
Responsible: ISVV Supplier
Objectives: Evaluate the Unit Test Procedures and Data for consistency with the software architectural design and software detailed design, and for correctness and completeness.
Inputs:
- From ISVV Customer:
- Software Requirements Specification [TS; DDR]
- Interface Control Documents [ICD; CDR]
- Software Architectural Design including software models if produced [DDF; CDR]
- Software Detailed Design including software models if produced [DDF; CDR]
- Software Unit Test Plan including test procedures and data [DJF; CDR]
- Unit test report (including model test report) [DJF; CDR]
- Traceability of Detailed Design (including the software models if produced) to Unit Tests [DJF; CDR]
- Design Justification file, including:
- The choice of model type(s) and modelling language and tool suite(s)
- The modelling environment version, configuration, and options
- The choice of code generator and compiler and their options intended to be used
- Modelling language reference manuals
- From ISVV Supplier:
- Traceability Between TS and SW Architectural Design [IVE.DA.T1]
- Traceability Between ICD and SW Architectural Design [IVE.DA.T1]
- Traceability Between TS and SW Detailed Design [IVE.DA.T2]
- Traceability Between ICD and SW Detailed Design [IVE.DA.T2]
- Traceability Between SW Architectural Design and SW Detailed Design [IVE.DA.T2]
Implementation:
- ISVV Level 2 Only:
- IVE.CA.T3.S1: Verify consistency with Software Detailed Design (by Inspection - reviewing the traceability matrices produced by the software developer)
Ensure that the unit test procedures and data are traceable to the detailed design elements (including model elements, if produced).
- IVE.CA.T3.S2: Verify unit test procedures correctness and completeness (by Inspection)
Verify that the unit test procedures and data are in accordance with the unit test plan, namely with respect to: the types of tests to be performed (e.g. functional, robustness, performance, usability, etc.); and the test coverage goals (such as statement, decision and branch condition coverage). Verify that there is a clear acceptance criterion for every single test case. If auto-code generation or models are used, then also ensure that the test cases used in simulation are reused for unit testing the real code.
- IVE.CA.T3.S3: Verify unit test reports
Ensure that test results are evaluated against reference data when these are available. Note that a comparison with reference data is not always needed (e.g. in the case of robustness testing, where only an anomaly has to be flagged).
If auto-code generation is applied, then also:
- Ensure that the verification and validation testing done at the model level is repeated at the code level, preferably using the same test inputs.
- Ensure that the properties of the model which are related to safety and dependability are identified and challenged with attempts of falsification via destructive testing at the source code level.
- Ensure that it is demonstrated by testing at the source code level that the numerical accuracy of the computation satisfies the applicable requirements.
- Ensure that properties and functionality that could not be verified at model level (for whatever reason) are verified by testing at the code level.
- Ensure that simulation runs done at model level are repeated at the code level, and that (a) a qualitative assessment of the results is performed to ensure that any differences are justified, and (b) if different input data are used or different output data are produced compared to the simulation, the differences are justified.
- Ensure that the test setup is capable of detecting cycle overruns, unexpected timeouts, and unexpected messages.
- Ensure that no unexpected events (e.g. cycle overruns, timeouts) occurred during testing, or alternatively, that the events are completely analysed and justified.
- Ensure that test results are evaluated against reference data when these are available. Note that a comparison with reference data is not always needed (e.g. in the case of robustness testing, where only an anomaly has to be flagged).
- Ensure that if code coverage is obtained with different input data than used to obtain model coverage, the differences are justified.
Outputs: - Unit Test Procedures and Test Data Independent Verification Report
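The reference-data checks of IVE.CA.T3.S3, and in particular the requirement that the numerical accuracy of code-level results satisfies the applicable requirements, can be illustrated by a small comparison routine. This is an illustrative Python sketch only, not part of the guide; the function name and the default tolerances are assumptions that a real project would replace with its own accuracy requirements.

```python
import math

def matches_reference(code_outputs, model_reference, rel_tol=1e-6, abs_tol=1e-9):
    """True when every code-level output agrees with its model-level
    reference value within the stated numerical tolerance.
    Tolerances are placeholders for project accuracy requirements."""
    if len(code_outputs) != len(model_reference):
        return False
    return all(math.isclose(c, r, rel_tol=rel_tol, abs_tol=abs_tol)
               for c, r in zip(code_outputs, model_reference))
```

A mismatch found this way still requires engineering judgement: per the guide, differences against reference data must be analysed and justified, not merely flagged.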
8.0 Independent Validation 8.1 Activity Overview Independent validation can in general be performed after the independent verification activities.
Figure 20 Independent Validation in context (diagram placing the IVA Validation activity alongside MAN.PM ISVV Process Management, MAN.VV ISVV level definition, and the IVE activities: Technical Specification Analysis, Design Analysis and Code Analysis)
However, sub-activities can be executed in parallel with the independent verification (if this is carried out), and can start at the earliest at CDR, as shown in Figure 2 and Figure 3, which define the ISVV tasks versus the SW supplier's ones. This is described in the following subsections. Independent validation consists of three tasks, as shown in Figure 21.
Figure 21 Independent Validation (diagram showing the tasks Identification of Test Cases, Construction of Test Procedures and Execution of Test Procedures, with inputs Requirements Baseline, system safety and dependability analyses, Technical Specification (including ICDs), Software Architectural Design, Development Test Report, input from IVE, Software User Manual and Software Validation Facility, and outputs Test Plan, Test Procedure and Test Report) 21
21 Note that the figure shows only the most important inputs and outputs.
Notice that the Construction of Test Procedures subtask can start when the SVF is delivered. The Execution of Test Procedures requires the object code of the software under test, and can be started after SW-CDR. At SW-CDR the first version of the software is delivered and it is expected that the software has been through development validation. 8.1.1 Identification of Test Cases The purpose of this task is to identify areas to be subject to independent validation. The task relies on input from the ISVV Customer and the ISVV Supplier. The identification of test cases is an iterative process, where new test cases might be identified during the establishment and execution of previously identified test cases. The Identification of Test Cases task is divided into the subtasks shown in Figure 22 below; input to the task is also illustrated. The needed documents are highlighted in the figure. This first task in the IVA activity will take 20%-40% of the total effort.
Figure 22 Subtasks to "Identification of Test Cases" (diagram showing the subtasks Evaluate Task Input, Perform Analysis and Write Independent Validation Test Plan, with inputs Requirements Baseline, system safety and dependability analyses, Technical Specification (including ICDs), Software Architectural Design, Development Test Report, input from IVE and Software User Manual, and the Test Plan as output)
The input to this task originates from the ISVV customer and from the ISVV supplier. 8.1.1.1 Evaluate Task Input It is a prerequisite that the ISVV supplier has a basic knowledge about the software. This is best achieved by performing the preceding IVE activities. An evaluation of the test cases and test reports delivered by the software supplier might also be useful at this stage of the IVA activity22 to ensure complementarity of the ISVV IVA tests. A software user manual (if existing) might provide a valuable overview of the system.
22 If the ISVV supplier uses the test cases and test reports from the development validation to identify missing test cases, the ISVV supplier must be careful not to adopt the developer's way of thinking.
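The complementarity evaluation of 8.1.1.1 can be supported by a mechanical cross-check of which critical functions the developer's test cases already exercise. The following Python sketch is illustrative only; the identifiers and the coverage-mapping format are invented for the example.

```python
def uncovered_critical_functions(critical_functions, developer_tests):
    """Critical system functions that no developer test case exercises;
    these are natural candidates for independent validation test cases.
    developer_tests maps a test case ID to the functions it covers."""
    covered = {fn for fns in developer_tests.values() for fn in fns}
    return sorted(set(critical_functions) - covered)

# Hypothetical identifiers: critical function list vs. developer coverage
critical = ["F-AOCS-01", "F-FDIR-02", "F-TM-03"]
dev_tests = {"TC-101": ["F-AOCS-01"], "TC-102": ["F-TM-03"]}
gaps = uncovered_critical_functions(critical, dev_tests)  # -> ["F-FDIR-02"]
```

Such a check only finds omissions; it does not replace the independent judgement warned about in the footnote above.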
8.1.1.2 Perform Analysis The identification of test cases takes into account that the validation performed by the software supplier has demonstrated that the user and software requirements have been satisfied. The analysis will reuse as much as possible from the preceding IVE activity. If none or only part of the IVE activities have been performed, it can be necessary to include elements of the verification analysis in preparation of the IVA activity. Table 3 shows which analyses are performed by the IVA activity, depending on the existence of independent verification results and the ISVV level.
Table 3: Dependency between ISVV level, input and analysis
- ISVV Level 1, IVE not performed: analysis based on checklists.
- ISVV Level 1, IVE performed: reuse IVE results.
- ISVV Level 2, IVE not performed: dedicated analysis performed.
- ISVV Level 2, IVE performed: additional analysis pointed out in the IVE analysis is performed.
A full independent validation (ISVV Level 2) must always rely on independent analysis, which shall preferably be performed within the IVA activity to ensure that the analysis focus is on the validation. The most important analyses for identification of test cases are: - Worst Case Analysis: investigate combinations of worst case situations, e.g. several inputs at the boundary at the same time. - Worst Case Load Analysis: if scheduling is a critical item, a worst case load analysis must be performed. The analysis looks into e.g. blocking time, execution time, and response time. Attempt to provoke runtime errors, race conditions, missed deadlines etc. - Requirement stretching: what happens if data outside the boundaries are given? (see IVE.TA.T1.S5 on requirement completeness verification) - FDIR analysis: inspect Fault Detection, Isolation and Recovery requirements. The independent validation shall be defined and performed to validate fitness for use with the user's hat on, performing stress tests and with an operational mindset.
- Stress tests for software robustness, stability and reliability: out-of-limit testing, for critical issues and components. - Interface testing: all variables at their extreme values, all variables from one domain at extreme values, interesting combinations of variable values. Tests should be constructed for the RB with the user's view. 8.1.1.3 Writing the Independent Validation Test Plan The Independent Validation Test Plan (IVTP) describes each identified test case, mostly without needing a programming language. The test plan contains: - The basic understanding of the software. - Detailed understanding of the software behaviour in areas where independent tests are foreseen.
- Description of the identified test cases. Each test case consists of the following sections: Test Overview; Purpose of the Test Case; How the Test Case was identified; Test Environment; Input Data; Output Data; Starting Conditions; Detailed Test Steps. Each step will be described in a table with the following columns: Step No. | Step Description | Expected Result | Comment | Pass/Fail (Table 4: Test case steps). At the end of the task, it is important that the ISVV customer reviews and accepts the test plan. This will make it possible for the ISVV customer and ISVV supplier to discuss the intended behaviour of the software and focus on essential areas. 8.1.2 Construction of Test Procedures Test procedures are the implementation of the test cases, i.e., test cases expressed in the test language provided by the software validation facility. The Construction of Test Procedures task is divided into three subtasks. Figure 23 shows the subtasks and the input to each of them.
Figure 23 Subtasks to "Construction of Test Procedures" (diagram showing the subtasks Achieve Knowledge about the SVF, Implement Test Procedure and Update Test Plan, with the IVA Test Plan and the Software Validation Facility as inputs and the Test Procedure and Updated Test Plan as outputs)
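The worst case combinations called for in section 8.1.1.2 (several inputs at the boundary at the same time) can be enumerated mechanically when identifying test cases. The following Python sketch is illustrative only; the variable names and domains are invented for the example.

```python
from itertools import product

def boundary_combinations(domains):
    """All combinations of input variables held at their extreme values,
    i.e. several inputs at the boundary at the same time (worst case).
    domains maps a variable name to its (min, max) pair."""
    names = list(domains)
    return [dict(zip(names, values))
            for values in product(*(domains[n] for n in names))]

# Hypothetical input domains for two interface variables
cases = boundary_combinations({"rate": (0.0, 250.0), "temp": (-40, 85)})
# 2 variables at 2 extremes each -> 4 combined worst-case candidates
```

The combinatorial growth (2^n for n variables) is exactly why the analysis step selects interesting combinations rather than testing all of them.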
8.1.2.1 Achieve Knowledge about the Software Validation Facility or Operational Test Platform A prerequisite for starting the SVF or operational test platform familiarisation is the SVF user guide or operational test platform user guide. Requirements for an SVF or operational test platform are described in Annex H. The review of the test environment shall be done thoroughly, since problems can be found there: - It can be difficult to test SW patched for HW corrections. - HW/SW interfaces often cause problems because of lacking or incomplete documentation from development. 8.1.2.2 Implementation of Test Cases into Test Procedures The test procedures shall be automated to the extent possible, i.e. the need for user interaction during execution shall be minimized. The test result analysis should also be done without user intervention. The intention is to allow for automatic regression testing using the IVA test procedures. The test procedures are part of the Independent Validation Test Report, along with the test results. 8.1.2.3 Updating the Independent Validation Test Plan When implementing the test cases into test procedures, new test cases might appear; these must be added to the independent validation test plan, which serves as documentation for the test execution. 8.1.3 Execution of Test Procedures The Execution of Test Procedures task is divided into three subtasks, as shown in Figure 24 below.
Figure 24 Subtasks to "Execution of Test Procedures" (diagram showing the subtasks Execute Test Procedures, Investigate Failed Tests and Produce Test Report, with the Test Procedures, Software User Manual and Software Validation Facility as inputs and the Test Report as output)
8.1.3.1 Execute the Test Procedures Information about how to perform the tests goes into the Test Report. It is important to notice that execution might also identify additional test cases, resulting in a repetition of Construction of Test Procedures.
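The automation goal of 8.1.2.2 — procedures that execute and evaluate their own results without user interaction, so they can be rerun for regression testing — can be sketched as follows. This is illustrative Python only; the step actions are hypothetical stand-ins for commands in an SVF test language.

```python
def run_procedure(steps):
    """Execute a scripted test procedure without user interaction and
    return a machine-readable log suitable for regression re-runs.
    Each step is (description, action, expected); the action callables
    stand in for SVF test-language commands (hypothetical)."""
    log = []
    for description, action, expected in steps:
        actual = action()
        log.append({"step": description,
                    "expected": expected,
                    "actual": actual,
                    "status": "PASS" if actual == expected else "FAIL"})
    return log

# Hypothetical two-step procedure: the second step deliberately fails
procedure = [
    ("send mode-change TC, expect ACK", lambda: "ACK", "ACK"),
    ("read housekeeping mode", lambda: "NOMINAL", "SAFE"),
]
log = run_procedure(procedure)
```

Because every step carries its own expected result and pass/fail status, the same script can be replayed unattended against a new software delivery.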
8.1.3.2 Investigation of Failed Tests Failed tests must be analyzed to ensure that the failure is due to problems in the software under test, and not to shortcomings of the software validation facility or errors in the test procedures. Investigating the failed tests might lead to additional or modified test cases. If new test cases are identified, they must be added to the IVTP, implemented as test procedures and then executed. 8.1.3.3 Produce Test Reports This task results in two test reports; the recommended contents are: Independent Validation Test Report - Description of how to execute the tests - Description of observations and problems - Test procedures (scripts) - Test results (log files) including pass/fail status Independent Validation Findings - Summarized findings during IVA - List of failed tests 8.2 Activity Input and Prerequisites The input work products are listed in Figure 22 defining the subtasks to "Identification of Test Cases", Figure 23 defining the subtasks to "Construction of Test Procedures", and Figure 24 defining the subtasks to "Execution of Test Procedures". The inputs to the individual tasks are discussed in section 8.1 and listed in section 8.5. As a prerequisite for starting the independent validation, and to ensure efficiency of the activity, it is important that the software under test is in a mature and healthy state: - The software under test has already been validated by the software supplier. Because the ISVV supplier is not expected to redo or replace the software supplier's validation activities, these must have been performed prior to the independent validation. - A suitable software validation facility or operational test platform is available. The SVF or operational test platform can either be constructed by the ISVV supplier or be delivered by another supplier.
- If possible, the independent verification analysis should have been performed in order to support the identification of the test cases. If this is not the case, corresponding activities must be performed as part of the test case identification. 8.3 Activity Outputs The output work products are listed in the above-mentioned Figure 22, Figure 23, and Figure 24. The Independent Validation Test Plan (IVTP) contains the ISVV supplier's basic knowledge of the software and the identified test cases. This test plan is an output of the activity, but also a document used during the IVA activity. The IVTP is reviewed by the ISVV customer before continuing with the Construction of Test Procedures task. The test procedures delivered can be used for regression testing of future versions of the software product, i.e., they might be added to the set of acceptance tests that the software customer will request to be executed as part of acceptance of a delivery.
The IVA activity will produce a test report holding all results of the test execution and the findings from it. Before the test report is produced, the ISVV supplier must investigate failed tests to ensure that the problem revealed is located in the software under test, and not in the software validation facility or in the test procedures. It is the responsibility of the ISVV customer to investigate the Independent Validation Test Report and decide whether failed tests should result in problem reports. In cases where ISVV cannot reasonably identify the cause of a problem in the items under its responsibility, the finding should be reported. The ISVV team should be careful not to spend its time and budget debugging the OBSW instead of identifying other possible problems in the OBSW. 8.4 Process Management 8.4.1 Initiating and Terminating Events The IVA activity can be initiated as soon as sufficient documentation is available. This means that the independent validation activity can start at CDR, or as soon as the corresponding information is available and mature. If independent verification is being performed, the IVA activity is recommended to start during the independent code analysis. The completion of the independent software validation activity does not have to be linked to the development process, but should take place while the software supplier is still available for maintenance of the software, and preferably close to the AR. 8.4.2 Completion Criteria The independent validation activity closes with the delivery and review of the test report. It can be an advantage to deliver the applied software validation facility to the operation phase along with the test procedures, the SVF User Guide and the independent validation test plan. This will enable execution of the independent validation test suite when updates to the software are to be investigated.
8.4.3 Relations to Other Activities The IVA activity depends on the results of the IVE activities. If important analyses do not exist and sufficient knowledge about the software under test has not been achieved, activities providing these must be executed prior to the formal IVA. 8.5 Task Descriptions 8.5.1 Identification of Test Cases TASK DESCRIPTION Title: Identification of Test Cases Task ID: IVA.T1 Activity: IVA - Independent Validation Start event: DAR - Design Analysis Review (during the independent code analysis) End event: IVTPR - Independent Validation Test Plan Review Responsible: ISVV Supplier Objectives: The purpose of this task is to identify areas to be subjected to independent validation. The task relies on the development documentation for the system, including requirements and design specifications, and on output from the IVE analysis. The identified test cases must be described in the test plan to be used when implementing the tests. Inputs: - From the ISVV Customer: - ISVV level definition at system level [MAN.VV.T1] - Requirements Baseline [RB; SRR] - Software Requirements Specification [TS; DDR] - Interface Control Documents [ICD; CDR]
- Software User Manual [DDF; CDR] - Safety and Dependability Analysis Reports (SFMECA and SFTA reports) [PAF; CDR] - Unit and Integration Test Cases & Test Reports [DJF; CDR] - Software Architectural Design, including software models if produced [DDF; PDR] - Software Detailed Design, including software models if produced [DDF; CDR] - From the ISVV Supplier (if available): - Technical Specification Verification Result (including ISVV findings) [IVE.TA.T2] - Architectural Design Verification Result (including ISVV findings) [IVE.DA.T2] - Detailed Design Verification Result (including ISVV findings) [IVE.DA.T4] - Code Verification Result (including ISVV findings) [IVE.CA.T2] - Critical System Functions List [MAN.VV.T1] - Critical Software Requirements List [MAN.VV.T2] - Critical Software Components List [MAN.VV.T3] - Critical Software Units List [MAN.VV.T4] - Test plan, test data, and test results from independent model validation (when applicable) Sub Tasks (per ISVV Level): - ISVV Level 1 and 2: - IVA.T1.S1: Evaluate Task Input (by Inspection): Evaluate the results from the IVE analysis (if performed). Evaluate the validation tests performed by the software supplier. Achieve basic knowledge about the software, either during the IVE activities or during this evaluation subtask. Achieve detailed knowledge about the subjects from the critical function list. - ISVV Level 1 only: - IVA.T1.S2: Perform Analysis: Use the checklist in Annex G.7. - ISVV Level 2 only: - IVA.T1.S2: Perform Analysis: Analyze the interaction with external software, with focus on degraded functionality of the external software products. Investigate worst case load scenarios, covering robustness of the software with respect to deviations from timing requirements, e.g., events happening too early or too late. Investigate worst case scenarios, including robustness of the software with respect to injections outside or at the boundary.
Investigate potential runtime errors, overflow/underflow, and dataflow conflicts. - ISVV Level 1 and 2: - IVA.T1.S3: Writing the Independent Validation Test Plan (by Inspection): Describe each identified test case in terms of: Test Rationale, Test Overview, Test Environment, Input Data, Output Data, Starting Conditions and detailed test step descriptions, including step description, expected results and pass/fail criteria. NOTE: In case models and autocode generation are used, a significant portion of the test cases can probably be reused directly from the validation tests performed on the models, see IVE.CA.T3. Outputs: - Independent Validation Test Plan 8.5.2 Construction of Test Procedures TASK DESCRIPTION Title: Construction of Test Procedures Task ID: IVA.T2
Activity: IVA - Independent Validation Start event: At delivery of the SVF User Guide End event: When all test cases are implemented and described in the Test Plan Responsible: ISVV Supplier Objectives: - The purpose of this task is to express the test cases in the test language provided by the software validation facility Inputs: - From the ISVV Customer: - Object code [DDF; CDR] - Source code (including autocode and external code) [DDF; CDR] - From the SVF Supplier or ISVV Customer: - Software Validation Facility (SVF) User Guide - Software Validation Facility or operational test platform - SVF Hardware (if any) - SVF Software - From the ISVV Supplier: - Independent Validation Test Plan [IVA.T1] Sub Tasks (per ISVV Level): - ISVV Level 1 and 2: - IVA.T2.S1: Achieve knowledge about the SVF (by Inspection or dry runs): Achieve knowledge about the SVF. Achieve knowledge about the test language used to implement the test cases, the method to store test cases for subsequent execution, and the method for capturing and saving test results. - IVA.T2.S2: Implement Test Cases into Test Procedures: Express the test cases in the test language provided by the software validation facility: relate test case parameters to software parameters; relate test case actions to software functions; set up conditions for test procedure failure/success; include test report generation commands; describe test data TM/TC in the Test Plan, if applicable. - IVA.T2.S3: Updating the Independent Validation Test Plan: Update the Independent Validation Test Plan with possible additional test cases. Outputs: - Independent Validation Test Procedures - Updated Independent Validation Test Plan 8.5.3 Execution of Test Procedures TASK DESCRIPTION Title: Execution of Test Procedures Task ID: IVA.T3 Activity: IVA - Independent Validation Start event: When IVA.T2 is performed.
End event: IVR - Independent Validation Review Responsible: ISVV Supplier Objectives: - The purpose of this task is to execute the test procedures and generate a test report Inputs: - From the ISVV Customer: - Object code [DDF; CDR] - Source code (including autocode and external code) [DDF; CDR]
- From the SVF Supplier or operational test platform supplier: - Software Validation Facility (SVF) User Guide - Software Validation Facility or operational test platform - SVF Hardware (if any) - SVF Software - From the ISVV Supplier: - Independent Validation Test Procedures [IVA.T2] - Independent Validation Test Plan [IVA.T1] Sub Tasks (per ISVV Level): - ISVV Level 1 and 2: - IVA.T3.S1: Execute the Test Procedures: Execute all implemented test procedures and generate a report. - IVA.T3.S2: Investigation of Failed Tests: Check that each failure is due to the software under test. - IVA.T3.S3: Produce Test Report: Describe all tests and observations. Attach all test procedures (scripts) and test logs to the report. Outputs: - Independent Validation Test Report (including the log and trace files of the test front-ends and the test nodes)
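The bookkeeping behind IVA.T3.S3 — a report carrying summarized findings plus the list of failed tests — can be sketched as follows. This is illustrative Python only; the log-entry format and test identifiers are assumptions, not a prescribed report format.

```python
def summarise_results(log):
    """Condense an execution log into the two report elements the guide
    recommends: a summary of the run and the list of failed tests."""
    failed = [entry["test"] for entry in log if entry["status"] == "FAIL"]
    return {"executed": len(log),
            "passed": len(log) - len(failed),
            "failed_tests": failed}

# Hypothetical log entries from a test execution run
log = [{"test": "IVA-TC-01", "status": "PASS"},
       {"test": "IVA-TC-02", "status": "FAIL"},
       {"test": "IVA-TC-03", "status": "PASS"}]
report = summarise_results(log)
```

Each entry in the failed-tests list would then be investigated per IVA.T3.S2 before being reported to the ISVV customer.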
Annex A. Definitions and acronyms A.1. Definitions The definitions presented herein are for the purpose of readability of this document. These definitions prevail when any discrepancy occurs with other standards' definitions.
activity: A defined body of work to be performed, including its required input and output information [IEEE 1074:1997].
critical item: Component, material, software, sub-assembly, function, process or technology which requires special project attention [ECSS-P-001B:2004]. NOTE: In this document, critical item is used as a common term denoting critical system function, critical software requirement, critical software component, or critical software unit.
critical software components list: List of critical software components as determined by the Design Analysis Criticality Analysis (ISVV task), with assigned software criticality categories and ISVV levels.
critical software requirements list: List of critical software requirements as determined by the Technical Specification Analysis Criticality Analysis (ISVV task), with assigned software criticality categories and ISVV levels.
critical software units list: List of critical software units as determined by the Code Analysis Criticality Analysis (ISVV task), with assigned software criticality categories and ISVV levels.
critical system functions list: List of critical system functions as determined by system level safety and dependability analyses, with assigned software criticality categories and ISVV levels.
criticality: A measure of the consequence of an undesirable event. NOTE: The consequence may be in terms of safety, dependability, maintainability, security, environmental impact, economic impact etc.
dependability: Collective term used to describe the availability performance and its influencing factors: reliability performance, maintainability performance and maintenance support performance.
NOTE: Dependability is used only for general descriptions in non-quantitative terms. [ISO 9000:2000] in [ECSS-P-001B:2004].
error potential level: An assessment of the potentially negative impact of characteristics of the development organisation, the development process or the software itself on software quality.
ISVV customer: An organisation or person that receives an ISVV service. The ISVV customer is one of the two parties of an ISVV contract, the other being the ISVV supplier. The ISVV customer is usually either a system supplier (system integrator, prime, software customer) or a system customer (system owner). NOTE: See also the definition of customer in [ECSS-P-001B:2004].
ISVV level: A number on an ordinal scale assigned to a software component or function to designate the required level of verification and validation to
apply to the component or function. The ISVV level is defined from 0 to 2, with 0 meaning no ISVV, 1 meaning light ISVV, and 2 meaning rigorous ISVV.
ISVV supplier: An organisation or person that provides an ISVV service. The ISVV supplier is one of the two parties of an ISVV contract, the other being the ISVV customer. The ISVV supplier must have full technical, managerial, and financial independence with respect to the ISVV customer (in some cases independence is reduced; there can be various degrees of independence). NOTE: See also the definition of supplier in [ECSS-P-001B:2004].
item: Item is used as a common term denoting system function, software requirement, software component, or software unit.
process: Set of interrelated or interacting activities which transform inputs into outputs. NOTE 1: Inputs to a process are generally outputs of other processes. NOTE 2: Processes in an organization are generally planned and carried out under controlled conditions to add value. NOTE 3: A process where the conformity of the resulting product cannot be readily or economically verified is frequently referred to as a special process. [ISO 9000:2000] in [ECSS-P-001B:2004].
safety: System state where an acceptable level of risk with respect to: - fatality, - injury or occupational illness, - damage to launcher hardware or launch site facilities, - damage to an element of an interfacing manned flight system, - the main functions of a flight system itself, - pollution of the environment, atmosphere or outer space, and - damage to public or private property is not exceeded. NOTE 1: The term safety is defined differently in ISO/IEC Guide 2 as freedom from unacceptable risk of harm. [ECSS-P-001B:2004]
software: See software product.
software component: Part of a software system. NOTE 1: Software component is used as a general term. NOTE 2: Components can be assembled and decomposed to form new components.
In the production activities, components are implemented as modules, tasks or programs, any of which can be configuration items. This usage of the term is more general than in ANSI/IEEE parlance, which defines a component as a basic part of a system or program; in this Standard, components are not always basic, as they can be decomposed. [ECSS-E-40B:2003]
software critical item list: A general term covering the critical system functions list, critical software requirements list, critical software components list, and critical software units list.
software criticality analysis: An analysis resulting in the definition of a software critical item list; it is carried out with the purpose of defining the scope of ISVV. Criticality is related to safety and dependability but may refer to, for example, security, maintainability or any other
property defined by the ISVV Customer. NOTE: This definition deviates from the usage (there is no definition) in [ECSS-E-40B:2003] and [ECSS-Q-80B:2003], where software criticality analysis is a safety and dependability analysis and a requirement for all software.
software criticality category: A number or letter designating the criticality of a failure mode or an item. Software criticality categories are defined as part of a software criticality scheme.
software criticality scheme: The definition of a set of software criticality categories used for a specific project or purpose. The categories are ordered from low to high criticality. NOTE 1: There are usually 4 or 5 software criticality categories in a software criticality scheme. NOTE 2: In space projects, software criticality categories are usually named A to D, with A being the most critical. For ISVV, software criticality categories are numbered 1 to 4. This numerical scale will normally correspond to the alphabetical software criticality categories so that 4 is equivalent to A, 3 to B, 2 to C, and 1 to D. However, the software criticality categories assigned to functions, software products, software requirements, software components, and software units for the purposes of ISVV may be different from those assigned for development. The two scales are intended to avoid confusion.
software product: Set of computer programs, procedures, documentation and their associated data [ECSS-E-40B:2003]. NOTE: software and software item are synonyms of software product.
software unit: Separately compilable piece of source code. NOTE: In this Standard no distinction is made between a software unit and a database; both are covered by the same requirement.
[ECSS-E-40B:2003]
test case: A specification of a test in terms of: - a description of the purpose of the test, - preconditions (e.g., the state of the software under test), - actions, - post conditions (expected reaction).
test procedure: The instantiation of a test case, using a specific test language.
validation: Confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled. NOTE: The validation process (for software) is the process to confirm that the requirements baseline functions and performances are correctly and completely implemented in the final product. [ECSS-E-40B:2003]
verification: Confirmation, through the provision of objective evidence, that the specified requirements have been fulfilled. NOTE: The verification process (for software) is the process to confirm that adequate specifications and inputs exist for any activity, and that the outputs of the activities are correct and consistent with the specifications and input. [ECSS-E-40B:2003]
A.2. Acronyms
B - The B method (Formal Methods)
CA - Code Analysis
CAR - Code Analysis Review
CCS - Calculus of Communicating Systems (Formal Methods)
CFL - Critical Function List
CSP - Communicating Sequential Processes (Formal Methods)
DAR - Design Analysis Review
FDIR - Fault Detection, Isolation and Recovery
FMEA - Failure Mode and Effects Analysis
FMECA - Failure Modes, Effects, and Criticality Analysis
FSM - Finite State Machines
ISVVF - ISVV Facility
ISVVL - ISVV Level
IV&V - Independent Verification and Validation. Synonym for ISVV. IV&V is the acronym used by NASA and in [IEEE 1012:1998]
IVA - Independent Validation
IVE - Independent Verification
IVR - Independent Validation Review
IVTP - Independent Validation Test Plan
IVTPR - Independent Validation Test Plan Review
LOTOS - Language Of Temporal Ordering Specifications (Formal Methods)
MISRA C - Motor Industry Software Reliability Association (C language)
NCR - Non-conformance Report
NDA - Non-Disclosure Agreement
OCL - OMG Object Constraint Language
P&F - Process & Facility
PN - Petri Nets (Formal Methods)
RAISE - Rigorous Approach for Industrial Software Engineering (Formal Methods)
RAVEN - Reliable Ada Verifiable Executive Needed
RID - Review Item Discrepancy
SCC - Software Criticality Category
SDL - Specification Description Language (Formal Methods)
SFMECA - Software Failure Mode Effects and Criticality Analysis
SFTA - Software Fault Tree Analysis
SPR - Software Problem Report
SRR - System Requirements Review
SUM - Software User Manual
SVF - Software Validation Facility
TA - Technical Specification Analysis
TAR - Technical Specification Analysis Review
UML - Unified Modelling Language
V&V - Verification and Validation
VDM - Vienna Development Model (Formal Methods)
WCET - Worst Case Execution Time
Z - The Z method or the Z Notation (Formal Methods)
Annex B. ISVV activity outputs B.1. ISVV Plan Outline This annex contains an outline of the ISVV supplier's plan for the execution of an ISVV project. The outline also includes text describing the contents of each section, as well as pointers to previous sections of this document relevant to the plan. <1> Introduction <1.1> Background The background should set the stage for the ISVV project, describing the context in which the software subject to ISVV is to be used, the end customer, other relevant organisations involved in the construction of the final system, the overall schedule of the project, etc. The section should also describe the background for the current ISVV project: why is ISVV being done for this project? <1.2> Purpose This section should describe the purpose of the ISVV plan. Who are the readers of the plan and how will it be used? <1.3> Scope and Objectives The scope shall define the boundaries of the work. It should provide a short description of the software products to be subject to ISVV (as identified by the initial ISVV level definition activities, see section 4.0 and Annex D) as well as a summary of the verification and validation activities to be carried out. The verification and validation objectives should also be described (see section 2.1). <1.4> Assumptions and Constraints This section should describe any assumptions and constraints of the ISVV plan. These could be related to the size or maturity of documents and code, the timing of execution of ISVV activities, or any other aspect whose change may impact the execution of the ISVV project. These assumptions and constraints should of course also be reflected in the ISVV contract. <1.5> Outline This section should provide an outline of the rest of the plan. <2> Applicable and reference documents This section should list all references made in the plan. References may be divided into applicable and referenced documents.
Typical documents to refer to include:
- ISVV customer's requirements for project management and reporting,
- Applicable international standards on software engineering, software product assurance or other,
- Applicable project-specific standards, e.g. the ISVV customer's software product assurance plan, software development standards, etc.,
- ISVV project Statement of Work,
- ISVV project Contract,
- ISVV supplier's plans for Configuration Management, Document Control, Product Assurance, etc.,
- ISVV supplier's quality management system and ISVV process definition.
<3> Terms, definitions and abbreviated terms
<3.1> Definitions
This section should list all definitions used in the plan.
<3.2> Acronyms and Abbreviations
This section should list all acronyms and abbreviations used in the plan.
<4> Project Plan Overview
<4.1> General
The purpose of the Project Plan Overview is to provide a short summary of the ISVV project, describing roles, work packages, schedule, and deliverables. This is an executive summary of the plan, also serving as a map to later sections.
<4.2> Project Organisation and Management
<4.2.1> Organisational structure
This section should identify the organisational structure, including the relationships with other organisational parties involved in the ISVV contract.
<4.2.2> Roles and Responsibilities
This section should identify the parties involved in the ISVV contract (ISVV supplier and customer), parties with which communication is or may be necessary (e.g. software developers, end customer of the system, sub-contractors) as well as other parties relevant to the project (see section 2.5). The section should also describe the ISVV supplier's internal organisation, clearly identifying the roles of contractual manager, project manager, technical manager, product assurance manager, configuration manager as well as technical personnel. The responsibilities and authorities associated with each role should be described. The names and contact information (phone/fax number and email/mail address) of each person taking up the different roles should also be included.
<4.3> Scheduling and Milestones
The schedule and milestones section should identify the major milestones of the project and map out the activities described in the preceding section in time (see section 2.2). The schedule should be described in terms of a Gantt chart or some other graphical representation (which may be a separate document, but which should be referred to).
Assumptions and constraints of the scheduling should be clearly defined. The schedule may also show the budget and resource allocation of individual activities. See the description of the various ISVV activities for guidance on when activities should be initiated.
<4.4> Resources and Infrastructure
This section should describe the non-personnel resources required for the execution of the project, including office tools, configuration management tools, verification and validation tools, special facilities, etc. Purchase of new software or equipment should be planned. The availability of resources should be planned for; where possible and necessary, resources may have to be reserved. In some cases, the ISVV supplier may not be in control of all necessary infrastructure (e.g. if validation is to be performed on the validation facility of the ISVV customer). Such use should be planned (and regulated by contract).
<4.5> Meetings and Reviews
The meetings and reviews section should list the progress and review meetings to be held throughout the project, with their scheduled date and location. It should also provide instructions for the preparation of the meetings as well as the writing of minutes, etc. In addition, the possibility of technical meetings when needed should be mentioned. See section 2.2 for an overview of the review milestones defined for the ISVV process and section 3.5.1 for setting the review meetings in context.
<5> Control procedures for verification and validation activities
The plan shall contain information about (or a reference to) the applicable management procedures concerning the following aspects:
a. problem reporting and resolution;
b. deviation and waiver policy;
c. control procedures.
<6> ISVV activities and tasks identification
<6.1> Management activities
<6.1.1> Quality Management
The plan should describe the quality assurance scheme of the ISVV project. This may be a reference to the quality management system of the ISVV supplier.
<6.1.2> Document and Code Management
This section should describe (or provide a reference to a document describing) the handling of documents and code received from outside the project and produced by the project. This includes the identification, registration, configuration management, review, approval, and filing of documents and code.
<6.1.3> Risk Management
This section should describe how the ISVV supplier intends to identify, register, analyse, prioritise, and follow up risks related to the execution of the ISVV project. The plan may contain an initial risk analysis. However, it is advisable to keep a risk management database outside the plan; it should be continuously updated throughout the project.
<6.1.4> Security Management
The confidentiality requirements of the project should be referenced in the plan.
Risks related to breach of confidentiality should be identified, and suitable security measures described and implemented, both for sending information across the internet or via mail and for physically securing documents and PCs. See also section 2.4.
<6.1.5> Metrics
This section should list the metrics to be collected during the ISVV project, also describing the purpose and use of the metrics.
<6.2> Work Breakdown Structure
The work breakdown structure describes how the ISVV work is split into individual work packages, possibly organised in a hierarchical fashion. The section should be introduced by a graphical overview of all of the work packages, followed by a subsection (or sub-subsection) per work package.
The work breakdown structure should reflect the ISVV supplier's defined process (based on the ISVV process of this guide) and the defined scope of the process. The process description could be part of the ISVV plan or a separate document, which should then be referred to. Each work package should be described in terms of:
- Identification
- Name
- Input documents
- Output documents
- Controlling documents
- Activities to be carried out, with supporting methods and tools
- Budget
Responsibility, start event, and end event may also be described here, or in the schedule if this is more convenient.
<7> Communication and Reporting
This section should describe the communication which will take place between the ISVV supplier and other parties within the scope of the ISVV project, both formal and informal, and both technical and managerial. It should clarify responsibilities for producing and distributing documents and indicate when communication will take place. It should also specify what communication must be formal and how such communication shall take place. This section should include:
- Formal communications, such as the software supplier's documentation and code, the ISVV reports, and the ISVV project progress reports,
- Internal communications, such as early findings, etc.
<Appendix A>: Project Directory
The Project Directory should include contact information for all persons involved in the project.
<Appendix B>: Deliverables
The deliverables section should list all the formal deliverables of the project, with a reference to the work package producing the deliverable, the title, and the document identification.
B.2. Requests for Clarification
In some cases, the contents of documents and/or code may be considered unclear or difficult to understand by the ISVV supplier. The way to handle this depends on how strictly one wants to maintain the technical independence of the ISVV supplier.
There are two main options: either to report the obscurity as an ISVV finding or to request a clarification from the ISVV customer.
Reporting everything as findings has the advantage that, with time, it will probably lead to better, more precise specifications. The disadvantage is that a number of these findings are likely not to be problems, but may reflect the ISVV supplier's limited understanding of the system and the software application (especially early in the ISVV project). Receiving too many findings which are seen as irrelevant may undermine the credibility of the ISVV supplier in the view of the ISVV customer and the software developers.
Requesting clarifications has the advantage of:
allowing the ISVV supplier to continue with the ISVV activity with a better understanding of the system and the software. The disadvantage is that the ISVV supplier may in this way adopt faulty assumptions and presuppositions held by the ISVV customer, thus effectively masking a potential problem.
This is a real dilemma. Both approaches have been tried in real projects and there is no definite conclusion as to which is best. The recommendation is to allow requests for clarification, but to ensure that such requests are properly recorded so that the obscurity found is not simply forgotten. The clarifications requested and the responses given should be included in the ISVV report.
B.3. ISVV Report (with ISVV Findings)
The findings of ISVV are reported in an ISVV report. There will usually be several ISVV reports produced by an ISVV project, e.g. one per ISVV activity per software product. The ISVV report shall highlight all the potential problems identified by the ISVV activity. The ISVV supplier shall classify the findings into e.g. major, minor, and comment, based on an assessment of the potential consequence of the finding. The classification must later be assessed by the ISVV customer, who may reclassify findings. The ISVV report should be presented to the ISVV customer, who should review it and accept it. If requests for clarification have been allowed, the requests and the clarifications should also be included in the verification report. In the end, the ISVV customer must approve the ISVV report. An example form for reporting individual review item discrepancies is included in Annex C. The earlier an ISVV finding is implemented to correct the supplied software, the lower the cost; the ISVV supplier may therefore provide early feedback on major issues to the ISVV customer.
It is the responsibility of the ISVV customer to filter the ISVV findings as presented in the ISVV report and to consider whether a particular finding warrants the creation of a software problem report.
B.4. Progress Reports
For ISVV projects of more than a few weeks' duration (as is likely to be the case for most projects), the ISVV supplier should provide regular progress reports to the ISVV customer. The progress report describes the progress of the project with respect to the plan and (if not a fixed-price contract) the budget, also notifying the customer of any problem areas. Progress reports will often be issued in conjunction with progress meetings.
B.5. ISVV Findings Resolution Report
For each ISVV report, the ISVV customer should produce an ISVV findings resolution report, describing how the ISVV findings have been dealt with, i.e. whether they are discarded or result in a software problem report. This may be part of the review item discrepancy form in which findings are reported.
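The reporting chain described in sections B.3 to B.5 (the supplier classifies findings, the customer reviews, filters, and dispositions them) can be sketched as a small data model. This is an illustrative sketch only: the class names and fields are assumptions, not part of the ISVV process definition; only the major/minor/comment classes and the discard/problem-report dispositions are taken from the text above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Severity(Enum):            # classification used in the ISVV report (B.3)
    MAJOR = "major"
    MINOR = "minor"
    COMMENT = "comment"

class Disposition(Enum):         # outcome recorded in the resolution report (B.5)
    OPEN = "open"
    DISCARDED = "discarded"
    PROBLEM_REPORT = "software problem report"

@dataclass
class Finding:
    finding_id: str
    description: str
    supplier_severity: Severity                    # assigned by the ISVV supplier
    customer_severity: Optional[Severity] = None   # possible reclassification by the customer
    disposition: Disposition = Disposition.OPEN    # set by the ISVV customer (B.5)

def resolution_summary(findings):
    """Count findings per disposition, for the ISVV findings resolution report."""
    counts = {d: 0 for d in Disposition}
    for f in findings:
        counts[f.disposition] += 1
    return counts

findings = [
    Finding("F-001", "Ambiguous requirement", Severity.MAJOR,
            customer_severity=Severity.MINOR,
            disposition=Disposition.PROBLEM_REPORT),
    Finding("F-002", "Editorial remark", Severity.COMMENT,
            disposition=Disposition.DISCARDED),
]
print(resolution_summary(findings))
```

The key design point mirrored from the text is that the supplier's classification and the customer's reclassification are kept as separate fields, so the original assessment is never lost.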
Annex C. Review Item Discrepancy Form Example
The form below is an example of a RID form. Each project shall decide whether the tool and RID forms already used in the project are to be used instead for the ISVV findings, making sure that homogeneity is preserved for later sets of RIDs (e.g. after the review).
Review Item Discrepancy
- Item No: Unique identifier of the RID.
- Date: Origination date of the RID (e.g. in DD.MM.YYYY format).
- Author: Author/originator of the RID.
- Title: Title of the RID.
- Originating subtask: Identification of the verification subtask in which the RID was identified (e.g. IVE.CA.T2.S1).
- Document reference: Identification of the document against which the RID was raised.
- Problem location: Identification of the inconsistency location (document page, software component, source file and line).
- Problem description: Claim: the description of the problem found. Recommendation: a recommendation aiming at the resolution of the RID (this is an optional field).
- Problem type (see Table 6): RID classification according to the problem types from Table 6.
- Severity classification (see Table 7): RID severity classification according to Table 7. The severity classification is originally assigned by the ISVV Supplier and then reviewed and possibly updated together with the ISVV Customer.
Table 5: RID Form
The Problem Type may be described in terms of the categories below:
- External consistency: The item presents an inconsistency against an item of an applicable or referenced document (e.g. the component design is not according to an applicable software requirement). What is implemented is not what is required.
- Internal consistency: The item presents an inconsistency against another item in the same document (e.g. the description of an interface is not consistent between the interface user and the interface provider).
- Correctness: The item is incorrectly implemented; technical issues are being violated.
Consider, for instance, an activity diagram that contains a deadlock condition: that would be a case to which correctness applies.
- Technical feasibility: The item is not technically feasible taking into account the applicable constraints (e.g. the architecture makes extensive use of advanced object-oriented techniques but the application is to be written in the C language).
- Readability & Maintainability: The item is hard to read and/or to maintain. An individual other than the author will have serious difficulties in implementing and/or maintaining the item. The information provided is confusing and may therefore lead to wrong interpretations.
- Completeness: The item is not completely defined or the provided information is not sufficient (e.g. the description of the service 5 telemetry provided in the user manual does not allow for a clear identification of the error cause). If an item does not completely implement a requirement or interface defined in an applicable or reference document, one shall use external consistency instead.
Table 6: RID Problem Type Categories
Each RID may be described in terms of the following severity classes:
- Major: The discrepancy found presents a major threat to the system. Its correction is pertinent.
- Minor: The discrepancy found is a minor issue. Although it does not present a major threat to the system, it should be corrected.
- Comment: The discrepancy found does not present any threat to the system. The RID was raised as a recommendation aiming at improving the quality of the affected item. The implementation of the recommended correction is optional.
Table 7: RID Severity Classes
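When RIDs are managed in a simple tool rather than a paper form, the form fields of Table 5 and the enumerations of Tables 6 and 7 can be encoded directly. The sketch below is illustrative (the Python names are assumptions, the field set follows Table 5); it rejects a RID whose mandatory fields are left empty:

```python
from dataclasses import dataclass
from enum import Enum

class ProblemType(Enum):                  # Table 6
    EXTERNAL_CONSISTENCY = "External consistency"
    INTERNAL_CONSISTENCY = "Internal consistency"
    CORRECTNESS = "Correctness"
    TECHNICAL_FEASIBILITY = "Technical feasibility"
    READABILITY_MAINTAINABILITY = "Readability & Maintainability"
    COMPLETENESS = "Completeness"

class SeverityClass(Enum):                # Table 7
    MAJOR = "Major"
    MINOR = "Minor"
    COMMENT = "Comment"

@dataclass
class RID:                                # fields mirror the form in Table 5
    item_no: str
    date: str                             # e.g. "29.12.2008" (DD.MM.YYYY)
    author: str
    title: str
    originating_subtask: str              # e.g. "IVE.CA.T2.S1"
    document_reference: str
    problem_location: str
    claim: str                            # description of the problem found
    problem_type: ProblemType
    severity: SeverityClass               # assigned by the supplier, reviewed with the customer
    recommendation: str = ""              # optional field

    def __post_init__(self):
        mandatory = [self.item_no, self.date, self.author, self.title,
                     self.originating_subtask, self.document_reference,
                     self.problem_location, self.claim]
        if not all(mandatory):
            raise ValueError("all mandatory RID fields must be filled in")

# Hypothetical example RID, reusing the deadlock example given for "Correctness":
rid = RID("RID-042", "29.12.2008", "J. Doe", "Deadlock in activity diagram",
          "IVE.DA.T2.S5", "SDD issue 1.2", "Section 4.3, Figure 7",
          "The activity diagram contains a deadlock condition.",
          ProblemType.CORRECTNESS, SeverityClass.MAJOR,
          recommendation="Add a guard releasing the shared resource.")
```

Keeping the problem types and severities as closed enumerations preserves the homogeneity between RID sets that the annex asks for.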
Annex D. Summary of ISVV tasks, activities and methods and techniques
Note: when the ISVV Level is not specified, the method always applies.
MAN.PM.T1
- MAN.PM.T1.S1: Define ISVV objectives (ISVV Customer)
- MAN.PM.T1.S2: Perform System Level ISVV level definition (ISVV Customer)
- MAN.PM.T1.S3: Define the ISVV scope and determine the ISVV budget (ISVV Customer)
- MAN.PM.T1.S4: Perform Technical Specification ISVV level definition (ISVV Customer or ISVV Supplier)
- MAN.PM.T1.S5: Estimate ISVV scope and budget (ISVV Supplier)
- MAN.PM.T1.S6: Develop ISVV plan (ISVV Supplier)
- MAN.PM.T1.S9: Approve scope definition resulting from ISVV level definition (ISVV Customer)
MAN.PM.T2
- MAN.PM.T2.S1: Manage ISVV project (ISVV Supplier)
- MAN.PM.T2.S2: Submit documentation and code to ISVV Supplier (ISVV Customer)
- MAN.PM.T2.S3: Check received documentation (ISVV Supplier)
- MAN.PM.T2.S4: Familiarization with software and system product under ISVV (ISVV Supplier)
- MAN.PM.T2.S5: Submit the verification and validation testing environment (ISVV Customer)
- MAN.PM.T2.S6: Perform verification and validation activities (ISVV Supplier)
- MAN.PM.T2.S7: Request clarifications (ISVV Supplier)
- MAN.PM.T2.S8: Respond to Requests for Clarification (ISVV Customer)
- MAN.PM.T2.S9: Report early ISVV findings (ISVV Supplier)
- MAN.PM.T2.S10: Review early ISVV findings (ISVV Customer)
- MAN.PM.T2.S11: Produce ISVV report (ISVV Supplier)
- MAN.PM.T2.S12: Filter ISVV findings (ISVV Customer)
- MAN.PM.T2.S13: Draft disposition of ISVV findings (SW supplier)
- MAN.PM.T2.S14: Conduct review meeting (ISVV Customer)
- MAN.PM.T2.S15: Produce ISVV findings resolution report (ISVV Customer)
- MAN.PM.T2.S16: Implement resolutions (ISVV Customer)
- MAN.PM.T2.S17: Update ISVV level definition (ISVV Supplier)
MAN.VV.T1
- MAN.VV.T1.S1: Identify the software criticality scheme used for the mission.
- MAN.VV.T1.S2: Evaluate whether the defined software criticality scheme is relevant for the ISVV objective. If it is not, then define a new software criticality scheme for ISVV.
- MAN.VV.T1.S3: If there is a Critical Function List and the criticality scheme it is based on is relevant for the ISVV objective, then use this CFL.
- MAN.VV.T1.S4: If there is no Critical Function List or the ISVV objective does not match the criteria used to derive it, perform a simplified system FMECA along the lines described in Annex E.3.
- MAN.VV.T1.S5: Identify each software product and its supplier. Fill in the error potential questionnaire (see Annex D) for each software product (by error potential assessment).
- MAN.VV.T1.S6: Assign an ISVV level to each system function based on the software criticality category of the function and the error potential.
MAN.VV.T2
- MAN.VV.T2.S1: For each software product implementing critical system functions, identify any SFMECA based on the available Technical Specification.
- MAN.VV.T2.S2: If an SFMECA exists and the criticality scheme used as a basis is relevant for the ISVV objective, then it may be used as a basis for deriving the critical software requirements list.
- MAN.VV.T2.S3: If no such analyses have been carried out, the quality is too poor, or the ISVV objective differs from the presumptions of the SFMECA, perform a simplified SFMECA based on the Technical Specification, including Interface Control Documents.
- MAN.VV.T2.S4: Verify the consistency of the SFMECA with the critical system functions list. If discrepancies are found, notify the ISVV customer, who will have to consider the consequences in terms of re-analysis.
- MAN.VV.T2.S5: For each software requirement, derive the software criticality category by identifying the highest criticality category of any failure mode associated with it.
- MAN.VV.T2.S6: Assign an ISVV level to each software requirement based on the software criticality category of the requirement and the error potential.
MAN.VV.T3
- MAN.VV.T3.S1: Review the findings of the safety/dependability analysis performed as part of the Technical Specification Analysis.
- MAN.VV.T3.S2: If design-level safety and dependability analyses exist from the developer, investigate whether these may be used to assign software criticality categories to design components.
- MAN.VV.T3.S3: If not, trace the software requirements to software architectural design components. Assign to each software component the highest software criticality category of any requirement tracing to it.
- MAN.VV.T3.S4: Alternatively, extend the SFMECA carried out at software requirements level by identifying software components as causes for requirements failure modes.
- MAN.VV.T3.S5: Identify any dependency mechanisms for the design language used (e.g. use or call relationships).
- MAN.VV.T3.S6: Analyse the dependency of critical components on other components and adjust the software criticality category of these components to be the same as that of the critical component depending on them.
- MAN.VV.T3.S7: Assign an ISVV level to each software component based on the software criticality category of the component and the error potential (there is no need to reassess the error potential unless different answers to the error potential questionnaire are expected at this level).
Methods/techniques for these sub-tasks include: error potential assessment (Annex D), SFMECA, review or re-assessment of the error potential assessment, and inspection of traceability matrices.
- MAN.VV.T3.S8: Software criticality categories and ISVV levels may also be assigned to detailed design software components.
MAN.VV.T4
- MAN.VV.T4.S1: Review the findings of the safety/dependability analysis performed as part of the Design Analysis. Evaluate the consistency with the critical system function list, the critical software requirements list and the critical software component list produced by earlier criticality analyses. If discrepancies are found, notify the ISVV customer, who will have to consider the consequences in terms of re-analysis.
- MAN.VV.T4.S2: If code-level safety and dependability analyses exist from the developer, investigate whether these may be used to assign software criticality categories to software units. The software criticality scheme should be relevant for ISVV, the analysis should be based on the same versions of the code as ISVV (or else a delta analysis must be carried out), and the results of any higher-level analyses it is based on should not be in conflict with the results of the Design Analysis.
- MAN.VV.T4.S3: If not, identify mapping rules from software design components to software units. For each software component (either architectural design component or detailed design component), trace the software component to source code. Assign to each software unit the software criticality category of the software component it implements (by inspection of traceability matrices).
- MAN.VV.T4.S4: When software complexity is a risk factor, define complexity measures for the software units and calculate them (by software metrics analysis).
- MAN.VV.T4.S5: Fill in the error potential questionnaire (see Annex D) for each software unit, taking into account the complexity measures when applicable (by error potential assessment, Annex D).
- MAN.VV.T4.S6: Assign an ISVV level to each software unit based on the software criticality category of the software unit and the error potential (by error potential assessment, Annex D).
IVE.TA.T1
- IVE.TA.T1.S1: Verify the software requirements external consistency with the system requirements. Method: inspection, reviewing the traceability matrices produced by the software supplier.
- IVE.TA.T1.S2: Verify the interface requirements external consistency with the system requirements. Method: inspection, reviewing the traceability matrices produced by the software supplier and, in case they do not exist, producing them.
- IVE.TA.T1.S3: Verify software requirements correctness. Method: inspection (ISVV Level 1); modelling 23 or formal methods (ISVV Level 2).
- IVE.TA.T1.S4: Verify the consistent documentation of the software requirements. Method: inspection (ISVV Level 1); modelling or formal methods (ISVV Level 2).
- IVE.TA.T1.S5: Verify software requirements completeness. Method: inspection (ISVV Level 1); modelling (ISVV Level 2).
- IVE.TA.T1.S6: Verify the dependability and safety requirements. Method: inspection (ISVV Level 1); SFMECA 24 or modelling (ISVV Level 2).
- IVE.TA.T1.S7: Verify the readability of the software requirements. Method: inspection.
- IVE.TA.T1.S8: Verify the timing and sizing budgets of the software requirements. Method: inspection.
23 Applicable only to a range of software requirements.
24 In case the SFMECA was applied in the criticality analysis performed at requirements level for the ISVV level definition (MAN.VV activity), any RIDs generated then should be added to this technical specification verification report.
- IVE.TA.T1.S9: Identify test areas and test cases for Independent Validation. Method: from the other IVE.TA.T1 tasks (ISVV Level 1); modelling (ISVV Level 2).
- IVE.TA.T1.S10: Verify that the software requirements are testable. Method: inspection.
- IVE.TA.T1.S11: Verify software requirements conformance with applicable standards. Method: inspection.
IVE.DA.T1
- IVE.DA.T1.S1: Verify the SW architectural design external consistency with the Technical Specification. Method: inspection, reviewing the traceability matrices produced by the software supplier.
- IVE.DA.T1.S2: Verify the SW architectural design external consistency with the Interface Control Documents. Method: inspection, reviewing the traceability matrices produced by the software supplier.
- IVE.DA.T1.S3: Verify interface consistency between different SW components. Method: inspection (ISVV Level 1); modelling 25 (ISVV Level 2).
- IVE.DA.T1.S4: Verify architectural design correctness. Method: inspection (ISVV Level 1); modelling 26 or simulation 27 (ISVV Level 2).
- IVE.DA.T1.S5: Verify architectural design completeness. Method: inspection.
- IVE.DA.T1.S6: Verify the dependability & safety of the design. Method: inspection (ISVV Level 1); modelling, simulations, SFMECA or formal methods (ISVV Level 2).
- IVE.DA.T1.S7: Verify the readability of the architectural design. Method: inspection.
- IVE.DA.T1.S8: Verify the timing and sizing budgets of the software. Method: inspection (ISVV Level 1); schedulability analysis including WCET (ISVV Level 2).
- IVE.DA.T1.S9: Identify test areas and test cases for Independent Validation. Method: from the other IVE.DA.T1 tasks.
- IVE.DA.T1.S10: Verify architectural design conformance with applicable standards. Method: inspection.
- IVE.DA.T1.S11: Verify the tests performed on the high-level model (if models are produced by the SW suppliers). Method: inspection (ISVV Level 1); simulation or modelling (ISVV Level 2).
- IVE.DA.T1.S12: Verify the development, verification and testing methods and environment (if models are produced by the SW suppliers).
- IVE.DA.T1.S13: If models are produced by the SW
suppliers, then construct model test cases.
- IVE.DA.T1.S14: If models are produced by the SW suppliers, then construct model test procedures.
- IVE.DA.T1.S15: If models are produced by the SW suppliers, then execute the model test procedures. Method: inspection.
25 UML: component, activity, communication, interaction, sequence, timing
26 UML: component, composite, deployment, package, activity, sequence, state machine
27 Simulation may be used to validate high-level algorithms
IVE.DA.T2
- IVE.DA.T2.S1: Verify the detailed design external consistency with the Technical Specification. Method: inspection, reviewing the traceability matrices produced by the software developer.
- IVE.DA.T2.S2: Verify the detailed design external consistency with the Interface Control Documents. Method: inspection, reviewing the traceability matrices produced by the software developer.
- IVE.DA.T2.S3: Verify the detailed design external consistency with the Architectural Design. Method: inspection.
- IVE.DA.T2.S4: Verify interface consistency between different SW components. Method: inspection (ISVV Level 1); modelling 25 (ISVV Level 2).
- IVE.DA.T2.S5: Verify detailed design correctness. Method: inspection (ISVV Level 1); modelling 28 or simulation 29 (ISVV Level 2).
- IVE.DA.T2.S6: Verify detailed design completeness. Method: inspection.
- IVE.DA.T2.S7: Verify the dependability & safety of the design. Method: inspection (ISVV Level 1); modelling, simulations, SFMECA 30 or formal methods (ISVV Level 2).
- IVE.DA.T2.S8: Verify the readability of the detailed design. Method: inspection.
- IVE.DA.T2.S9: Verify the timing and sizing budgets of the software. Method: inspection (ISVV Level 1); schedulability analysis (including WCET) (ISVV Level 2).
- IVE.DA.T2.S10: Verify the accuracy of the model (in case models are produced by the SW suppliers). Method: inspection.
- IVE.DA.T2.S11: Identify test areas and test cases for Independent Validation. Method: from the other IVE.DA.T2 tasks.
- IVE.DA.T2.S12: Verify detailed design conformance with applicable standards. Method: inspection.
IVE.DA.T3
- IVE.DA.T3.S1: Verify the timing and sizing budgets of the software. Method: inspection (ISVV Level 1); independent validation tests (ISVV Level 2).
- IVE.DA.T3.S2: Verify that the dependability & safety aspects of the product are specified in the SUM. Method: inspection (ISVV Level 1); independent validation tests (ISVV Level 2).
- IVE.DA.T3.S3: Verify the readability of the User Manual. Method: inspection.
- IVE.DA.T3.S4: Verify the completeness of the User Manual. Method: inspection.
- IVE.DA.T3.S5: Verify the correctness of the User Manual. Method: inspection.
IVE.CA.T1
- IVE.CA.T1.S1: Verify source code external consistency with the Technical Specification. Method: inspection, reviewing the traceability matrices produced by the software developer.
- IVE.CA.T1.S2: Verify source code external consistency with the Interface Control Documents. Method: inspection, reviewing the
traceability matrices produced by the software developer.
- IVE.CA.T1.S3: Verify source code external consistency with the Architectural Design and Detailed Design. Method: inspection, reviewing the traceability matrices produced by the software developer.
- IVE.CA.T1.S4: Verify interface consistency between different SW units. Method: inspection (ISVV Level 1); reverse engineering 31 (ISVV Level 2).
- IVE.CA.T1.S5: Verify source code correctness with respect to the technical specification, architectural design and detailed design. Method: inspection (ISVV Level 1); reverse engineering 31 (ISVV Level 2).
28 UML: class, component, package, activity, interaction, sequence, state machine
29 Simulation can be used to verify sporadic algorithms (e.g. communication protocols)
30 The method used here aims at verifying the independence of the FDIR mechanisms from the faults they are supposed to handle
31 Reverse engineering will create control flow, data flow, etc. from the source code. The ISVV supplier should note that it is often difficult to compile the code at the ISVV site, which some automatic verification tools require.
- IVE.CA.T1.S6: Verify the source code readability, maintainability and conformance with the applicable standards. Method: inspection (ISVV Level 1); software static analysis, metrics, coding standard conformance checking, bug pattern identification (ISVV Level 2).
- IVE.CA.T1.S7: Verify the dependability & safety of the source code. Method: inspection (ISVV Level 1); modelling, simulations, SFMECA, software metrics analysis, bug pattern identification (ISVV Level 2).
- IVE.CA.T1.S8: Verify the accuracy of the source code. Method: inspection (ISVV Level 1); numeric analysis (ISVV Level 2).
- IVE.CA.T1.S9: Identify test areas and test cases for Independent Validation. Method: from the other IVE.CA.T1 tasks.
- IVE.CA.T1.S10: Verify the timing and sizing budgets of the software. Method: inspection (ISVV Level 1); schedulability analysis (including WCET) (ISVV Level 2).
IVE.CA.T2
- IVE.CA.T2.S1: Verify consistency with the Technical Specification. Method: inspection, reviewing the traceability matrices produced by the software developer.
- IVE.CA.T2.S2: Verify consistency with the Software Architectural Design. Method: inspection, reviewing the traceability matrices produced by the software developer.
- IVE.CA.T2.S3: Verify integration test procedures correctness and completeness. Method: inspection.
- IVE.CA.T2.S4: If models are produced by the SW suppliers, then evaluate model verification and validation test results. Method: inspection.
- IVE.CA.T2.S5: Verify integration test reports.
IVE.CA.T3
- IVE.CA.T3.S1: Verify consistency with the Software Detailed Design. Method: inspection, reviewing the traceability matrices produced by the software developer.
- IVE.CA.T3.S2: Verify unit test procedures correctness and completeness. Method: inspection.
- IVE.CA.T3.S3: Verify unit test reports.
IVA.T1
- IVA.T1.S1: Evaluate task input.
- IVA.T1.S2: Perform analysis. Method: inspection only; use the checklist in Annex G.7 (ISVV Levels 1 and 2).
- IVA.T1.S3: Write the Independent Validation Test Plan. Method: inspection.
IVA.T2
- IVA.T2.S1: Acquire knowledge about the SVF. Method: inspection or dry runs.
- IVA.T2.S2: Implement test cases into test procedures.
- IVA.T2.S3: Update the Independent Validation Test Plan.
IVA.T3
- IVA.T3.S1: Execute the test procedures.
- IVA.T3.S2: Investigate failed tests.
- IVA.T3.S3: Produce the test report.
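In the summary above, most sub-tasks pair inspection with ISVV Level 1 and add further techniques at ISVV Level 2. Read that way, technique selection can be sketched as a simple lookup. This is a hypothetical reading covering three example sub-tasks; the assumption that Level 2 work also includes the Level 1 techniques reflects the cumulative nature of the levels but is not stated explicitly in the table.

```python
# Techniques per sub-task, keyed by the ISVV level at which they are introduced.
# The three entries are transcribed from the Annex D summary as examples.
TECHNIQUES = {
    "IVE.TA.T1.S3": {1: ["inspection"], 2: ["modelling", "formal methods"]},
    "IVE.DA.T1.S8": {1: ["inspection"], 2: ["schedulability analysis (incl. WCET)"]},
    "IVE.CA.T1.S8": {1: ["inspection"], 2: ["numeric analysis"]},
}

def techniques_for(subtask, isvv_level):
    """Techniques applicable to a sub-task at a given ISVV level."""
    if isvv_level == 0:
        return []                 # ISVVL 0: no ISVV activities are required
    selected = []
    for level in sorted(TECHNIQUES.get(subtask, {})):
        if level <= isvv_level:   # assumption: higher levels include lower-level techniques
            selected.extend(TECHNIQUES[subtask][level])
    return selected

print(techniques_for("IVE.TA.T1.S3", 2))  # inspection plus the Level 2 techniques
```

A table-driven lookup of this kind keeps the ISVV plan's scope definition traceable back to the Annex D summary when the ISVV level of an item changes.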
Annex E. ISVV Levels and Software Criticality Categories

E.1. ISVV Level definition overview

The ISVV Level is a number on an ordinal scale assigned to a system function, software requirement, component or unit to designate the required level of verification and validation. System functions will not be verified as part of ISVV, but they nevertheless have to be assigned ISVV Levels, because the critical system functions list is the basis for budgeting the ISVV project. The following ISVV Levels are defined:

Level | Description
ISVVL 0 | No ISVV activities are required.
ISVVL 1 | Basic ISVV is required.
ISVVL 2 | Full ISVV is required.

Table 8: ISVV levels

For a given verification and validation activity, the ISVV Level guides the selection of tasks and the rigour with which each task within the activity is performed. The ISVV Level is derived from the Software Criticality Category (SCC), but may be adjusted upward if there are other risk factors warranting increased verification and validation (see next section). The software criticality scheme is defined by the project prime and allows tailoring of the development process to the criticality of the software. A common scheme may have been defined for all of the software products embedded in the system, or several different schemes may be in use. As an example, draft ECSS-Q-80C 32 defines four criticality categories as the basis for the ones to be defined per project. The ISVV supplier may also perform the criticality analysis itself (using the simplified SFMECA method, see Annex E.3) to identify the software criticality categories for the ISVV level definition.

The ISVV Level for a software item is determined by two factors:
a) the software criticality category;
b) in addition, a range of factors associated with the specific software product which may lead one to consider intensifying the verification and validation of the software.
These factors are not related to the criticality of the software as determined by safety or dependability analyses, but to characteristics of the development organisation, the development process or the software itself which may affect the quality of the software. This second factor is called the error potential. The factors influencing the error potential are listed as yes/no questions in a questionnaire in the next section. The more yes responses, the higher the potentially negative impact on software quality. Based on a qualitative assessment one may then decide to increase the ISVV level, e.g. from ISVVL 1 to ISVVL 2, or to reduce it, e.g. from ISVVL 2 to ISVVL 1. It should be noted that when the ISVV level of an item is raised from 0 to 1, this is effectively equivalent to increasing

32 Draft ECSS-Q-80C is referred to here in advance of the version C still to be published. This Guide is compliant with the latest published ECSS-Q-80B.
the number of items subject to ISVV. The table below shows the mapping from software criticality category and error potential to ISVV Level:

Error Potential: | Low | Medium | High
SCC 4 | ISVVL 2 | ISVVL 2 | ISVVL 2
SCC 3 | ISVVL 1 | ISVVL 1 or ISVVL 2 | ISVVL 2
SCC 2 | ISVVL 0 | ISVVL 1 | ISVVL 1 or ISVVL 2
SCC 1 | ISVVL 0 | ISVVL 0 | ISVVL 1

Table 9: Matrix to derive ISVV level from Software Criticality Category and Error Potential

For some software criticality categories and error potential levels, the table provides two choices of ISVV Level, the decision being left to expert judgement after assessment of the error potential.
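As an illustration, the mapping of Table 9 can be captured in a small lookup table. The cell values below follow one possible reading of the matrix; cells offering two levels return both candidates, so that the final choice is explicitly left to expert judgement:

```python
# Sketch of the Table 9 lookup: candidate ISVV level(s) from Software
# Criticality Category (SCC 1..4) and error potential level. A tuple with
# two entries marks a cell where expert judgement decides.
ISVV_LEVEL = {
    (4, "low"): (2,), (4, "medium"): (2,),   (4, "high"): (2,),
    (3, "low"): (1,), (3, "medium"): (1, 2), (3, "high"): (2,),
    (2, "low"): (0,), (2, "medium"): (1,),   (2, "high"): (1, 2),
    (1, "low"): (0,), (1, "medium"): (0,),   (1, "high"): (1,),
}

def isvv_level(scc: int, error_potential: str) -> tuple:
    """Return the candidate ISVV level(s); more than one entry means the
    decision is left to expert judgement."""
    return ISVV_LEVEL[(scc, error_potential)]

print(isvv_level(3, "medium"))  # -> (1, 2): expert judgement decides
```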
E.2. Error Potential Questionnaire 33

The table below shows the error potential questionnaire with the weight of each question. For the software code criticality analysis, the weight of the question related to complexity is 5 when complexity is considered a risk factor (an error potential) for the specific project. In all other cases, the weight of each question is 1. The score is computed as the weighted sum of all yes answers, which is then normalised.

ID | Question | Weight | Score
1 | Is the number of people in the software development team (including development, verification and validation) more than 20? | 1 | [0..1]
2 | Is the development team split across several geographical working locations (more than 5 minutes walking distance)? | 1 | [0..1]
3 | Is the maturity of the software development team's process low, as measured by a suitable internationally recognised software process assessment approach? | 1 | [0..1]
4 | Is the software development team lacking in experience with the software technology, the domain, or the application? | 1 | [0..1]
5 | Is the software supplier lacking in experience with development of software of the required criticality level? | 1 | [0..1]
6 | Does development of the software require innovative designs? | 1 | [0..1]
7 | Are the software requirements still unstable? | 1 | [0..1]
8 | Is there any potential impact of project schedule pressure? | 1 | [0..1]
9 | When complexity is a risk factor for the project: is the complexity of the software high? | (see NOTE 2) |

NOTE 1: Software complexity is inherently difficult to define. Factors which may influence complexity include the size of the software, the number of components and the number of relationships between components, the complexity of algorithms, and the number of internal and external interfaces. Complexity is not only an intrinsic property of the software itself, but also depends on the tools available to visualise the software as well as on the cognitive capacity of the observer.
NOTE 2: The weight of this question is 5 when evaluating the error potential for the software code criticality analysis; otherwise it is either 0 (when no information is available) or 1. Weight: 0/1/5; score: [0..5].

10 | In the case of existing software, is there a lack of historical data providing confidence that the requirements of this project are fulfilled? | 1 | [0..1]

Sum | 9/10/14 | [0..14]
Normalised score (sum of scores / sum of weights) | | [0..1]

Table 10: Error Potential Questionnaire

The mapping between the normalised score and the error potential levels is shown in the table below:

Error potential score | Error potential level
[0 .. 1/3) | Low
[1/3 .. 2/3) | Medium
[2/3 .. 1] | High

Table 11: Mapping from error potential score to error potential level

33 The questions have been inspired by [NASA IV&V].
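The scoring rule above (weighted sum of yes answers, normalised by the sum of weights, then bucketed into thirds) can be sketched as follows; the question identifiers and the complexity-weight parameter are illustrative:

```python
# Sketch of the error potential scoring. 'answers' maps question id (1..10)
# to True (yes) or False (no); question 9 carries weight 0, 1 or 5
# depending on whether complexity is a risk factor for the project.
def error_potential_level(answers: dict, complexity_weight: int = 1) -> str:
    weights = {q: 1 for q in range(1, 11)}
    weights[9] = complexity_weight            # 0, 1 or 5
    total_weight = sum(weights.values())
    score = sum(weights[q] for q, yes in answers.items() if yes)
    normalised = score / total_weight         # in [0..1]
    if normalised < 1 / 3:
        return "low"
    if normalised < 2 / 3:
        return "medium"
    return "high"

answers = {q: False for q in range(1, 11)}
answers[7] = answers[8] = True    # unstable requirements, schedule pressure
print(error_potential_level(answers))  # 2/10 = 0.2 -> "low"
```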
E.3. Procedures for Performing Simplified FMECA

E.3.1. System FMECA

If the criticality categories are to be redefined for the purpose of ISVV, the existing safety/dependability analyses cannot be reused directly. The existing system FMECA may be used as a starting point, provided that it is function oriented and not component oriented. If the system FMECA does not address software related functions, such an analysis will have to be prepared from scratch. However, an FMECA carried out for the purpose of determining (i.e. limiting) the scope of ISVV can be somewhat simplified. The columns that need to be filled in are:

- FMECA #
- Item
- Function
- Failure mode
- Consequence
- Operational phase/mode
- Severity (criticality)

Generic failure modes such as function output wrong, late, none or spurious may be used to guide the analysis. It is important to consider to what extent the failure mode may be mitigated by an FDIR function. This is further elaborated in the section below.

E.3.2. Software Requirements FMECA

If no software requirements FMECA is available, a simplified analysis (e.g. a simplified SFMECA) must be produced to identify the most critical software requirements. The following procedure represents the most basic way of identifying the criticality category of the software requirements. The procedure may be based on expert sessions. A traceability matrix from software requirements to system requirements will aid the process considerably.

1. Obtain the software requirements specification and the system requirements specification.
2. Obtain the system critical functions list and define the criticality categories before performing the analysis.
3. Go through all software requirements together with at least one expert who knows the software design and the system design well.
4. Ask the question: to which system function is the software requirement linked?
a.
If no link to critical system functions is found, the software requirement might be non-critical and nothing needs to be done with respect to ISVV.
b. If one or more links to critical system functions are found, consider whether a failure in the functionality specified in the requirement could contribute to the failure modes of the system functions.
c. Also consider whether there is an FDIR function that will mitigate a failure in the specified functionality. This may be used to downgrade the criticality of the functionality.
d. Document the link and the reasoning for the classification of the functionality.
5. When all information has been gathered, the links can be summarised in a table, with one critical function failure mode per column and the software requirements as rows. There should always be at least one entry per column (at least one software requirement per critical system function).
6. In an additional column of the table, the maximum criticality category is written down for each software requirement.
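Steps 5 and 6 above amount to building a coverage table and taking a per-requirement maximum; a minimal sketch, with all requirement and failure mode identifiers invented for illustration:

```python
# Links between software requirements and the critical system function
# failure modes they could contribute to (step 5). Identifiers hypothetical.
links = {
    "SRS-010": ["F1: output wrong", "F2: output late"],
    "SRS-011": ["F2: output late"],
    "SRS-012": [],  # no link: candidate non-critical requirement
}
criticality = {"F1: output wrong": 4, "F2: output late": 3}  # per failure mode

# Step 6: assign each requirement the maximum criticality of the failure
# modes it can contribute to (lowest category if it has no links).
for req, modes in links.items():
    scc = max((criticality[m] for m in modes), default=1)
    print(req, "-> SCC", scc)
# SRS-010 -> SCC 4, SRS-011 -> SCC 3, SRS-012 -> SCC 1

# Sanity check of step 5: every critical failure mode is covered by at
# least one software requirement.
covered = {m for modes in links.values() for m in modes}
assert covered == set(criticality), "uncovered critical function failure mode"
```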
A well-founded functional model is very useful as the basis for both the system level software criticality analysis and the technical specification criticality analysis, for example in situations where the software requirements specification provides no logical top-down functional breakdown of the system and overview is lost in detail.

As there may be thousands of software requirements in a spacecraft, it may be too costly to base the granularity of the analysis on individual requirements. Alternatives are to consider all requirements that are linked to a critical system function equally critical, or to introduce higher level software functions to create an intermediate level. Compared to an analysis per individual requirement, this may increase the scope of the ISVV analysis at design and code level and thereby increase downstream costs. This is therefore a trade-off that needs to be considered on a case by case basis.

It is not a trivial matter to decide upon the criticality of the various functions and requirements. Below are examples of issues that need to be considered carefully in each project:

- What level of partitioning is required between functions that are assigned different criticality? E.g. in safety-of-life related applications very strong partitioning will be required. However, in applications that are not related to safety of life, functions sharing resources such as memory may be assigned different criticality simply to spend a limited ISVV budget in the most effective way.
- To what extent will a specific software related problem be fully mitigated by FDIR functionality? E.g. range checking, CRC, time stamping, watchdogs, transition to degraded mode, system reset, patching of software, etc.
- Is there a difference in the criticality of a function when it is executed in system-degraded mode compared to nominal mode?
- Shall the system be single fault tolerant to any software related problems?
- Can all FDIR mechanisms be used in all phases of the mission, or are there restrictions, e.g. related to timing?

It should also be considered whether the scoping of ISVV activities should differ between manual analysis and automated analysis. Automated static analysis of software with low criticality may in some cases be considered a cost-effective 34 risk reduction if the partitioning towards more critical software is weak.

34 Automated analysis is generally less costly than manual analysis. However, findings reported by automated tools have to be confirmed by manual investigation in most cases. Therefore it must be carefully considered whether the risk reduction justifies the costs.
Annex F. Methods

This annex provides a brief description of each of the methods considered for design analysis. Portions of the definitions presented herein were extracted from [PASCON WO12-TN2.1:2000] and [ECSS-Q80-03:2004]. The methods are presented in the following sequence:

- Formal Methods
- Inspection
- Modelling
- Data Flow Analysis
- Control Flow Analysis
- Real-Time Properties Verification
  o Schedulability Analysis
  o Worst Case Execution Time Computation
- Reverse Engineering
- Simulation (Design execution)
- Software Failure Modes, Effects and Criticality Analysis (SFMECA)
- Static Code Analysis
  o Coding Standard Conformance
  o Bug Pattern Identification
  o Software Metrics Analysis
- Traceability Analysis

F.1. Formal Methods

Formal Methods provide a means of developing an analysable description of a software system at some stage in its development life-cycle: specification, design, or code. Formal Methods generally offer a notation (usually some form of discrete mathematics), a technique for deriving a description in that notation, and various forms of mathematical analysis for checking a description for various classes of inconsistency or incorrectness. Some examples of Formal Methods and formal specification languages are B, RAISE, VDM, Z, Petri Nets, SDL and Finite State Machines. Further information on these methods can be found in [PASCON WO12-TN2.1:2000]. Formal Methods do suffer from certain limitations. In particular, Formal Methods can prove that an implementation satisfies a formal specification, but they cannot prove that a formal specification captures a user's intuitive informal understanding of a system. In other words, Formal Methods can be used to verify a system, but not to validate a system. The extent of this limitation should not be underestimated: the reduction of informal application knowledge to a rigorous specification is a key problem area in the development of large systems.

F.2.
Inspection

From the ISVV point of view, inspection can be defined as an evaluation technique in which software requirements, design, code, or other work products are formally examined by a person or group (the inspection team) to detect faults, violations of development standards, and other problems. An inspection begins with the distribution of the item to be inspected (e.g. a specification, some code and test data). Each inspector is required to analyse the item on his own. All errors found are recorded, but no attempt is made to correct them at that time. Inspections may be performed in any development phase. All ISVV activities defined in this guide that are suggested to be performed by inspection have a supporting checklist in Annex G. The basic method of software inspection for design and code was originally defined by Fagan in 1976 [INSPEC:1976]. The method has subsequently been applied to the verification of software requirements, where it is said to be most effective when
individual reviewers are assigned specific responsibilities and where they use systematic techniques for meeting those responsibilities [DETECT:1995, I.A]. Alternatively, [ISO 9000:2000] defines inspection as an activity such as measuring, examining, testing or gauging for conformity evaluation.

F.3. Modelling

Modelling consists of the elaboration of a model of the system using a modelling tool and/or language (e.g. SysML, UML, SpecTRM-RL, etc.). The primary target of this method is to help identify missing and incomplete requirements. Modelling can be used to cover a broad range of analyses or sub-tasks, such as data flow analysis, control flow analysis, state machine diagrams, etc. The method may be applied to all or to specific parts of the system under verification.

The OMG Systems Modelling Language (OMG SysML) is a general-purpose modelling language for systems engineering applications. It can be used for requirements modelling and analysis. It also allows an efficient and straightforward identification of test cases. SysML supports the specification, analysis, design, and verification and validation of a broad range of complex systems. These systems may include hardware, software, information, processes, personnel, and facilities. The origins of the SysML initiative can be traced to a strategic decision by the International Council on Systems Engineering's (INCOSE) Model Driven Systems Design workgroup in January 2001 to customize the Unified Modelling Language (UML) for systems engineering applications. This resulted in a collaborative effort between INCOSE and the Object Management Group (OMG), which maintains the UML specification, to jointly charter the OMG Systems Engineering Domain Special Interest Group (SE DSIG) in July 2001. Currently it is common practice for systems engineers to use a wide range of modelling languages, tools and techniques on large systems projects.
In a manner similar to how UML unified the modelling languages used in the software industry, SysML is intended to unify the diverse modelling languages currently used by systems engineers. SysML reuses a subset of UML 2 and provides additional extensions needed to address the requirements in the UML for Systems Engineering Request for Proposal (RFP). SysML is designed to provide simple but powerful constructs for modelling a wide range of systems engineering problems. It is particularly effective in specifying requirements, structure, behaviour, allocations, and constraints on system properties to support engineering analysis. UML 2 defines 13 basic diagram types, divided into two general classes:

- Structural diagrams. This class comprises diagrams that define the static architecture of a model (elements of a specification that are irrespective of time). These diagrams are used to model the building blocks that make up the full model: classes, objects, interfaces and physical components. Structure diagrams are also used to model the relationships and dependencies between elements. The class includes class, component, composite structure, deployment, object and package diagrams.
- Behavioural diagrams. This class comprises diagrams that depict the behavioural features of a system or business process. It includes activity, state machine, and use case diagrams as well as the interaction diagrams.
The Behavioural class is further divided into a subclass named Interaction Diagrams, defined as the subset of behavioural diagrams which emphasize object interactions. This includes communication, interaction overview, sequence, and timing diagrams. The next table presents the UML 2 diagram types grouped by class.

UML 2 Structural Diagrams

- Class Diagram: Shows a collection of static model elements such as classes and types, their contents, and their relationships. This type of diagram defines the basic building blocks that are used to build the full model.
- Component Diagram: Depicts the components that compose an application, system, or enterprise; the components, their interrelationships, interactions, and their public interfaces are depicted. This type of diagram is used to model higher level or more complex structures, usually built up from one or more classes and providing a well defined interface.
- Composite Structure Diagram: Depicts the internal structure of a classifier (such as a class, component, or use case), including the interaction points of the classifier with other parts of the system. It provides a means of layering an element's structure and focusing on inner detail, construction and relationships.
- Deployment Diagram: Shows the execution architecture of systems. This includes nodes, either hardware or software execution environments, as well as the middleware connecting them. In other words, this type of diagram shows the physical disposition of significant artefacts within a real-world setting.
- Object Diagram: Depicts objects and their relationships at a point in time, typically a special case of either a class diagram or a communication diagram. Shows how instances of structural elements are related and used at run-time.
- Package Diagram: Shows how model elements are organized into packages, as well as the dependencies between packages. Diagrams of this type are used to define the high level architecture of the system.

UML 2 Behavioural Diagrams

- Activity Diagram: Depicts high-level business processes, including data flow, or models complex logic within a system. This type of diagram has a wide number of uses, from defining basic program flow to capturing the decision points and actions within any generalized process.
- Communication Diagram: Shows instances of classes, their interrelationships, and the message flows between them. Communication diagrams typically focus on the structural organization of objects that send and receive messages. Formerly called a Collaboration Diagram in UML 1.x.
- Interaction Overview Diagram: A variant of an activity diagram which overviews the control flow within a system or business process. Each node/activity within the diagram can represent another interaction diagram.
- Sequence Diagram: Models sequential logic, in effect the time ordering of messages between classifiers. It is closely related to communication diagrams and shows the sequence of messages passed between objects using a vertical timeline.
- State Machine Diagram: Describes the states an object or interaction may be in, as well as the transitions between states. Formerly referred to as a state diagram, statechart diagram, or state-transition diagram.
- Timing Diagram: Depicts the change in state or condition of a classifier instance or role over time. Typically used to show the change in state of an object over time in response to external events.
- Use Case Diagram: Shows use cases, actors, and their interrelationships. This type of diagram is used to model user/system interactions. Use case diagrams define behaviour, requirements and constraints in the form of scripts or scenarios.

Table 12: UML 2 diagram types

OCL [OCL] can also be used when modelling techniques are mentioned in the ISVV activities. It is a formal language used to describe expressions on UML models.
Traditional formal languages have the disadvantage that they are usable only by persons with a strong mathematical background, and are difficult for the average system modeller to use. OCL has been developed to fill this gap: it is a formal language that remains easy to read and write. OCL is not a programming language; it is therefore not possible to write program logic or flow control in OCL. OCL is a typed language, so each OCL expression has a type. To be well formed, an OCL expression must conform to the type conformance rules of the language. For example, you cannot compare an Integer with a String. Each Classifier defined within a UML model represents a distinct OCL type. OCL can be used for a number of different purposes:

- As a query language
- To specify invariants on classes and types in the class model
- To specify type invariants for Stereotypes
- To describe pre- and post-conditions on Operations and Methods
- To describe Guards
- To specify target (sets) for messages and actions
- To specify constraints on operations
- To specify derivation rules for attributes

OCL can be used for any expression over a UML model.

F.4. Data Flow Analysis

Data flow analysis checks the behaviour of program variables as they are initialised, modified or referenced, as if the program were executing. Data flow diagrams are used to facilitate this analysis. The purpose is to detect poor and potentially incorrect program structures. Data flow analysis combines the information obtained from control flow analysis with information about which variables are read or written in different portions of the code. It may also be used in the design and implementation phases. Data flow analysis can support the dependability and safety assessment with respect to the analysis of failures and faults in the product. It complements the engineering activities: many tools and methods used for the design engineering of the product already allow data flows to be defined, and where such diagrams are already provided they should be reused to analyse the dependability and safety aspects of the software product, that is, potential faults existing in the product.

F.5. Control Flow Analysis

Control flow analysis is most applicable to real-time and data-driven systems. Logic and data requirements are transformed from text into graphic flows, which are easier to analyse. Examples of control flow diagrams include, among others, PERT, state transition, and transaction diagrams. These analyses are intended to identify unreachable code, dead code, inconsistent or incomplete interface mechanisms between modules, and logic errors inside a module.
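As a toy illustration of the kind of anomaly data flow analysis reports, the deliberately simplified checker below (not a real tool) flags a variable that is read before any statement along a straight-line path has written it:

```python
# Each statement records which variables it writes and reads; walking the
# statements in order and tracking the defined set exposes read-before-write
# anomalies. Statements and variable names are invented for illustration.
statements = [
    {"writes": {"x"}, "reads": set()},        # x := 0
    {"writes": {"y"}, "reads": {"x", "z"}},   # y := x + z  (z never set)
    {"writes": {"z"}, "reads": {"y"}},        # z := y
]

def read_before_write(stmts):
    defined, anomalies = set(), []
    for i, s in enumerate(stmts):
        anomalies += [(i, v) for v in s["reads"] - defined]
        defined |= s["writes"]
    return anomalies

print(read_before_write(statements))  # -> [(1, 'z')]
```

Real data flow analysers work on all paths of the control flow graph rather than a single straight-line sequence, which is why the two analyses are combined.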
Control flow analysis can support the dependability and safety assessment with respect to the analysis of failures and faults in the product and of their possible propagation. It complements the engineering activities: many tools and methods used for the design engineering of the product already allow the control flow to be defined (IDEF0, etc.), and where such diagrams are already provided they should be reused to analyse the dependability and safety aspects of the software product, that is, potential faults existing in the product.

F.6. Real-Time Properties Verification

F.6.1. Schedulability Analysis

Schedulability Analysis aims at determining whether a specific software system is schedulable or not; in other words, whether the software system meets the deadlines it was designed for (does each function execute within the required time limit). Cyclic models are by their nature deterministic, and the duration and completion of each function can be determined. On the contrary, pre-emptive models are non-deterministic, since
the functions may be triggered by the asynchronous occurrence of events. In the case of hard real-time systems, deadlines are defined and must imperatively be respected when a service is provided. In this case, if a pre-emptive model is adopted, the application shall be analysed in order to verify that it meets the deadline requirements. When verifying the schedulability analysis provided by the SW supplier, the ISVV supplier will first need to verify that the arguments in the schedulability analysis are sound. If a multitasking operating system is used, this analysis will typically be based on so-called Rate Monotonic Analysis (RMA), and the ISVV supplier must therefore be familiar with this framework. To this end, Scheduling Models have been defined, based on the Rate Monotonic or Deadline Monotonic scheduling algorithms and the Ceiling Priority Inheritance Protocol [AUDSLEY:1991, HRTOSK:1991]. Such Scheduling Models make it possible to verify that all critical tasks are schedulable (that is, can be executed within their deadlines) under worst case execution time conditions.

The ISVV supplier shall verify that the SW supplier has:

- For each task, provided information on: priority, period, start time, worst case execution time, deadline, and at what time shared resources will be used (blocked) by the other tasks during one execution cycle.
- Selected a strategy for handling the so-called priority inversion problem.
- Used the worst case blocking time for resources shared between tasks in the calculations, considering the combined effects of all tasks.
- Given reasonable priorities to the different tasks, taking into account their period, workload and deadline.

If multitasking is used, it is recommended to use simulation tools supporting Rate Monotonic Analysis when performing the verification.
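For reference, the classical sufficient (but not necessary) Rate Monotonic utilisation bound of Liu and Layland can be sketched as below; a real verification must additionally account for blocking times, release jitter and context-switch overheads, as noted above:

```python
# Rate Monotonic schedulability sketch: independent periodic tasks with
# deadline equal to period pass if total utilisation stays below the
# Liu & Layland bound n * (2^(1/n) - 1). Task parameters are illustrative.
def rm_utilisation_test(tasks):
    """tasks: list of (worst_case_execution_time, period) pairs.
    Returns True if the sufficient RM bound guarantees schedulability."""
    n = len(tasks)
    utilisation = sum(c / t for c, t in tasks)
    return utilisation <= n * (2 ** (1 / n) - 1)  # bound -> ln 2 as n grows

tasks = [(1, 4), (1, 5), (2, 10)]   # (C, T): U = 0.25 + 0.20 + 0.20 = 0.65
print(rm_utilisation_test(tasks))   # 0.65 <= 0.780 -> True
```

A task set that fails this test is not necessarily unschedulable; exact response-time analysis (or simulation tools, as recommended above) is then needed.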
The second task of the ISVV supplier is to verify that assumptions made in the schedulability analysis are reflected in the design and code, e.g.:

- Assumptions regarding the use of shared resources are reasonable.
- Task priorities are correct.
- If the priority ceiling algorithm is used to mitigate the priority inversion problem, it should be verified that the ceiling priority for each resource corresponds to the highest priority task actually using the resource.
- If the priority ceiling algorithm is not used, inspection of the design and source code should consider the possibility of deadlocks with respect to tasks waiting for shared resources.
- The possibility of direct and indirect recursion should be evaluated.
- It should be verified that mitigating measures are in place that will clean up the situation in case of task overrun.

F.6.2. Worst Case Execution Time Computation

Schedulability Analysis serves to exercise the scheduling algorithm in order to check whether the software system is schedulable or not. However, the method requires vital input information: the duration of each task. That information can be obtained by applying another method, the Worst Case Execution Time (WCET) calculation. The execution time of a program generally depends on its input data, which determine a certain execution path within the code according to its control flow instructions. The Worst
Case Execution Time (WCET) is thus defined as the maximum value of the execution time over the set of all possible input data. There are a number of methods for predicting the WCET of a function or program, e.g.:

- Use of static analysis tools to derive the WCET directly from source code.
- Measurement of the WCET through testing (for individual tasks or for a whole program).
- Simulation of task execution.

These methods may be used in isolation or combined. It should be noted that, due to the large number of execution paths in source code, WCET data will typically be an approximation. WCET data shall be provided in the software supplier's schedulability analysis.

F.7. Reverse Engineering

Reverse engineering is the process of extracting software system information (including documentation) from source code [IEEE Std 1219-1998]. It is the process of analyzing a subject system (or software product) to: identify its components and their interrelationships; and create representations of the software product in another form or at a higher level of abstraction. Reverse engineering generally involves extracting design components and building or synthesizing abstractions that are less implementation-dependent. Reverse engineering does not involve changing the subject system or creating a new system based on the reverse-engineered subject system. It is a process of examination, not a process of change or replication.

F.8. Simulation (Design execution)

Simulation consists of exercising parts or a model of the overall software/system in order to check its behavioural characteristics and to assess its feasibility, accuracy, etc. This method implies the elaboration of a model of the system and of the environment it interacts with.

F.9.
Software Failure Modes, Effects and Criticality Analysis (SFMECA)

Software failure modes and effects analysis (SFMEA) is an iterative method intended to analyse the effects and criticality of failure modes of the software within a system. SFMECA extends SFMEA by assigning a criticality category to each software failure. SFMEA and SFMECA are based on FMEA and FMECA respectively, the latter two being targeted at hardware/equipment analysis. The main purposes of SFMEA/SFMECA are to reveal weak or missing requirements, to identify latent software failures, and to assign software criticality categories. SFMEA/SFMECA use intuitive reasoning to determine the effect on the system of a component failing in a particular failure mode. For example, if a function of a train crossing system is to turn on warning lights as a train approaches the crossing and to leave the lights on until the train has passed, some of its failure modes could be:

- the lights do not turn on when a train approaches
- the lights turn on although no train is coming
- the lights turn off too soon (before the train has fully crossed)
- the lights turn on too late (after the train has begun crossing)

The effect on the system of each component's failure in each failure mode would then be assessed by developing a matrix for each component. The criticality factor, that is, the
seriousness of the effect of the failure, can be used in determining where to apply other analyses and testing resources.
F.10. Static Code Analysis
F.10.1. Coding Standard Conformance
Coding Standard Conformance is a method that aims at checking whether the implemented source code follows a specific coding convention or set of coding rules (this includes checking coding style, naming conventions, etc.). Coding Standard Conformance verification usually turns out to be an exhaustive task, and therefore a tool for automating it is required. Coding Conformance Verification may be used to verify user-defined conventions or standards such as Ada RAVEN, MISRA C, etc.
F.10.2. Bug Pattern Identification
Bug Pattern Identification consists of the identification of known programming language and library bug patterns. Like Coding Standard Conformance, Bug Pattern Identification is not a complete method: it is always possible to identify further patterns. This method can be applied manually, but that is only feasible for very small systems. Therefore, a tool for automating the method will be required in the majority of cases. Fortunately, a significant number of high-quality tools are available.
F.10.3. Software Metrics Analysis
Software Metrics Analysis consists of evaluating the quality of the software based upon a set of extracted metrics, such as McCabe's cyclomatic complexity, percentage of comments per statement, number of subprogram exit points, etc. While some metrics are widely accepted (McCabe's cyclomatic complexity is probably the best example), many others vary or are particular to a specific tool. It is therefore impossible to say that a tool is complete with respect to the set of metrics it provides. This fact is not critical because Software Metrics Analysis is to be used as a companion method; it is not intended to be complete.
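As an illustration of the kind of measure such tools extract, the following minimal sketch (an illustration only, not part of the ISVV process; the function name and the simple counting rule are assumptions) approximates McCabe's cyclomatic complexity for Python functions by counting decision points with the standard `ast` module:

```python
import ast

# Node types treated as decision points; each adds one to the complexity.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict:
    """Approximate McCabe's cyclomatic complexity per function:
    1 + number of decision points in the function body."""
    tree = ast.parse(source)
    results = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            decisions = sum(isinstance(n, DECISION_NODES)
                            for n in ast.walk(node))
            results[node.name] = 1 + decisions
    return results

example = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(example))  # {'classify': 3}
```

Industrial metrics tools refine this considerably (boolean-operator branching, nested subprograms, language-specific constructs); the sketch only conveys the principle of deriving a metric mechanically from source code.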
Software Metrics Analysis may support the safety and dependability subtask when complexity measures are used to point to complex code areas more likely to contain software faults.
F.11. Traceability Analysis
The traceability analysis method consists of analysing the tracing of (finding the correspondence of) specific items of one lifecycle phase to items of another lifecycle phase. Typically, items are traced across adjacent lifecycle phases, and the traceability can be done from inputs to outputs (forward traceability, e.g. tracing software requirements to architectural elements) or from outputs to inputs (backwards traceability, e.g. tracing architectural elements to technical requirements). The main purpose of traceability analysis is to check the consistency and completeness of the work products being reviewed. Traceability analysis is performed by analysing a table with at least two columns, the so-called traceability matrix. In the case of backwards traceability, the first column is filled with the outputs of the phase (e.g., for architectural design analysis, all the design elements) and then, for every output, the analyst checks for the matching inputs (e.g., for architectural design analysis, all the software requirements).
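The backwards traceability check described above can be mechanised once the matrix is available in machine-readable form. The sketch below is a hypothetical illustration (identifiers such as `DE-01` and `SR-100` are invented): it flags design elements with no matching requirement and, as the complementary forward view, requirements covered by no design element.

```python
# Hypothetical backwards traceability matrix:
# design element (output) -> software requirements (inputs) it traces to.
trace = {
    "DE-01": ["SR-100"],
    "DE-02": ["SR-101", "SR-102"],
    "DE-03": [],               # untraced design element
}
requirements = {"SR-100", "SR-101", "SR-102", "SR-103"}

# Check 1: every design element traces to at least one requirement.
untraced = sorted(e for e, reqs in trace.items() if not reqs)

# Check 2 (forward view): every requirement is covered by at least
# one design element.
covered = {r for reqs in trace.values() for r in reqs}
uncovered = sorted(requirements - covered)

print("Untraced design elements:", untraced)   # ['DE-03']
print("Uncovered requirements:", uncovered)    # ['SR-103']
```

In practice the matrix is extracted from the design documents or a requirements-management tool; the point is that both completeness checks reduce to simple set operations.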
Annex G. Checklists
G.1. Requirements Review Checklists
1. Interfaces (To support IVE.TA.T1.S2) Verified
1.1. Are all applicable system interface requirements traced to the interface specification?
1.2. Are all interface specifications traced to the applicable system interface requirements?
1.3. Do the interface specifications correctly represent the system interface requirements allocated to software within the assumptions and constraints identified for the system, taking into consideration any state transitions, data and control flows, and data usage and format?
1.4. Are the interface specifications defined at a level of detail consistent with the system interface requirements originating them?
1.5. Do the interface specifications accurately specify the system interface requirements?
1.6. For embedded software, do the software requirements include specifications about the implementation of the hardware and register controls?
1.7. Do the software requirements completely and consistently cover the HW/SW and SW/SW interface protocol definitions?
1.8. Do the software requirements define mechanisms to prevent/correct HW/SW and SW/SW interface errors (including communication errors between subsystems) and error handling mechanisms?
1.9. Do the software requirements completely and consistently define HW/SW and SW/SW interface (communications) recovery?
2. Correctness & completeness (To support IVE.TA.T1.S1, S3) Verified
2.1. Do the software requirements correctly represent the system requirements allocated to software within the assumptions and constraints identified for the system, taking into consideration any state transitions, data and control flows, and data usage and format?
2.2. Do software requirements and modelled physical phenomena comply with applicable physical laws?
2.3. Does the precision specified for interfaces and calculations represent the requirements of the system?
2.4.
Is the specification of system events consistent with their intended interpretation?
2.5. Are the mode invariants for each system mode defined (i.e. under what conditions must the system exit or remain in a given mode)?
2.6. Can the system's initial conditions fail to satisfy the initial mode's invariant?
2.7. Is there a sequence of events that allows the system to enter a mode without satisfying the mode's invariant?
2.8. Is there a sequence of events that allows the system to enter a mode, but never leave it (deadlock)?
2.9. Do the software requirements include specifications about start-up/restart handling, such as system initialization, recovery after power off, reconfiguration activities, etc.?
2.10. Do the software requirements include specifications about entry into alarm/safe/failure mode?
2.11. For embedded software, do the software requirements include specifications about event/interrupt handling and the bus interface?
2.12. For embedded software, do the software requirements define actions for memory (RAM/EEPROM) checks and error handling?
2.13. For embedded software, do the software requirements define software patch/dump procedures?
2.14. Are the software requirements externally and internally consistent (not implying formal proof of consistency)? Are interactions between software requirements and assumptions embedded in them consistent? Do they represent system requirements?
3. Documentation (To support IVE.TA.T1.S4) Verified
3.1. Are the software requirements documented at a consistent level of detail that represents the system requirements?
3.2. Are the interface specifications documented at a consistent level of detail?
3.3. Are software requirements interactions and assumptions correctly, consistently and completely described?
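Questions 2.6–2.8 above concern properties of the specified mode machine that can be checked mechanically once the transition relation is written down. The following minimal sketch (the mode names and transition relation are invented for illustration) finds modes that the system can enter but never leave, i.e. candidates for question 2.8:

```python
# Hypothetical mode-transition relation: mode -> set of successor modes.
transitions = {
    "INIT":    {"NOMINAL"},
    "NOMINAL": {"SAFE", "NOMINAL"},
    "SAFE":    set(),          # no exit: candidate deadlock (question 2.8)
}

def reachable(start, transitions):
    """Modes reachable from `start` (depth-first search)."""
    seen, stack = set(), [start]
    while stack:
        mode = stack.pop()
        if mode not in seen:
            seen.add(mode)
            stack.extend(transitions.get(mode, set()))
    return seen

# A mode the system can enter but never leave.
deadlocks = sorted(m for m in reachable("INIT", transitions)
                   if not transitions.get(m))
print(deadlocks)   # ['SAFE']
```

The analyst still has to extract the transition relation from the requirements; the value of the sketch is showing that, once extracted, the deadlock question is a simple reachability computation rather than a manual inspection.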
4. Completeness (To support IVE.TA.T1.S5) Verified
4.1. Do the software requirements completely implement the system architecture and the system requirements they are traced to, in terms of functional and performance specifications, software product quality requirements, security specifications, human factors engineering (ergonomics) specifications, data definitions and database requirements?
4.2. Do the software requirements also include the specification of the interfaces external to the software item?
4.3. Do the software requirements include specifications for in-flight modification (when applicable)?
5. Structural verification (To support IVE.TA.T1, S1, S3) Verified
5.1. Is the information defined for each data item appearing in the external interface section consistent with its description in the overview, which should contain:
- Name;
- Class or function it belongs to (e.g., input port, output port, application variable, abbreviated term, function);
- Data type (e.g., integer, time, Boolean, enumeration);
- Acceptable values: are there any constraints, ranges or limits for the values of this data?
- Failure value: does the data have a special failure value?
- Units or rates?
5.2. Are the initial values of the data defined?
5.3. If data represents a physical quantity, are its units properly specified?
5.4. If the data's value is computed, can that computation generate a non-acceptable value?
5.5. For each functional requirement that identifies data references, do all data references obey formatting conventions?
5.6. Are all data referenced in the functional requirements listed in the input or output sections?
5.7. Can any data use be inconsistent with the data's type, acceptable values, failure value, etc.?
5.8. Can any data object definition be inconsistent with the data's type, acceptable values, failure value, etc.?
5.9. Does each functional requirement identify all input/output data?
5.10.
Are all values written to each output data item consistent with its intended function?
5.11. Is there at least one function that uses each output data item?
6. Testability (IVE.TA.T1.S10) Verified
6.1. Are the software requirements testable against objective validation criteria?
6.2. Are the acceptance validation criteria for validating the software requirements objective and quantified?
6.3. Are the software requirements objective and unambiguous?
6.4. Are the software requirements simple?
7. Timing and sizing budgets (IVE.TA.T1.S8) Verified
7.1. Are the software requirements for timing and sizing budgets correct?
7.2. Are the software requirements for timing and sizing budgets specified with the accuracy required by the applicable upper-level system performance requirements?
7.3. Are the criteria for validating sizing and timing budget requirements objective and quantitative?
8. Dependability and safety (IVE.TA.T1.S6) Verified
8.1. Are the software requirements related to safety, security and criticality defined based on suitably rigorous methods?
8.2. Do the requirements properly define Fault Detection, Isolation and Recovery (FDIR) mechanisms, such as memory error handling, the overall philosophy related to software faults, and software crash/exception handling mechanisms (including failure modes and error handling, e.g. the ability to isolate and minimize the effects of errors, including contingency operations or software crash/exception handling mechanisms)? (See also questionnaire nr. 11 below.)
8.3. Are all FDIR mechanisms independent of the software failures and faults they are supposed to deal with?
8.4. Are all the system requirements related to safety and dependability completely represented in both the software requirements and the interface specifications?
8.5. Are all the system requirements related to safety and dependability correctly represented in both the software requirements and the interface specifications?
8.6.
Are the maintenance requirements defined, including an explanation of the basis for their definition?
8.7. The system and software must start in a safe and stable state. In case interlocks are used, are they initialized or checked to be operational at system start-up, including start-up after temporarily overriding interlocks?
8.8. Do the specifications detail how interlock failures halt the hazardous functions?
8.9. Is the behaviour of the software explicitly defined with respect to inputs received before start-up, after shutdown, or when the computer is temporarily disconnected from the process (off-line)? Or is the fact that this information can be safely ignored explicitly documented?
8.10. Is the maximum time the computer waits before the first input specified? Note: After program start-up there should be some finite limit on how long the program waits for an input before it tries various alternative strategies, such as raising an alert or shifting to another operating mode that does not use that input. For some types of systems this rule may not be applicable.
8.11. Are the mechanisms and steps to leave fail-safe (partial or shut-down) states specified? Note: The elapsed time in a safe but reduced-function state should be minimised.
8.12. Are all defined state machines deterministic (i.e. only one possible transition out of a state is applicable at any time)?
8.13. Are all specified states reachable from the initial state?
8.14. Is the internal software model of the system updated to reflect the actual system context at initial start-up and after temporary shutdown? Note: An important consideration when developing software for system control is that the system might continue to change state even when the computer is not executing.
9. Clarity and readability (supporting IVE.TA.T1.S7) Verified
9.1. Is an overview of the system context provided?
9.2. Are useless details avoided in requirements models and their descriptions?
9.3. Is a model provided for each of the main requirements?
9.4. Are all the requirements presented using the same layout and a similar level of detail?
9.5. Does the software requirements documentation have a clear and consistent structure?
9.6.
Is the documentation intelligible to the target readers, and are all the elements required for its understanding provided (i.e. acronyms, terms, conventions used, etc.)?
10. Applicable standards (IVE.TA.T1.S11) Verified
10.1. Are the software requirements compliant with applicable standards, references, regulations, policies and business rules?
10.2. Do software requirements and modelled physical phenomena comply with applicable physical laws?
10.3. When a specific method is used to represent and support the software requirements analysis, are these methods or techniques completely followed?
11. Checklist for the FDIR requirements, mechanisms and implementation of HW controls and I/F (complementing questionnaire nr. 8 above) Verified
Checklist for the FDIR requirements
11.1. Are the fault detection mechanisms correctly described? Note: This question will assess whether any FDIR requirements or strategy is described within the requirements document.
11.2. Is there any reference to alive-signal monitoring mechanisms? Note: This question will assess whether a watchdog mechanism has been considered to monitor periodic tasks. This mechanism significantly improves fault detection.
11.3. Is there any reference to acknowledge mechanisms? Note: This question will assess whether acknowledge mechanisms were considered in order to increase the confidence in the system. This mechanism also significantly improves fault detection.
11.4. Is it possible to have any external intervention in FDIR mechanisms? Note: This question will assess whether the FDIR system allows changing parameter values, accepting new recovery actions, etc. This information shall be verified in the general FDIR requirements. This will allow the ground control to update, erase or add recovery procedures, update boundary values on which some detection mechanisms are based, etc.
11.5. Are there references to redundancy mechanisms?
Note: This question will assess whether mechanisms like switching to redundant equipment (which avoids single points of failure) or health status monitoring are described within the FDIR requirements. These mechanisms significantly improve fault tolerance.
11.6. Is it possible to enter "Safe Mode" at any time and in any situation, or to perform any other procedure in order to ensure that the system remains in a consistent state? Note: This question will assess whether the satellite can be brought into "Safe Mode" or is able to abort all the required nominal operations whenever fault recovery procedures are unsuccessful, independently of the system state.
11.7. Is there any reference to an error handling strategy (hierarchical, local, etc.)? Note: This question will assess whether a clear FDIR strategy/architecture has been defined. Issues like the anomaly management level or the anomaly reporting strategy between different hierarchical FDIR modules shall be addressed.
11.8. Is there any access control strategy regarding shared resources? Note: This question will assess whether access control issues have been considered in the FDIR requirements. Issues like the access strategy (e.g. Priority Inheritance) or
assigning priorities to different processes shall be addressed.
Checklist for the FDIR requirements, mechanisms and implementation of HW controls and I/F
11.9. Are the inputs/outputs verified and validated according to the expected format? Note: This question will assess whether all the received data is checked for correctness, i.e. whether the data has the expected format (e.g. Telemetry and Telecommand packets).
11.10. Is there any parameter monitoring (e.g. health monitors, timeout monitors, etc.)? Note: This question will assess whether fundamental parameters are being monitored. By monitoring different parameters, fault detection increases its efficiency and completeness.
11.11. Is there any reference to error detection/correction mechanisms within the data? Note: This question will assess whether issues such as transmission errors were considered and, in that case, what strategy was implemented in order to avoid/prevent this type of issue (e.g. parity bits, CRC, retransmission, etc.).
11.12. Is there any reference to time codes or time signatures? Note: This question will assess whether any mechanism for signing data as it is gathered (e.g. time stamps) was considered.
11.13. Are fault detection reports available? Note: This question will assess whether a report is generated whenever a fault is detected.
11.14. When available, do the fault detection reports clearly identify the detected fault? Note: This question is only applicable when the response to the previous question is affirmative. The question will assess whether the report generated by the fault detection clearly identifies the problem.
11.15. Are there isolation mechanisms to prevent fault propagation? Note: This question will assess whether there are mechanisms to prevent a failure from propagating to other functions or SW modules (e.g. after detecting an incorrectly formatted TC, it shall be discarded and not passed to any other function for processing).
11.16.
Are there any recovery actions planned? Note: This question will assess whether there is a recovery action specified for the detected faults.
11.17. Are recovery action reports available? Note: This question will assess whether a report is generated whenever a recovery action is completed. This report shall, at least, identify the recovery action and its result (successful, unsuccessful).
G.2. Architectural Design Review Checklist
1. Interfaces (To support IVE.DA.T1.S2, S3) Verified
1.1. Is every single interface exposed by a component required by at least one other component?
1.2. Is every single interface required by a component provided by some other component?
1.3. Is the description of the data format, timing characteristics, performance, accuracy, etc. consistent between the IF supplier and the IF user, taking into consideration both data and control flows?
1.4. Is the architectural design of the software components correct with respect to the applicable interface control documents?
1.5. Is the layout used for interface descriptions consistent throughout the architectural design document?
1.6. Is the level of detail of the interface descriptions the same for all the interfaces?
1.7. Is the description of each interface sufficient and clear?
1.8. Is the number of interfaces reduced to the minimum necessary to expose the component functionality?
1.9. Is the number of interface parameters minimised (i.e. the minimum data passed at each interface)?
1.10. For embedded software, does the architecture implement the specified hardware and register control?
1.11. Does the architecture completely and consistently cover the HW/SW and SW/SW interface protocol definitions?
1.12. Does the architecture define mechanisms to prevent/correct HW/SW and SW/SW interface errors (including communication errors between subsystems) and error handling mechanisms?
1.13. Does the architecture completely and consistently define all HW/SW and SW/SW interface (communications) recovery?
2.
Correctness & consistency (IVE.DA.T1.S1, S4) Verified
2.1. Are all software requirements traceable to all software components (in text or defined in models)?
2.2. Are all software components (in text or defined in models) allocated to a software requirement?
2.3. Does each component of the static architecture (packages, classes or modules) and the dynamic architecture (software active objects such as threads, tasks and processes) correctly implement the requirements it is traced to, i.e. not implementing more functionalities than the ones allocated?
2.4. Is each functionality of the software requirement implemented only once, including when allocated to more than one software architectural design component?
2.5. Is the traceability to the upper-level documents, namely the technical specification, provided and expressed in a uniform manner?
2.6. Does the architecture promote separation between logic and data?
2.7. For real-time software, are the computational models correct and consistent?
2.8. Has the architecture been adequately decomposed (i.e. complexity and modularity) in accordance with quality requirements?
2.9. Does the design implement the proper sequence of events, inputs, outputs, interfaces, logic flow, allocation of timing and sizing budgets, and error handling?
2.10. Does the architecture properly implement the specifications about start-up/restart handling, such as system initialization, recovery after power off, reconfiguration activities, etc.?
2.11. Does the architecture properly implement the specifications about entry into alarm/safe/failure mode?
2.12. For embedded software, does the architecture properly implement the specifications about event/interrupt handling?
2.13. For embedded software, does the architecture implement the specified actions for memory (RAM/EEPROM) checks and error handling?
2.14. For embedded software, does the architecture implement the specified software patch/dump procedures?
3. Completeness (To support IVE.DA.T1.S5) Verified
3.1. Does the software architectural description include: the hierarchy, dependencies and interfaces of software components; the process, data and control aspects of the software components; and the static and dynamic architecture of the software and the mapping between them?
3.2. For real-time software, is there a computational model as part of the software architectural design?
4. Dependability and safety (To support IVE.DA.T1.S6) Verified
4.1. Does the architecture minimise the number of critical components on the basis of suitably rigorous methods?
4.2.
Does the architecture properly implement Fault Detection, Isolation and Recovery (FDIR) mechanisms, such as memory error handling, the overall philosophy related to software faults, and software crash/exception handling mechanisms? (See also questionnaire nr. 8 below.)
4.3. Does the architecture avoid unnecessary redundancy and complexity?
4.4. Does the architecture implement the maintenance requirements?
4.5. Are hazardous design constructs avoided?
4.6. Are the implemented FDIR mechanisms independent of the faults they are supposed to deal with?
4.7. Does the architectural design correctly implement safety, security and criticality requirements based on suitably rigorous methods?
4.8. Is it ensured that the software does not contribute to system hazardous events, by analysing software failure modes and their possible propagation to system level?
4.9. Is the integration of the FDIR mechanisms properly designed, with particular attention to the management of irregular fault events in synchronous or cyclic processes?
4.10. The system and software must start in a safe and stable state. In case interlocks are used, are they initialized or checked to be operational at system start-up, including start-up after temporarily overriding interlocks?
4.11. Does the architectural design detail how interlock failures halt the hazardous functions?
4.12. Are all system and local variables initialized upon start-up, including clocks?
4.13. Is the behaviour of the software explicitly defined with respect to inputs received before start-up, after shutdown, or when the computer is temporarily disconnected from the process (off-line)? Or is the fact that this information can be safely ignored explicitly documented?
4.14. Is the maximum time the computer waits before the first input defined?
Note: After program start-up there should be some finite limit on how long the program waits for an input before it tries various alternative strategies, such as raising an alert or shifting to another operating mode that does not use that input. For some types of systems this rule may not be applicable.
4.15. Are the mechanisms and steps to leave fail-safe (partial or shut-down) states defined? Note: The elapsed time in a safe but reduced-function state should be minimised.
4.16. Are all defined state machines deterministic (i.e. only one possible transition out of a state is applicable at any time)?
4.17. Are all specified states reachable from the initial state?
4.18. Is the internal software model of the system updated to reflect the actual system context at initial start-up and after temporary shutdown? Note: An important consideration when developing software for system control is that the system might continue to change state even when the computer is not executing.
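Items 4.16 and 4.17 can be verified mechanically when the state machines are available in a machine-readable form. The sketch below is an invented illustration (state and guard names are assumptions): it flags states with two transitions on the same guard (non-determinism) and states unreachable from the initial state.

```python
# Hypothetical state machine: state -> list of (guard, next_state).
machine = {
    "IDLE":    [("start_cmd", "RUN")],
    "RUN":     [("fault", "SAFE"), ("stop_cmd", "IDLE"),
                ("fault", "IDLE")],       # 'fault' enables two transitions
    "SAFE":    [("reset_cmd", "IDLE")],
    "STANDBY": [],                        # never entered from IDLE
}

# Item 4.16: determinism -- no state may have two transitions
# triggered by the same guard.
nondeterministic = {
    state for state, outs in machine.items()
    if len({guard for guard, _ in outs}) != len(outs)
}

# Item 4.17: reachability from the initial state (breadth-first search).
frontier, reached = ["IDLE"], {"IDLE"}
while frontier:
    successors = [target for _, target in machine[frontier.pop()]]
    frontier.extend(s for s in successors if s not in reached)
    reached.update(successors)
unreachable = set(machine) - reached

print(nondeterministic)  # {'RUN'}
print(unreachable)       # {'STANDBY'}
```

Guards in real designs are predicates rather than labels, so the determinism check above is necessary but not sufficient (two syntactically different guards can still be simultaneously true); it nevertheless catches the most common modelling error.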
5. Clarity and readability (To support IVE.DA.T1.S7) Verified
5.1. Is an overview of the system context provided?
5.2. Are useless details avoided in architectural diagrams and their descriptions?
5.3. Is a model provided for each of the main architectural components?
5.4. Are all the components presented using the same layout and a similar level of detail?
5.5. Does the architectural design documentation have a clear and consistent structure?
5.6. Is the documentation intelligible to the target readers, and are all the elements required for its understanding provided (i.e. acronyms, terms, conventions used, etc.)?
6. Timing and sizing budgets (IVE.DA.T1.S8) Verified
6.1. Does the software architectural design correctly implement timing and sizing budgets?
6.2. For real-time software, is the schedulability analysis (including WCET) of the software architectural design as accurate as required by the software requirements?
7. Applicable standards (IVE.DA.T1.S10) Verified
7.1. Is the software architectural design compliant with applicable standards, references, regulations, policies and business rules?
7.2. Does the software architectural design comply with applicable physical laws?
7.3. When a specific method is used to represent and support the software architectural design, are these methods or techniques completely followed?
8. FDIR detailed questions (complementing questionnaire nr. 4 above) Verified
1. General Questions
Fault Detection (FD)
8.1. Is there any task monitoring mechanism described in the design documents? Note: This question will assess whether any mechanism that verifies the proper behaviour of periodic tasks is described in the design documents. The mechanism shall make it possible to verify the aliveness of specific periodic tasks, to identify and clearly report anomalies, and to generate appropriate events in order to trigger any applicable recovery actions.
8.2. Are there any acknowledge mechanisms correctly described in the design report?
Note: This question will assess whether acknowledge mechanisms were considered in order to improve the confidence in the system. This mechanism also significantly improves fault detection and the Ground's awareness of the system state.
8.3. Is the FDIR reporting strategy correctly and completely described in the design documents? Note: This question will assess whether it is clearly described which services and sub-services will be used by the FDIR tasks and whether that information is described with a sufficient level of detail. This verification must ensure that it is clear which services and sub-services are used and how they are used within the FDIR reporting strategy.
8.4. Is there any parameter monitoring mechanism described in the design documents? Note: This question will assess whether any mechanism that monitors parameter values is described in the design documents. The mechanism shall allow the monitoring of parameters on Ground request, as well as the definition of parameter limit values and the associated triggering events.
Isolation (I)
8.5. Is the shared resources strategy correctly described in the design documents? Note: This question will assess whether access control methods have been implemented in the design phase. Issues like the access strategy (e.g. Priority Inheritance) or assigning priorities to different processes shall be addressed.
8.6. Are the FDIR mechanisms independent from the faults they are handling? Note: This question will assess whether the FDIR mechanism used to handle a particular fault does not depend directly on the fault itself (e.g. a fault was detected in the space antenna and the failure report is being sent as a TM to the ground).
Recovery (R)
8.7. Are the recovery actions well described in the design documents? Note: This question will assess the onboard software's autonomous transitions into safe states. The recovery actions may or may not be a ground responsibility.
In both cases, if any autonomous transition related to a recovery action occurs, it must be described in the design documents.
8.8. Is the FDIR operations sequence clearly described in the design documents? Note: This question will assess whether the design documents clearly describe the steps to be taken from fault detection up until system recovery.
2. Detailed questions
Fault Detection (FD)
8.9. Are all the faults identified in the requirements documents mapped onto the design SW units? Note: This question will assess the SW design's completeness regarding fault detection. For this task, a traceability matrix shall be used in order to expedite the process.
8.10. Are the error detection/correction mechanisms correctly described in the design documents? Note: This question will assess whether data transmission error detection mechanisms (e.g. data errors, communication errors, retransmission, etc.) were described.
8.11. Are the time codes/signatures referenced throughout the design documents? Note: This question will assess whether any mechanism for signing data (e.g. time stamps) was considered. This kind of mechanism usually has limited applicability (e.g. packet exchanges), but it is valuable in case of communication difficulties.
8.12. Are the critical parameters and their boundaries being monitored? Note: This question will ensure that all important parameters for each object are referenced at design level. If applicable, hard-coded parameters must be referenced in the design documents.
Isolation (I)
8.13. Are isolation mechanisms (to prevent fault propagation) correctly described? Note: This question will assess whether mechanisms to prevent a failure from propagating to other functions or SW modules were considered.
Recovery (R)
8.14. Are the associations between recovery schemes and fault categories identified in the software design documents? Note: This question will assess whether the fault categories identified within the design documents have associated recovery schemes. This item shall be verified together with checklist items 2.4 and 2.5.
8.15. Are the actions that should occur when the system is unable to recover from a fault correctly described? Note: This question will assess whether an alternative action in case of an unsuccessful recovery action (e.g. system reboot, enter safe mode, enable redundant hardware, etc.) is described.
G.3.
Detailed Design Review Checklist
1. Interfaces (To support IVE.DA.T2.S2, S4) Verified
1.1. Is every single interface exposed by a unit required by at least one other unit?
1.2. Is every single interface required by a unit provided by some other unit?
1.3. Is the description of the data format, timing characteristics, performance, accuracy, etc. consistent between the IF supplier and the IF user, taking into consideration both data and control flows? (Modified in order to support IVE.DA.T4.S1)
1.4. Is the layout used for interface descriptions consistent throughout the detailed design document?
1.5. Is the level of detail of the interface descriptions the same for all the interfaces?
1.6. Is the description of each interface sufficient and clear?
1.7. Is the number of public interfaces reduced to the minimum necessary to implement the required functionality?
1.8. Is the number of function parameters minimised (i.e. the minimum data passed)?
1.9. Does the detailed design avoid the use of global data (e.g. all class data is declared private or at least protected)?
1.10. For embedded software, does the detailed design implement the specified hardware and register control?
1.11. Does the detailed design completely and consistently cover the HW/SW and SW/SW interface protocol definitions?
1.12. Does the detailed design define mechanisms to prevent/correct HW/SW and SW/SW interface errors (including communication errors between subsystems) and error handling mechanisms?
1.13. Does the detailed design completely and consistently define all HW/SW and SW/SW interface (communications) recovery?
2. Correctness & completeness (IVE.DA.T2.S1, S3, S5) Verified
2.1. Are all software requirements allocated to a software component (or component model if produced) traceable to its software units (or model elements)?
2.2.
Is the functionality described in the requirements correctly implemented by the corresponding software unit (or model element)? 2.3. Is each software unit (or model element) implementing only the functionalities described in the requirements allocated to it? 2.4. Is the criticality level of the software correct? 2.5. Do the detailed design units correctly and completely implement the architectural components they are traced to? 2.6. Do the static architecture (packages, classes or modules) and dynamic architecture (software active objects such as
threads, tasks and processes) described in the detailed design units correctly implement the requirements they are traced to? 2.7. Is each software requirement functionality implemented completely and only once, even when it may be traced to more than one software unit (or model element)? 2.8. Is the relationship between the software units (or model elements) and the software requirements specified in a uniform manner? 2.9. Is the detailed design of the software units correct with respect to the applicable interface control documents? 2.10. Are the detailed design's complexity and modularity in accordance with quality requirements? 2.11. Does the design implement the proper sequence of events, inputs, outputs, interfaces, logic flow, allocation of timing and sizing budgets, and error handling? 2.12. Is the design compatible with the target platform (i.e. are platform-dependent issues compatible with the target hardware)? 2.13. Does the detailed design description include (according to [ECSS-E40B:2003]): decomposition of the software into software units, update of the software item internal interfaces design, and the physical model of the software items described during the software architectural design? 2.14. In case of real-time software, is the computational model provided as part of the software design? 2.15. For real-time software, is the developer's schedulability analysis available? 2.16. For real-time software, is the detailed design consistent with the computational model defined in the software architectural design? 2.17. Does each component have high internal cohesion and low external coupling? 2.18. Does the detailed design reference all the applicable programming standards? 2.19. Are the development tools listed and the development environment described? 2.20. Is the traceability to the upper-level documents, namely the architectural design, provided? 2.21. Does the detailed design promote a clear separation between logic and data? 2.22.
Were the timing and sizing budgets of the software correctly addressed and expanded into further detail? 2.23. Are all the function parameters described? Is it specified whether they are IN, OUT or IN-OUT? 2.24. Are the function return values (and exceptions that can be raised, if any) described? 2.25. Does the design implement the proper sequence of events, inputs, outputs, interfaces, logic flow, allocation of timing and sizing budgets, and error handling? 2.26. Does the detailed design properly implement the start-up/restart handling, such as system initialization, recovery after power off, reconfiguration activities, etc.? 2.27. Does the detailed design properly implement the entry into alarm/safe/failure mode? 2.28. For embedded software, does the detailed design properly implement the event/interrupt handling? 2.29. For embedded software, does the detailed design implement the actions for memory (RAM/EEPROM) checks and error handling? 2.30. For embedded software, does the detailed design implement the software patch/dump procedures? 3. Database Verified 3.1. When the design includes databases, are the physical limitations of the database (maximum number of records, maximum record length, largest/smallest numeric value, and maximum array length in a data structure) properly analysed? 3.2. When the design includes databases, has the use of multiple indexes been analysed against the volume of stored data to determine whether the proposed approach meets the requirements for data retrieval performance and size constraints? 3.3. When the design includes databases, has the use of data structures within a record (such as arrays, tables and date formats) been reviewed for potential impact on requirements for data storage and retrieval? 3.4. When the design includes databases, have the methods employed for backup been reviewed against the requirements for data recovery and system disaster recovery? 3.5.
When a distributed architecture is used, is the proposed distribution of data and processes feasible? 3.6. When a distributed architecture is used, has the possibility of timing conflicts because of the proposed distribution of data and processes been analysed? 3.7. When a distributed architecture is used, has the system degradation due to the proposed distribution of data and processes been analysed?
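Item 2.22 above asks whether the software's timing and sizing budgets have been expanded into further detail. A minimal sketch, in Python, of the kind of roll-up check an ISVV reviewer might script when comparing unit-level figures against an architectural allocation (the unit names and all budget figures here are hypothetical; real projects would typically use dedicated timing-analysis tooling):

```python
# Hypothetical roll-up of unit-level timing budgets against the budget
# allocated to their parent architectural component (all figures in ms).
UNIT_WCET_MS = {
    "tm_encode": 1.2,    # worst-case execution time claimed in detailed design
    "tm_queue": 0.4,
    "tm_downlink": 2.1,
}

COMPONENT_BUDGET_MS = 5.0  # budget allocated at architectural design level


def budget_margin(unit_wcets, component_budget):
    """Return (total, margin); a negative margin flags a budget violation."""
    total = sum(unit_wcets.values())
    return total, component_budget - total


total, margin = budget_margin(UNIT_WCET_MS, COMPONENT_BUDGET_MS)
assert margin >= 0, f"timing budget exceeded by {-margin:.2f} ms"
```

The same shape of check applies to sizing budgets (image size, stack sizes, buffer capacities) summed against the allocated memory map.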
4. Dependability and safety (IVE.DA.T2.S7) Verified 4.1. Does the detailed design minimise the number of critical units? 4.2. Does the detailed design properly implement Fault Detection, Isolation and Recovery (FDIR) mechanisms, such as memory error handling, the overall philosophy related to software faults, and software crash/exception handling mechanisms? (see also the detailed questionnaire in checklist nr. 8 below) 4.3. Does the detailed design avoid unnecessary redundancy and complexity? 4.4. Does the detailed design adequately address maintainability issues? 4.5. Are hazardous events controlled by design constructs? 4.6. Are the implemented FDIR mechanisms independent of the faults they are supposed to deal with? 4.7. Are inputs verified at least at interface level? 4.8. Does the detailed design include consistency checking? 4.9. Are there proper handlers for all the errors that can be returned and exceptions that can be raised? 4.10. Is the design correctly implementing safety, security, and criticality requirements based on suitably rigorous methods? 4.11. Is it ensured that the software is not contributing to system hazardous events, by analysing software failure modes and their possible propagation to system level? 4.12. Does the software correctly handle hardware faults? Is the implemented software logic harming the hardware in any way? 4.13. The system and software must start in a safe and stable state. In case interlocks are used, are they initialized or checked to be operational at system start-up, including start-up after temporarily overriding interlocks? 4.14. Does the detailed design detail how interlock failures halt the hazardous functions? 4.15. Are all system and local variables initialized upon start-up, including clocks? 4.16. Is the behaviour of the software explicitly defined with respect to inputs received before start-up, after shutdown, or when the computer is temporarily disconnected from the process (off-line)?
Or is the fact that this information can be safely ignored explicitly documented? 4.17. Is the maximum time the computer waits before the first input specified? Note: After program start-up there should be some finite limit on how long the program waits for an input before it tries various alternative strategies, such as raising an alert or shifting to another operating mode that does not use that input. For some types of systems this rule may not be applicable. 4.18. Are the mechanisms and steps to leave fail-safe (partial or shut-down) states detailed? Note: The elapsed time in a safe but reduced-function state should be minimised. 4.19. Are all defined state machines deterministic (i.e. only one possible transition out of a state is applicable at any time)? 4.20. Are all specified states reachable from the initial state? 4.21. Is the internal software model of the system updated to reflect the actual system context at initial start-up and after temporary shutdown? Note: An important consideration when developing software for system control is that the system might continue to change state even when the computer is not executing. 5. Clarity and readability (IVE.DA.T2.S8) Verified 5.1. Is an overview of the design component provided prior to describing each design unit it comprises? 5.2. Are useless details avoided in the description of each design unit? 5.3. Are all the units presented using the same layout and a similar level of detail? 5.4. Does the detailed design documentation have a clear and consistent structure? 5.5. Is the documentation intelligible for the target readers and are all the required elements for its understanding provided (i.e. acronyms, terms, conventions used, etc.)? 6. Timing and sizing budgets (IVE.DA.T2.S9) Verified 6.1. Is the detailed design correctly implementing timing and sizing budgets? 6.2.
For real-time software, is the schedulability analysis (including WCET) of the detailed design as accurate as required by the software requirements and architectural design? 7. Applicable standards (IVE.DA.T2.S12) Verified 7.1. Is the detailed design compliant with applicable standards, references, regulations, policies and business rules? 7.2. Does the detailed design comply with applicable physical laws?
7.3. When a specific method is used to represent and support the detailed design, are these methods or techniques completely followed? 8. FDIR detailed questions (complementing questionnaire nr. 4 above) Verified 1. General Questions Fault Detection (FD) 8.1. Is there any task monitoring mechanism described in the design documents? Note: This question will assess whether any mechanism that verifies the proper behaviour of periodic tasks is described in the design documents. The mechanism shall make it possible to verify the aliveness of specific periodic tasks, to identify and clearly report anomalies, and to generate appropriate events in order to trigger any applicable recovery actions. 8.2. Are acknowledgement mechanisms correctly described in the design report? Note: This question will assess whether acknowledgement mechanisms were considered in order to improve the confidence in the system. Such mechanisms also significantly improve fault detection and the Ground's awareness of the system state. 8.3. Is the FDIR reporting strategy correctly and completely described in the design documents? Note: This question will assess whether it is clearly described which services and sub-services will be used by the FDIR tasks and whether that information is described with a sufficient level of detail. This verification must ensure that it is clear which services and sub-services are used and how they are used within the FDIR reporting strategy. 8.4. Is there any parameter monitoring mechanism described in the design documents? Note: This question will assess whether any mechanism that monitors parameter values is described in the design documents. The mechanism shall allow the monitoring of parameters on Ground request, as well as the definition of parameter limit values and the associated triggering events. Isolation (I) 8.5. Is the shared resources strategy correctly described in the design documents?
Note: This question will assess whether control methods have been addressed in the design phase. Issues like the access strategy (e.g. priority inheritance) or the assignment of priorities to different processes shall be addressed. 8.6. Are the FDIR mechanisms independent from the faults they are handling? Note: This question will assess whether the FDIR mechanism used to handle a particular fault depends directly on the fault itself (e.g. a fault was detected in the space antenna and the failure report is being sent as a TM to the ground). Recovery (R) 8.7. Are the recovery actions well described in the design documents? Note: This question will assess the onboard software's autonomous transitions into safe states. Recovery actions may or may not be a ground responsibility; in either case, any autonomous transition related to a recovery action must be described in the design documents. 8.8. Is the FDIR operations sequence clearly described in the design documents? Note: This question will assess whether the design documents clearly describe the steps to be taken from the fault detection up until the system recovery. 2. Detailed questions Fault Detection (FD) 8.9. Are all the faults identified in the requirements documents mapped to the design SW units? Note: This question will assess the SW design completeness regarding fault detection. For this task, a traceability matrix shall be used in order to expedite the process. 8.10. Are the error detection/correction mechanisms correctly described in the design documents? Note: This question will assess whether data transmission error detection mechanisms (e.g. data errors, communication errors, retransmission, etc.) were described. 8.11. Are the time codes/signatures referenced throughout the design documents? Note: This question will assess whether any mechanism for signing data (e.g. time stamps) was considered. This kind of mechanism usually has limited applicability (e.g.
packet exchanges), but it is valuable in case of communication difficulties. 8.12. Are the critical parameters and their boundaries being monitored? Note: This question will ensure that all important parameters for each object are referred to at design level. If applicable, hard-coded parameters must be referred to in the design documents. Isolation (I) 8.13. Are the isolation mechanisms (to prevent fault propagation) correctly described? Note: This question will assess whether mechanisms to prevent a failure from propagating to other functions or SW modules were considered. Recovery (R) 8.14. Are the associations between recovery schemes and fault categories identified in the software design documents? Note: This question will assess whether the fault categories identified within the design documents have associated recovery schemes. This item shall be verified together with checklist items 2.4 and 2.5. 8.15. Are the actions that should occur when the system is unable to recover from a fault correctly described? Note: This question will assess whether an alternative action in case of an unsuccessful recovery action (e.g. system reboot, enter safe mode, enable redundant hardware, etc.) is described.
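Items 8.4 and 8.12 above concern parameter monitoring against defined limit values with associated triggering events. The Python sketch below (all parameter names and limit figures are hypothetical) illustrates the shape of such a mechanism: each monitored parameter carries its boundaries, and an out-of-limit sample produces an event record to which an FDIR recovery action could be hooked.

```python
# Hypothetical parameter monitor: each parameter has limit values, and an
# out-of-limit sample generates an event for the FDIR layer to act on.
MONITORED = {
    "battery_voltage": (22.0, 34.0),  # (low limit, high limit), volts
    "cpu_load": (0.0, 0.85),          # fraction of capacity
}


def check_sample(name, value, limits=MONITORED):
    """Return None if the sample is within limits, else an FDIR event record."""
    low, high = limits[name]
    if low <= value <= high:
        return None
    return {"event": "LIMIT_VIOLATION", "param": name,
            "value": value, "limits": (low, high)}


events = [e for e in (check_sample("battery_voltage", 21.5),
                      check_sample("cpu_load", 0.40)) if e]
# One violation expected: battery_voltage is below its low limit.
assert len(events) == 1 and events[0]["param"] == "battery_voltage"
```

An ISVV reviewer would check that the design defines, for every critical parameter, exactly this information: the limits, the sampling point, and the event raised on violation.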
G.4. Software User Manual Review Checklist 1. Software User Manual Verified 1.1. Does the SUM contain the correct timing and sizing information? 1.2. Does the SUM contain correct and complete information on how the software's contribution to system hazardous events is documented? 1.3. Does the SUM contain correct and complete information about features for Fault Detection, Isolation and Recovery (FDIR), in accordance with the technical specification (e.g. how the software deals with the faults it is supposed to handle)? 1.4. Does the SUM contain correct and complete information about the handling of hardware faults? 1.5. Does the SUM explain how it is ensured that the implemented software logic does not harm the hardware in any way? 1.6. Does the user manual have a clear and consistent structure? 1.7. Is the user manual intelligible for the target software users and are all the required elements for its understanding provided (i.e. acronyms, terms, conventions used, etc.)? 1.8. Does the user manual describe all the functionalities implemented by the software? Is all the necessary information for performing the required operations provided? 1.9. Is the information provided in the user manual consistent with the software implementation, i.e. does the software behave as described? G.5. Code Inspection Checklist 2. Interfaces consistency (IVE.CA.T1.S2, S4) Verified 2.1. Are all inputs (outputs) of one software unit produced (consumed) by some other unit? 2.2. For embedded software, does the code implement the specified hardware and register control? 2.3. Does the code completely and consistently cover the HW/SW and SW/SW interface protocol definitions, e.g. from the applicable Interface Control Documents? 2.4. Does the code define mechanisms to prevent/correct HW/SW and SW/SW interface errors (including communication errors between subsystems) and error handling mechanisms? 2.5.
Does the code completely and consistently define all HW/SW and SW/SW interface (communications) recovery? 2.6. Does the code implement the interfaces with other software units consistently and completely with respect to the applicable Interface Control Documents? 2.7. Does the code implement the interfaces with the user consistently and completely with respect to the applicable Interface Control Documents and any applicable human interface standards? 2.8. Are interfaces coded in a uniform way? 2.9. Are the interfaces providing all required information for the calling/using unit? 3. Correctness & completeness (IVE.CA.T1.S1, S3) Verified 3.1. Do the software units (source code) correctly and completely implement the internal interfaces described in the software architectural and detailed design? 3.2. Does the code completely and correctly implement both the design's static architecture (e.g. software decomposition into software elements such as packages, and classes or modules) and its dynamic architecture (e.g. specification of the software active objects such as threads/tasks and processes)? 3.3. Are all software requirements traceable to a software unit (source code), and is the functionality described in each requirement implemented by the corresponding source code unit? 3.4. Are all software units allocated to software requirements, such that they do not implement more functionalities than the ones described in the requirements allocated to them? 3.5. Is each software requirement functionality implemented completely and only once, even when it may be traced to more than one software unit? 3.6. Are the software units (source code) elements specified in a uniform manner (in terms of level of detail and format) as specified by the software requirements? 4. Structural correctness (IVE.CA.T1.S5) Verified 4.1. Does the code completely and correctly implement both the design's static architecture (e.g.
software decomposition into software elements such as packages, and classes or modules) and its dynamic architecture (e.g. specification of the software active objects such as threads/tasks and processes)?
4.2. Are there any leftover stubs or test routines in the code? 4.3. Can any code be replaced by calls to external reusable components or library functions? 4.4. Are there any blocks of repeated code that could be condensed into a single procedure? 4.5. Are symbolic constants used rather than magic number constants or string constants? 4.6. Are any modules excessively complex, such that they should be restructured or split into multiple routines? Does the code's complexity and modularity conform to quality requirements? 4.7. Are all variables properly defined with meaningful, consistent, and clear names? 4.8. Do all assigned variables have proper type consistency or casting? 4.9. Are there any redundant or unused variables? 4.10. Does the code avoid comparing floating-point numbers for equality? 4.11. Does the code systematically prevent rounding errors? 4.12. Does the code avoid additions and subtractions on numbers with greatly different magnitudes? 4.13. Are divisors tested for zero or noise? 4.14. Are all loops, branches, and logic constructs complete, correct, and properly nested? 4.15. Are the most common cases tested first in IF-ELSEIF chains? 4.16. Are all cases covered in an IF-ELSEIF or CASE block, including the ELSE or DEFAULT clauses? 4.17. Does every case statement have a default? 4.18. Are loop termination conditions obvious and invariably achievable? 4.19. Are indexes or subscripts properly initialized, just prior to the loop? 4.20. Can any statements that are enclosed within loops be placed outside the loops? 4.21. Does the code in the loop avoid manipulating the index variable or using it upon exit from the loop? 4.22. Are indexes, pointers, and subscripts tested against array, record, or file bounds? 4.23. Are imported data and input arguments tested for validity and completeness? 4.24. Are all output variables assigned? 4.25. Are the correct data operated on in each statement? 4.26. Is every memory allocation deallocated? 4.27.
Are timeouts or error traps used for external device accesses? 4.28. Is data systematically checked for initialisation before it is accessed for reading? 4.29. Have actions been taken to avoid overflows/underflows? 4.30. Are dangerous type conversions avoided? 4.31. Have measures been taken to avoid pointers being illegally de-referenced? 4.32. Does the code systematically prevent access to null pointers? 4.33. Does the code systematically prevent deadlocks? 4.34. Does the code systematically prevent memory access violations? 4.35. When using object-oriented programming, have actions been taken to avoid errors related to inheritance? 4.36. Does the software correctly implement exception handling? 4.37. Does the software correctly prevent access conflicts on shared data? 4.38. Are files checked for existence before attempting to access them? 4.39. Are all files and devices left in the correct state upon program termination? 4.40. Is the code implementing the proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation of timing and sizing budgets, and error handling? 4.41. Is the code properly using the programming language features, any library calls, any system calls, etc.? 5. Clarity and readability (IVE.CA.T1.S6) Verified 5.1. Do all source code files adhere to the same coding style? Are the applicable coding conventions, if any, followed? 5.2. Does every single source file have a descriptive header? Is the file history recorded there? 5.3. Is a description provided for every single subprogram? 5.4. Does the code conform to any pertinent coding standards (e.g. Ada RAVEN, MISRA C, etc.)? 5.5. Is the code well-structured, consistent in style, and consistently formatted? Can the internal consistency between software units be ensured?
5.6. Is the source code written in a clear way and properly documented? 5.7. Is the code clearly and adequately documented with an easy-to-maintain commenting style? 5.8. Are all comments consistent with the code? 6. Dependability and safety (IVE.CA.T1.S7) Verified 6.1. Is the code correctly implementing safety, security, and criticality requirements based on suitably rigorous methods? 6.2. Does the code minimise the number of critical software units without introducing undesirable software complexity (e.g. are critical software units sharing resources with non-critical software units, thus increasing the latter's criticality)? 6.3. Have software failure modes and their possible propagation to system level been analysed? 6.4. Is it ensured that the code is controlling any possible software contribution to system hazardous events, by analysing software failure modes and their possible propagation to system level? 6.5. Are there proper control mechanisms in place for any new critical part of the software identified in the design? 6.6. Have features for Fault Detection, Isolation and Recovery (FDIR), such as memory error handling, the overall philosophy related to software faults, software crash/exception handling mechanisms, defensive programming techniques, a safe-set coding standard, alive-signal monitoring and acknowledgement mechanisms, been implemented in accordance with the technical specification? 6.7. Are the FDIR mechanisms independent of the faults they are supposed to deal with? 6.8. Is it possible to have any external intervention in FDIR mechanisms? Note: This question will assess whether the FDIR system allows parameter values to be changed, accepts new recovery actions, etc. This information shall be verified against the general FDIR requirements. This will allow the ground control to update, erase or add recovery procedures, update the boundary values on which some detection mechanisms are based, etc. 6.9. Does the software correctly handle hardware faults?
Does the software logic harm the hardware in any way? 6.10. Is the code properly implementing defensive programming techniques, such as verification of inputs and consistency checking? 6.11. Does the code report all relevant events using the appropriate channels? 6.12. Is proper numerical protection in place at source level: no division by zero, no logarithm of zero, no tan(PI/2 + n*PI), etc.? 6.13. Are restricted code constructs (e.g. those subject to known code generator or compiler limitations or defects) avoided or, if used, have they been verified (via configuration of the code generator, proven absence of defects via static analyser tools, or otherwise)? 6.14. Is the code maintainable? 6.15. Is the code free from any hazardous programming language construct or library function? Issues to consider are: non-determinism, dynamic constructions, non-portable code, implicit/explicit recursion, and inefficient programming language constructs or library function calls. 6.16. Is it ensured that no deadlocks or race conditions exist in concurrent systems? 6.17. Are there any uncalled or unneeded procedures, or any unreachable or dead code? If deactivated code exists, has it been ensured that its activation will not lead to a hazardous condition? 6.18. Does the code properly implement the start-up/restart handling, such as system initialization, recovery after power off, reconfiguration activities, etc.? 6.19. Is there any access control strategy regarding shared resources? 6.20. Does the code properly implement the entry into alarm/safe/failure mode? 6.21. For embedded software, does the code properly implement the event/interrupt handling? 6.22. For embedded software, does the code implement the actions for memory (RAM/EEPROM) checks and error handling? 6.23. For embedded software, does the code implement the software patch/dump procedures? 6.24. The system and software must start in a safe and stable state.
In case interlocks are used, are they initialized or checked to be operational at system start-up, including start-up after temporarily overriding interlocks? 6.25. Does the code implement the halting of hazardous functions upon interlock failures? 6.26. Are all system and local variables initialized upon start-up, including clocks? 6.27. Does the software implement a defined behaviour with respect to inputs received before start-up, after shutdown, or when the computer is temporarily disconnected from the process (off-line)? Or is the fact that this information can be safely ignored explicitly documented? 6.28. Is the maximum time the computer waits before the first input correctly implemented? Note: After program start-up there should be some finite limit on how long the program waits for an input before it tries various alternative strategies, such as raising an alert or shifting to another operating mode that does not use that input. For some types of systems this rule may not be applicable.
6.29. Are the mechanisms and steps to leave fail-safe (partial or shut-down) states correctly implemented? Note: The elapsed time in a safe but reduced-function state should be minimised. 6.30. Are all defined state machines deterministic (i.e. only one possible transition out of a state is applicable at any time)? 6.31. Are all specified states reachable from the initial state? 6.32. Is the internal software model of the system updated to reflect the actual system context at initial start-up and after temporary shutdown? Note: An important consideration when developing software for system control is that the system might continue to change state even when the computer is not executing. 7. Accuracy (IVE.CA.T1.S8) Verified 7.1. Is the computational precision correctly implemented in the source code (e.g. rounding vs. truncation, single vs. double precision, etc.)? 7.2. Are the granularity and detail of the error information sufficient, and does it trigger the necessary corrective actions? 7.3. Are the units of the parameter values correct (e.g. meters, inches, volts, etc.)? 7.4. Are the units of the parameter values defined as required by the external interfaces (e.g. meters, inches, volts, etc.)? 8. Timing and sizing budgets (IVE.CA.T1.S10) Verified 8.1. Is the code correctly implementing timing and sizing budgets? 8.2. For real-time software, is the schedulability analysis of the code based on the WCET and as accurate as required by the software requirements and architectural design? 8.3. Is the sizing budget of the software correctly considering the executable image size, stack sizes, buffers, and any other data structure at its maximum capacity? G.6. Unit and Integration Test Review Checklist 1. Test Plan Completeness and correctness (ISVV level 2 only) Verified 1.1. Does the test plan clearly identify the items to be tested (e.g. requirements, specific components, etc.)? 1.2.
Does the test plan identify the software features to be tested? 1.3. Does the test plan identify the software features not to be tested? 1.4. Is the reason for not testing a particular feature provided, and is it acceptable and sufficient? 1.5. Is the testing approach described? 1.6. Are the pass/fail criteria for each test item defined and correct (for integration tests and unit tests)? 1.7. Is the test plan conformant with the project testing strategy with respect to the types of tests to be performed (e.g. functional, boundary, performance, usability, etc.) and the test coverage goals (such as call graph and parameter passing)? 1.8. Were all the test deliverables defined? 1.9. Are the testing tasks specified? 1.10. Are the necessary testing tools and environment described? 1.11. Are the testing coverage goals identified and in conformance with the criticality of the project? 1.12. Are the required staffing and training needs specified and appropriate for the testing tasks that are to be performed? 1.13. Are the test milestones identified in the project schedule referred to? 1.14. Were the potential risks and respective contingencies identified and described? 1.15. Is the test plan in conformance with the project plan? 2. Test procedure and data completeness and correctness (ISVV level 2 only) Verified 2.1. Are the integration test procedures and data in accordance with the integration test plan, namely with respect to: the types of tests to be performed (e.g. functional, robustness, performance, usability, etc.); test coverage goals (such as call graph and parameter passing)? 2.2. Are the unit test procedures and data in accordance with the unit test plan, namely with respect to: the types of tests to be performed (e.g. functional, robustness, performance, usability, etc.); test coverage goals (such as statement, decision and branch condition coverage)?
2.3. Does every single test contain all the necessary information to test the addressed component (for unit tests and integration tests)? 2.4. Are the integration test procedures and data traceable to the software architectural design and its elements? 2.5. Are the unit test procedures and data traceable to the detailed design and its elements? 2.6. Are the test results evaluated against reference data when these are available? G.7. Validation Checklist 1. Checklist for Performing Analysis for ISVV level 1 Verified 1.1. Are events happening too early/late handled correctly? 1.2. Are FDIR requirements correctly implemented? 1.3. Are injections outside or at the specified boundaries handled correctly? 1.4. Are failure conditions properly tested? 1.5. Is the response of the system to excessive load properly tested? 1.6. Are potential runtime errors and overflows/underflows taken care of? 1.7. Are there any dataflow conflicts to investigate? 1.8. Are worst-case situations handled correctly? 1.9. Are the test results conformant with the expected results? 1.10. Are exceptions properly tested? 1.11. Are the interactions with external SW analysed, focusing on degraded functionality of the external SW products? 1.12. Has fault injection been performed in order to validate the correct behaviour of fault tolerance mechanisms? 1.13. Is the Software User Manual properly detailing operator failures? G.8. Model conformance with applicable standards 1. Checklist for Model conformance with applicable standards Verified 1.1. Ensure that only model elements (nodes) from the set of allowed nodes are used in the model. Rationale: We can expect that any modelling tool and environment will come with a large amount of predefined blocks/nodes packed in libraries. In order to ensure testability of the final product, each project should define its own set of validated/qualified nodes and only use these.
In this context, the justification/qualification of these blocks could be reviewed, too.
1.2. Ensure that pitfalls, intrinsic weaknesses and known problems of the chosen modelling language and tool suite are avoided.
1.3. Ensure that nodes, variables and constants are given names that are meaningful and related to the terminology of the application domain.
1.4. Ensure that naming conventions for the modelling environment are obeyed, e.g. model element names vs. underlying file names.
1.5. Ensure that presentation standards are followed, e.g. colouring, fonts, effects, etc. for the graphic representation of the model.
1.6. Ensure that architectural standards are followed, e.g. complexity, granularity, modularisation, coupling, coherence, etc.
1.7. Ensure that the limits set by the Model Quality Metrics (if applicable) are respected in accordance with the criticality level of the software.
1.8. Ensure that a suitable hierarchical functional decomposition is applied to limit the complexity at each level of the model.
1.9. Ensure that all model elements adhere to the same modelling style and to the applicable modelling conventions, if any.
1.10. Ensure that automatically generated documentation conforms to the applicable document requirements, in particular with respect to completeness, because developers might be tempted to deliver only what can be produced automatically.
1.11. Ensure that a sufficient level of comments is present in the model.
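Several of the G.8 checks (e.g. 1.1 on the qualified-node set and 1.3/1.4 on naming conventions) lend themselves to automation. The following Python sketch illustrates the idea on a hypothetical, tool-neutral model representation; the node types, the allowed set and the naming rule are illustrative assumptions, not prescriptions of this guide:

```python
import re

# Hypothetical, tool-neutral model representation: each node has a type
# and a name. A real modelling tool would supply this via its scripting API.
model_nodes = [
    {"type": "Saturation", "name": "thruster_cmd_limiter"},
    {"type": "Integrator", "name": "attitude_error_integral"},
    {"type": "CustomBlock", "name": "x1"},  # violates both checks below
]

# Check 1.1: only node types from the project-defined set of
# validated/qualified nodes may be used (set below is illustrative).
ALLOWED_NODE_TYPES = {"Saturation", "Integrator", "Gain", "Sum"}

# Checks 1.3/1.4: names must be meaningful (minimum length) and follow
# the project naming convention (here assumed to be lower_snake_case).
NAME_CONVENTION = re.compile(r"^[a-z][a-z0-9_]{3,}$")

def check_model(nodes):
    """Return a list of human-readable findings for the ISVV report."""
    findings = []
    for node in nodes:
        if node["type"] not in ALLOWED_NODE_TYPES:
            findings.append(f"{node['name']}: node type '{node['type']}' "
                            "is not in the qualified-node set (check 1.1)")
        if not NAME_CONVENTION.match(node["name"]):
            findings.append(f"{node['name']}: name violates the naming "
                            "convention (checks 1.3/1.4)")
    return findings

for finding in check_model(model_nodes):
    print(finding)
```

Such a script is only a filter: findings still require manual review, since meaningfulness of a name (check 1.3) cannot be decided by a regular expression alone.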
Annex H. Software Validation Facility
The software validation facility must be able to support all identified test cases. The final construction of the validation facility depends on the actual project. This is especially the case when the facility must provide project-specific reactions to stimuli from the environment:
- The generic software validation facility can provide general facilities for monitoring and control of the software under test, and
- The software validation facility must allow for extensions that represent the project-specific aspects.
However, it is possible to state a number of general requirements:
- The software validation facility shall enable execution of the OBS under test in a flight-representative environment ("validation in context").
- The software validation facility shall enable execution of the flight image of the OBS under test.
- The software validation facility shall support error injection at all levels, e.g.:
  - memory corruption
  - data bus failure
  - erroneous data from subsystems
  - communication failure (e.g. response too early/late, missing/duplicated packets)
- The software validation facility shall support event-triggered actions. The events shall include:
  - the OBS reading or writing specific memory locations
  - time-based events
- The software validation facility shall support a white-box test mode:
  - inspection of the internal software state
  - control of the execution of the processor
- The software validation facility shall support a black-box test mode, controlling the input to the OBS under test and monitoring the output.
- The software validation facility shall use a central clock representing the target processor time (Simulated Real Time) as its global time reference, and relate all time events to it.
- The software validation facility shall provide a test script language to support regression testing of the OBS through batch test execution.
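The requirements above on event-triggered actions and on the central Simulated Real Time clock can be illustrated with a minimal sketch. All class, address and event names below are hypothetical; a real facility would couple such triggers to the processor emulator and expose them through its test script language:

```python
# Minimal sketch of two Annex H requirements: event-triggered actions
# (time-based events and memory-write watchpoints) driven by a central
# Simulated Real Time (SRT) clock. Illustrative only.

class ValidationFacility:
    def __init__(self):
        self.simulated_time = 0.0     # central SRT clock
        self.time_triggers = []       # list of (fire_time, action)
        self.write_watchpoints = {}   # address -> action

    def at_time(self, fire_time, action):
        """Register a time-based event, e.g. inject a bus failure at t."""
        self.time_triggers.append((fire_time, action))

    def on_memory_write(self, address, action):
        """Register an action fired when the OBS writes this address."""
        self.write_watchpoints[address] = action

    def advance_to(self, new_time):
        """Advance the SRT clock, firing all due time-triggered actions."""
        due = sorted(t for t in self.time_triggers if t[0] <= new_time)
        for fire_time, action in due:
            self.simulated_time = fire_time   # all events relate to SRT
            action()
        self.time_triggers = [t for t in self.time_triggers
                              if t[0] > new_time]
        self.simulated_time = new_time

    def memory_write(self, address, value):
        """Called by the emulated OBS; triggers any watchpoint action."""
        if address in self.write_watchpoints:
            self.write_watchpoints[address]()

log = []
facility = ValidationFacility()
facility.at_time(5.0, lambda: log.append("inject: data bus failure"))
facility.on_memory_write(0x4000_1000,
                         lambda: log.append("watch: mode register written"))

facility.advance_to(10.0)                  # fires the t = 5.0 injection
facility.memory_write(0x4000_1000, 0x1)    # fires the watchpoint
print(log)
```

In a batch regression run, the registration calls would come from test scripts, so the same fault-injection scenario can be replayed unchanged against each new OBS build.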
Annex I. References
[AUDSLEY:1991] N. C. Audsley, A. Burns, M. F. Richardson and A. J. Wellings, Hard Real-Time Scheduling: the Deadline-Monotonic Approach, IEEE Workshop on Real-Time Operating Systems, 1991.
[BS 7799-2:2002] BS 7799 Part 2, Specification for information security management systems, 2002.
[DETECT:1995] Comparing Detection Methods For Software Requirements Inspections, IEEE Transactions on Software Engineering, June 1995.
[ECSS-E-40B:2003] ECSS, ECSS-E-40 Part 1B, Space engineering: Software Part 1 - Principles and requirements, 28 November 2003.
[ECSS-P-001B:2004] ECSS, ECSS-P-001B, Glossary of terms, 14 July 2004.
[ECSS-Q-80B:2003] ECSS, ECSS-Q-80B, Space product assurance: Software product assurance, 10 October 2003.
[HRTOSK:1991] Hard Real-Time Operating System Kernel: Overview and Selection of Hard Real-Time Scheduling Model, British Aerospace and University of York, ESTEC Contract HRTOSK, Task 1 Report, 1991.
[IEC 61508-1:1998] IEC, IEC 61508: Functional safety of electrical/electronic/programmable electronic safety-related systems, Part 1: General requirements, First Edition, 1998.
[IEEE 1012:1998] IEEE, IEEE Standard 1012: IEEE Standard for Software Verification and Validation, 1998.
[IEEE 1074:1997] IEEE, IEEE Standard 1074: IEEE Standard for Developing Software Life Cycle Processes, 1997.
[IEEE 1219:1998] IEEE, IEEE Standard 1219: IEEE Standard for Software Maintenance, 21 October 1998.
[INSPEC:1976] M. E. Fagan, Design and Code Inspections to Reduce Errors in Program Development, IBM Systems Journal, Vol. 15, No. 3, 1976.
[ISO 9000:2000] ISO, ISO 9000: Quality management systems - Fundamentals and vocabulary, 2000.
[NASA IV&V] NASA, Software Independent Verification and Validation (IV&V)/Independent Assessment (IA) Criteria, http://ivvcriteria.ivv.nasa.gov.
[OCL] OMG, Object Constraint Language, V2.0, OMG document 06-05-01.
[PASCON WO12-TN2.1:2000] RAMS related static methods, techniques and procedures concerning software, Issue 1.0, 2 May 2000.
[SYSML] OMG, Systems Modelling Language (OMG SysML), V1.0.