Comparison on the Usability between SQA and HCD with Quality-In-Use Perspective

Seojeong Lee, Division of Marine Information Technology, Korea Maritime and Ocean University, Busan, South Korea. E-mail: sjlee@kmou.ac.kr
Jieun Jung, Department of Computer Engineering, Graduate School of Korea Maritime & Ocean University, Busan, South Korea. E-mail: zieun.j@gmail.com

Abstract

e-navigation is a strategy of the International Maritime Organization (IMO) that aims to reduce marine accidents caused by human error and to protect the marine environment. According to the Strategy Implementation Plan for e-navigation, guidelines for e-navigation software quality assurance, human-centred design, and usability testing have been developed. The guidelines for these 3 fields have been integrated into 1 guideline, but a more specific integration is required to practically apply guidelines that have different subjects and scopes. This paper analyzes usability, a common feature addressed in the 3 guidelines, and compares how usability is measured in terms of software quality assurance and human-centred design.

Keywords: e-navigation, Software Quality Assurance, Human-Centred Design, Usability, Quality-in-use

Introduction

As with other industries, the marine sector has been pursuing safe navigation using various IT devices. However, limited communication environments and high costs have constrained the development of services based on information exchange between sea and land. The IMO is a UN-affiliated organization that defines regulations and standards on navigation. e-navigation, a strategy of the IMO since 2005, is a solution to the aforementioned problem. e-navigation is the harmonized collection, integration, exchange, presentation and analysis of marine information on board and ashore by electronic means to enhance berth to berth navigation and related services for safety and security at sea and protection of the marine environment [1].
e-navigation provides quick information exchange and communication between ships, and between ship and shore stations, on the basis of marine information standards and improved marine communication environments. Furthermore, it makes it possible to standardize the navigation system on the basis of electronic navigation charts, reducing the workload of navigators and thus the number of marine accidents caused by human error. To implement e-navigation, the IMO has approved the development of the Software Quality Assurance (SQA), Human-Centred Design (HCD), and Usability Testing (UT) guidelines. Korea, Australia, and Japan have led the development of the SQA, HCD, and UT guidelines, respectively. The 2015 IMO NCSR conference decided to integrate those guidelines, and the final integrated product was registered as a circular of the IMO at the 95th session of the Maritime Safety Committee [2]. The SQA guideline applies to e-navigation software, while the HCD guideline applies to e-navigation software as well as hardware and the entire equipment layout. As the integrated guideline has been registered as a circular, it will be applied in industry, but nothing has been decided about which parts of the guidelines, having different subjects and scopes, will be integrated from which perspective. In addition, a guideline is a declarative document, so further effort is required for its detailed practical application [3]. This paper aims to assist the practical application of the integrated guideline by analyzing the common features of SQA and HCD and comparing their evaluation methods for a more specific integration.

Related Research

e-navigation SQA

Software Quality Assurance (SQA) aims to understand the usability and quality attributes; to check the progress of software development over the software life cycle; and to ensure fewer defects and better performance in the final product. The ISO/IEC 25000 series addresses work related to SQA.
The ISO/IEC 25000 series is intended to create a framework to evaluate the quality of software products. It includes a quality model, quality metrics, quality requirements, and quality evaluation. The e-navigation SQA guideline has been developed under task T11 (Draft Guidelines for Software Quality Assurance in e-navigation) for solution S3 (improved reliability, resilience and integrity of bridge equipment and navigation information) from among the 5 solutions and 18 tasks defined in the e-navigation Strategy Implementation Plan [4].

e-navigation HCD

Human-Centred Design (HCD) refers to system design and development focusing on making systems more useful by
applying human factors and human engineering knowledge and techniques. ISO 9241 addresses HCD. ISO 9241 defines the usability of the software and hardware of computing equipment in ergonomic terms. Specifically, ISO 9241-11 addresses the principles and activities of HCD. The e-navigation HCD guideline has been developed under task T1 (Development of draft Guidelines on Human-Centred Design for e-navigation systems) for solution S1 (improved, harmonized and user-friendly bridge design) from among the 5 solutions and 18 tasks defined in the e-navigation Strategy Implementation Plan [4].

e-navigation UT

Usability Testing (UT) is an evaluation method and technology that seeks to improve the usability of a system. It is related to ISO 16982, which addresses methods and techniques for evaluating usability. The e-navigation UT guideline has been developed under task T2 (Development of draft Guidelines on Usability Testing, Evaluation and Assessment (UTEA) of e-navigation systems) for solution S1 (improved, harmonized and user-friendly bridge design) from among the 5 solutions and 18 tasks defined in the e-navigation Strategy Implementation Plan [4].

Comparative Points

Subjects

Related research shows that the SQA, HCD, and UT guidelines commonly address usability [5]. Usability is one of the 8 main characteristics of the product quality model among the software quality models. Usability is also the ultimate goal of adopting HCD in developing a product. UT is an evaluation method and technology to improve usability. Figure 1 shows that usability can be achieved by applying SQA and HCD together to the design and development of a product.

Figure 1: Links between the SQA, HCD and UT guidelines [6]

Details

The usability of both SQA and HCD is defined as the extent to which a user can achieve certain goals with effectiveness, efficiency, and satisfaction in using a given product [8]. While SQA and HCD share the same definition of usability, they have different methods and subjects to measure it.
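The shared definition above treats usability as a triple of effectiveness, efficiency, and satisfaction for a given user goal. As a minimal illustration, this could be modeled as a simple record; the class and field names here are my own and are not taken from ISO 9241-11 or ISO/IEC 25022.

```python
from dataclasses import dataclass

# Illustrative sketch only: usability per the shared ISO 9241-11
# definition, expressed as the triple (effectiveness, efficiency,
# satisfaction) for one user goal. Names are hypothetical.
@dataclass
class UsabilityMeasure:
    goal: str             # the user goal being assessed
    effectiveness: float  # e.g. proportion of goals achieved (0..1)
    efficiency: float     # e.g. objectives achieved per unit time
    satisfaction: float   # e.g. normalized questionnaire score (0..1)

m = UsabilityMeasure(goal="plan route", effectiveness=0.9,
                     efficiency=0.5, satisfaction=0.8)
```

The point of the sketch is only that both guidelines measure the same three components; where they differ, as the following comparison shows, is in how each component is quantified.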
The usability of HCD is measured by effectiveness, efficiency, and satisfaction with respect to its purpose, regardless of subject. There is no separate international standard that defines an exact formula for measuring it, but Annex B of ISO 9241-11 presents examples. How to measure the usability of SQA software products only is presented in ISO/IEC 25023. How to measure the effectiveness, efficiency, and satisfaction of software is presented in ISO/IEC 25022. As SQA and HCD possess different methods and subjects for measuring usability, they must be compared so that they can be applied in a harmonized manner.

Result of Comparison

Annex B of ISO 9241-11, which presents how to measure the usability of HCD, addresses how to measure effectiveness, efficiency, and satisfaction for each goal, but only measuring items are provided, without any formula [8]. ISO/IEC 25022 provides formulas that yield numerical values for measuring effectiveness, efficiency, and satisfaction in terms of the quality in use of SQA [9]. ISO 9241-11 presents examples for measuring both specific and overall usability for HCD, whereas ISO/IEC 25022 presents only general measurements for overall usability for SQA. This paper compared the effectiveness, efficiency, and satisfaction measures of HCD and SQA, for both specific and overall usability, as stated in Annex B of ISO 9241-11 and in ISO/IEC 25022.

Table 1: Comparison of Effectiveness measures in SQA/HCD

HCD Measures (Based on ISO 9241-11):
- Percentage of goals achieved
- Percentage of users successfully completing task
- Average accuracy of completed tasks

SQA Measures (Based on ISO 25022):
- Objectives achieved: X = 1 - ΣAi (X > 0), where Ai is the proportional value of each missing or incorrect component in the task output
- Tasks completed: X = A/B, where A = number of tasks with errors and B = total number of tasks

Table 1 shows the comparison of effectiveness measures for HCD and SQA. HCD and SQA possess the same method of measuring effectiveness.
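The two SQA-side effectiveness formulas compared in Table 1 can be sketched in a few lines of Python. This is an illustration under my own function names; the measures themselves are defined in ISO/IEC 25022, not by this code.

```python
# Illustrative sketch of the two ISO/IEC 25022-style effectiveness
# measures in Table 1. Function names are hypothetical.

def objectives_achieved(error_proportions):
    """X = 1 - sum(Ai), clamped at 0, where each Ai is the
    proportional value of a missing or incorrect component
    in the task output."""
    return max(0.0, 1.0 - sum(error_proportions))

def tasks_completed(tasks_with_errors, total_tasks):
    """X = A / B, with A = number of tasks with errors and
    B = total number of tasks, as listed in Table 1."""
    return tasks_with_errors / total_tasks
```

For example, a task output missing components weighted 0.1 and 0.05 yields an objectives-achieved score of about 0.85, a numerical value that the purely itemized HCD measures in the left column of Table 1 do not provide.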
Table 2: Comparison of Efficiency measures in SQA/HCD

HCD Measures (Based on ISO 9241-11):
- Time to complete a task
- Monetary costs of performing the task

SQA Measures (Based on ISO 25022):
- Task time: X = T, where T = task time
- Cost effectiveness: X = A/B, where A = objectives achieved and B = total cost of carrying out the task

Table 2 shows the comparison of efficiency measures for overall usability for HCD and SQA. Both HCD and SQA adopt the same method to measure efficiency.
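The cost-based efficiency formula on the SQA side of Table 2, and the closely related time-based variant that appears later in Table 5, can be sketched as follows. Function names are my own; only the X = A/B and X = A/T formulas come from the tables.

```python
# Illustrative sketch of ISO/IEC 25022-style efficiency measures.
# Names are hypothetical; the formulas follow Tables 2 and 5.

def cost_effectiveness(objectives, total_cost):
    """X = A / B: objectives achieved per unit of monetary cost
    of carrying out the task (Table 2, SQA column)."""
    return objectives / total_cost

def time_efficiency(objectives, task_time):
    """X = A / T: objectives achieved per unit of task time
    (the time-based counterpart, listed in Table 5)."""
    return objectives / task_time
```

Both are simple ratios, which is why the paper finds that HCD's "time to complete a task" and "monetary costs" items map directly onto SQA's general formulas.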
Table 3: Comparison of Satisfaction measures in SQA/HCD

HCD Measures (Based on ISO 9241-11):
- Frequency of discretionary use
- Frequency of complaints

SQA Measures (Based on ISO 25022):
- Discretionary use: X = A/B, where A = number of users using a specific function, application or system and B = number of potential users who could have used the specific function, application, or system

Table 3 shows the comparison of satisfaction measures for overall usability for HCD and SQA. SQA provides a formula to quantify what cannot otherwise be quantified with the item called discretionary use provided in terms of HCD. For 'frequency of complaints' in terms of HCD, no matching item is provided in SQA. For the satisfaction calculation in terms of SQA, a method that surveys satisfaction with respect to reliability, pleasure and physical comfort and quantifies the response rate is used.

Table 4: Comparison of Effectiveness measures in SQA/HCD

HCD Measures (Based on ISO 9241-11):
- Number of power tasks performed
- Percentage of tasks completed successfully on first attempt
- Percentage of tasks completed successfully after a specified period of non-use
- Number of references to documentation
- Number of calls to support
- Number of accesses to help
- Number of functions learned
- Percentage of users who manage to learn to criterion
- Percentage of errors corrected or reported by the system
- Percentage of words read correctly at normal viewing distance

SQA Measures (Based on ISO 25022):
- 'This issue is not relevant because the software quality must be consistent even though it is intermittently used.'
- Tasks completed: X = A/B, where A = number of tasks with errors and B = total number of tasks

Table 4 shows the comparison of effectiveness measures for specific usability for HCD and SQA. In Annex B of ISO 9241-11, the effectiveness of a specific purpose can be measured mainly using the formula, provided in terms of SQA, that calculates the completeness of a task as a percentage.
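The discretionary-use formula on the SQA side of Table 3 above is the one ratio that quantifies an HCD satisfaction item directly. A minimal sketch, with a hypothetical function name of my own:

```python
# Illustrative sketch of the discretionary-use satisfaction measure
# from Table 3: X = A / B. The name is hypothetical.

def discretionary_use(actual_users, potential_users):
    """A = number of users using a specific function, application
    or system; B = number of potential users who could have used
    that function, application, or system."""
    return actual_users / potential_users
```

For instance, if 30 of 40 potential users voluntarily use a function, the measure is 0.75; HCD's other overall satisfaction item, frequency of complaints, has no such formula on the SQA side.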
For intermittent use, the quality of software should be maintained constantly even if the software is used only occasionally, so measuring this item was considered meaningless. For the other items, the measurement methods in terms of HCD are intended for specific usability purposes, so there is no item that matches the general measurement methods of SQA.

Table 5: Comparison of Efficiency measures in SQA/HCD

HCD Measures (Based on ISO 9241-11):
- Relative efficiency compared with an expert user
- Time taken on first attempt
- Relative efficiency on first attempt
- Time spent relearning functions
- Number of persistent errors
- Time to learn to criterion
- Time to re-learn to criterion
- Relative efficiency while learning
- Time spent on correcting errors
- Time to correctly read a specified number of characters

SQA Measures (Based on ISO 25022):
- Time efficiency: X = A/T, where A = objectives achieved and T = time
- Productive time: X = Ta/Tb, where Ta = productive time = time taken to complete the task - time spent getting help or assistance - time taken recovering from errors - time taken searching ineffectually, and Tb = task time
- Task time: X = T, where T = task time

Table 5 shows the comparison of efficiency measures for specific usability for HCD and SQA. In Annex B of ISO 9241-11, the efficiency of a specific purpose can be measured using the formula, among the general formulas provided in terms of SQA, that calculates time efficiency from the time taken to complete a task
successfully, or from productive time. For the other items, the measurement methods in terms of HCD are intended for specific usability purposes, so there is no item that matches the general measurement methods of SQA.

Table 6: Comparison of Satisfaction measures in SQA/HCD

HCD Measures (Based on ISO 9241-11):
- Rating scale for satisfaction with power features
- Rate of voluntary use
- Frequency of use
- Rating scale for satisfaction with support facilities
- Rating scale for ease of learning
- Rating scale for error handling
- Rating scale for visual comfort

SQA Measures (Based on ISO 25022):
- (no matching general measures)

Table 6 shows the comparison of satisfaction measures for specific usability for HCD and SQA. The targets whose satisfaction can be evaluated for a specific purpose in terms of HCD are items such as 'rate of voluntary use' and 'frequency of use' in Table 6. Such items cannot be generalized beyond their specific usability purposes, so they do not match the general formulas provided in terms of SQA. In addition, for the user's subjective evaluation, an evaluation method that surveys the relevant items and quantifies the response rate can be used, as provided in terms of SQA.

Conclusion

This paper set out to understand the common characteristics and usability of both SQA and HCD and to compare their measurements in order to practically integrate and apply the e-navigation SQA and HCD guidelines in industry. HCD and SQA have different application scopes and subjects, so the measurements of effectiveness, efficiency, and satisfaction do not fully match between HCD in Annex B of ISO 9241-11 and SQA in ISO/IEC 25022. Considering the goals, however, the effectiveness, efficiency, and satisfaction of overall usability could be measured using the formulas in ISO/IEC 25022.
This means that the usability of e-navigation equipment software can be measured using the effectiveness, efficiency, and satisfaction measures for general software provided in the existing ISO/IEC 25022.

Acknowledgement

The contents of this paper are the results of the research project of the Ministry of Oceans and Fisheries of Korea (A fundamental research on maritime accident prevention - phase 2).

References
[1] IMO, International Maritime Organization, Report of the Maritime Safety Committee on its Eighty-fifth Session, Annex 20: Strategy for the development and implementation of e-navigation, MSC 85/26/Add.1, 2008.
[2] IMO, International Maritime Organization, Guideline on Software Quality Assurance and Human-Centred Design for e-navigation, MSC.1/Circ.1512, 2015.
[3] S. Lee, e-navigation SQA and HCD guideline trends, Telecommunications Technology Association, Vol. 159, pp. 36-41, 2015.
[4] IMO, International Maritime Organization, Report of the Sub-Committee on Navigation, Communications and Search and Rescue (NCSR) on its first session, Annex 7: e-navigation Strategy Implementation Plan, NCSR 1/28, pp. 102-142, 2014.
[5] J. Jung, S. Lee, Understanding of Common Characteristics of SQA/HCD Based on International Standards, Advanced Science and Technology Letters, Vol. 117, pp. 39-42, 2015.
[6] IMO, International Maritime Organization, Development of an e-navigation strategy implementation plan, draft Guidelines related to the e-navigation strategy, submitted by Norway, NCSR 1/9/1, 2014.
[7] ISO/IEC 25010, International Organization for Standardization, Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models, 2011.
[8] ISO 9241-11, International Organization for Standardization, Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability, 1998.
[9] ISO/IEC DIS 25022, International Organization for Standardization, Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - Measurement of quality in use, 2015.