Software process and product improvement: an empirical assessment




Information and Software Technology 42 (2000) 27-34
www.elsevier.nl/locate/infsof

Software process and product improvement: an empirical assessment

J.P. Kuilboer, N. Ashrafi*

University of Massachusetts, Management Science/Information Systems, 100 Morrissey Blvd., Boston, MA 02125, USA

Received 16 March 1999; received in revised form 7 May 1999; accepted 3 June 1999

Abstract

Despite all the attention that software process improvement (SPI) practices have received, there is no solid evidence of how extensively they are used across organizations, or of their impact on quality, cost, and on-time delivery. The findings of previous studies are based on case studies, often assessing the effectiveness of a particular methodology in a large company. To obtain a broader insight into software process improvement practices, we conducted a survey targeted at software developers in New England. We collected 67 responses and used descriptive statistics to analyze the survey results. In addition, we examined the impact of SPI methodologies on quality factors and compared that impact with the importance of the quality factors to software developers. The Spearman correlation coefficient was used to determine the degree of correlation between the two. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Software process improvement; CMM; ISO-9000; Quality factors; Empirical assessment

1. Introduction

The software development crisis has been a topic of discussion for over a decade. People in the software industry have been looking for a silver bullet to solve the problems of project cancellation, cost overruns, and schedule delays. At the threshold of the millennium, as software applications grow in size, complexity, and criticality, the search for a solution has become even more imperative. Over the years, tools and techniques such as CASE tools, rapid application development (RAD), information engineering, and many more have been tried.
And yet, new products continue to fail to meet their functional, technical, and reliability objectives, and often come in over budget and behind schedule [1,2]. Faced with these challenges, and giving up on quick-fix solutions, the software development community has been looking for a comprehensive system that provides a roadmap to achieving improvements. Software process improvement (SPI) practices have, unquestionably, been the flavor of the 1990s [3]. SPI practices are oriented toward total process improvement rather than the final product. The assumption is that well-defined and clearly documented processes will eventually result in quality products. The International Organization for Standardization's ISO 9000-3 standard for quality management systems, the Software Engineering Institute's Capability Maturity Model (CMM), and the Malcolm Baldrige National Quality Award (MBNQA) are three process-oriented methodologies that provide guidelines for software process improvement. These practices do not tell software developers how to analyze, design, implement, test, or document software. Instead, they furnish a set of standards that provide a basis for evaluating the process of software development and eventually contribute to continuous process improvement. The concept of SPI as an improvement driver is best described in a 1992 ISO report, which states that product quality is highly dependent on the processes used in its creation, and that the way toward product quality is to have available, and to utilize, a proven, consistent, reliable method of software process assessment, and to use the results in a continuous improvement program [4]. Originally, NASA used SPI methodologies for developing advanced military avionics applications.

* Corresponding author. Tel.: 1-617-287-7880; fax: 1-617-287-7877.
Today, Microsoft, Raytheon, General Electric, and IBM are but a few of the companies that advertise the successful use of SPI to develop software for the external market or internal use. Mark Paulk, head of the Software Engineering Institute at Carnegie Mellon, reports that "the increasing ability of mature software organizations to deliver high-quality software products on budget and schedule shows [that the software crisis is dead]", at least for that part of the software community that has adopted a systematic approach to software process improvement.

Table 1
Percentage of responses indicating the type of SPI methodology used in their organization (12% of organizations use both ISO and CMM)

  ISO    CMM    MBNQA   Other   None
  28%    21%    0%      22%     40%

Despite the publicity and success stories of CMM and ISO 9000, they have not been without critics. Smaller companies find them difficult to implement and are hesitant to adopt them [4]. In addition, the impact of process improvement methodologies in achieving software quality and productivity is not clear. According to Fenton [5], anecdotal evidence of significantly improved quality and productivity is not backed up by hard evidence, and where real empirical evidence does exist, the results run counter to the view of the so-called experts. Capers Jones [6] notes that ISO 9000 certification and the SEI capability maturity levels do not yet have a strong empirical correlation with software quality results. He adds that, although the ISO standards are aimed at quality, they have not yet created any significant results within the software industry. Whether these practices are the solution to the software crisis, as some claim, or just another trend that will fade away as new theories and practices come along, as others assert, is not certain. There is no accurate estimate of their use in the software development community [7], and debates about their impact on software quality and on-time delivery continue. The basic question of whether a systematic approach to software process improvement is the answer to the software crisis remains unanswered.

2. Survey

In our effort to shed some light on these questions, we conducted a survey targeted at software developers in New England. We contacted four professional organizations in New England: the Software Process Improvement Network (SPIN), the Association for Computing Machinery (ACM) Boston Chapter, the American Society for Quality (ASQ) of Massachusetts, and the New England Software Quality Assurance Forum (NESQAF), and obtained permission to attend one of their meetings.
Data gathering was based on a semi-structured interview: we handed out questionnaires in person and answered questions as they were raised. Naturally, those members who were not part of a development team declined to answer. Sixty-seven responses were collected and analyzed to obtain a preliminary insight into software process improvement practices. The questionnaire consisted of three parts. In the first part, we asked the respondents to identify which methodology they were using and for how long. The choices were: ISO, CMM, MBNQA, and/or any other. As the type of organization was not consequential to our study, we did not ask for that information; rather, we wanted to obtain a profile of the software products being developed using a systematic approach to software process improvement. Thus, we asked questions regarding the type and the average size of software projects. The type of software was determined by indicating whether the software was developed for in-house use, on contract, for general purposes, or as shrinkwrap. The size of software was measured by the size of the development team and the time between the start and end (the first revenue shipment) of the software development project. In the second part of the survey, we asked questions pertaining to the impact of SPI methodologies on cost, schedule, and quality. While cost and scheduling are quantitative in nature, and thus easier to estimate, quality is qualitative, elusive in nature, and difficult to measure. Traditionally, software quality has been defined in two ways:

1. Fitness for use, as measured by customer satisfaction.
2. Quality of the product, as measured by the totality of features and characteristics that bear on its ability to satisfy given needs; for example, conformance to specifications [IEEE 1983].

To obtain information on the former, we asked what the impact of SPI was on overall customer satisfaction.
And, to obtain a comprehensible answer to the question regarding the impact of SPI methodologies on the quality of products, we used the taxonomy of software quality factors recognized by software quality experts [8]. This taxonomy divides the software quality components into three dimensions: quality of design, quality of performance, and quality of adaptation. It then identifies the quality factors for each dimension. Correctness, maintainability, and verifiability of a software product contribute to a good design. Efficiency, integrity, reliability, usability, and testability are measures of high-quality performance. Finally, expandability, flexibility, portability, reusability, inter-operability, and intra-operability are quality factors for adaptation. For the definitions of these factors, refer to The Handbook of Software Quality Assurance [8]. In the third part of the survey, we asked questions regarding the importance of quality factors to software developers. We wanted to compare the impact of the SPI methodologies on the quality factors with the importance of the quality factors to the developers. A correlation between impact and importance (or the lack of it) would indicate whether SPI guidelines touch on topics that the software development community considers important.

3. Results

Our survey revealed that while 60% of the interviewed companies use some SPI methodology, 40% replied "none" to the question of which SPI methodologies are used by their organizations. Table 1 depicts the percentage of responses indicating the type of SPI methodology used in their organization. More organizations are using ISO 9000 than CMM, but the ratio was much lower than the 2:1 estimated by a 1997 study [9]. This may be due to an anomaly of our

sample, or an indication that the adoption of CMM is gaining ground on alternative frameworks. MBNQA was not mentioned at all, and among the other methodologies used, there was no dominant approach. Some of the approaches mentioned were proprietary methodologies such as Summit-D, the US government FDA guideline FDA-QSR, NASA's Software Engineering Laboratory (SEL) methodology, and a fair number of homegrown methodologies. The average adoption time for our respondents was 38 months (about three years). This finding concurs with a technical report, CMU/SEI-94-TR-013 [10], which indicates a median of 3.5 years for organizations that initiated software process improvement efforts in recent years. Table 2 shows the percentage of software projects that have deployed SPI for a given duration.

Table 2
Percentage of software projects that deployed SPI for a given duration

  <1 year   1-2 years   2-3 years   3-5 years   5-10 years
  22%       41%         12%         16%         9%

Although both ISO and CMM have been available for almost 10 years, more than 90% of the respondents have been using an SPI methodology for 5 years or less, and 63% for 2 years or less. The raw data did not indicate a significant difference in the duration of the use of ISO and CMM. Our survey results disclosed that projects for which SPI methodologies were deployed are mostly developed in-house or for customers under contract. Table 3 shows the breakdown for the type of software projects. The size of projects was derived from two factors: development team size and the time between the start and end (the first revenue shipment) of the software development project. For both factors, we asked for a range of values rather than a specific value, and used the mean and standard deviation of the lower and upper limits as an indication of size and its variability. Table 4 summarizes this information.
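The ranged-answer scheme behind Table 4 is straightforward to reproduce: take the mean and standard deviation of the lower bounds and of the upper bounds separately. A minimal sketch in Python; the (lower, upper) team-size responses below are hypothetical, since the survey's raw data are not published:

```python
import statistics

def summarize_ranges(ranges):
    """Mean and sample standard deviation of the lower and upper
    bounds of ranged responses, computed separately."""
    lowers = [lo for lo, hi in ranges]
    uppers = [hi for lo, hi in ranges]
    return {
        "lower": (statistics.mean(lowers), statistics.stdev(lowers)),
        "upper": (statistics.mean(uppers), statistics.stdev(uppers)),
    }

# Hypothetical team-size ranges, one (lower, upper) pair per respondent.
team_sizes = [(2, 10), (3, 15), (5, 40), (10, 120), (4, 25)]
summary = summarize_ranges(team_sizes)
```

The wide spread between the lower- and upper-bound statistics in Table 4 falls out of exactly this kind of summary.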
These measures show a wide range for both team size and development duration, indicating that once companies adopt an SPI methodology, they apply it to small and large projects alike. These preliminary data provide a profile of software development projects that deploy SPI methodologies and set the ground for the primary purpose of this paper: examining the impact of SPI on productivity and quality. To this end, we asked the respondents to rate the impact of SPI on cost, schedule, and quality factors on a scale ranging from highly increased, increased, the same, decreased, to highly decreased.

Table 3
Percentage of the type of software projects developed using an SPI methodology (note that some companies develop two or three types of software applications)

  In-house   Contract   General/retail   Shrink wrap
  50%        60%        20%              15%

Tables 5 and 6 relate to the impact of SPI on cost, schedule, and customer satisfaction. For the sake of simplicity and clarity, we combined highly increased with increased, and decreased with highly decreased. Table 5 excludes the 30% of respondents who indicated that it was too early to describe the impact of SPI on schedule and cost. On the whole, it was difficult to determine a significant increase or decrease in cost and scheduling. The raw data indicated that companies using an SPI methodology for a longer period predominantly showed an overall increase in cost and delivery time. ISO, CMM, and other schemes did not differ significantly in their impact on cost and schedule. The second part of the questionnaire focused on obtaining information on the impact of SPI methodologies on quality defined as fitness for use. We found that, unlike cost and scheduling, the perceived customer satisfaction for companies using some sort of SPI methodology was noteworthy: more than two thirds reported an increased or highly increased impact. Table 6 shows the results for this question.
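The category collapsing used for Tables 5 and 6 amounts to a small mapping step over the raw responses. A sketch with illustrative responses only (the paper's raw questionnaires are not published):

```python
from collections import Counter

# Collapse the five-point impact scale into the three buckets
# reported in Tables 5 and 6.
COLLAPSE = {
    "highly increased": "increased/highly increased",
    "increased": "increased/highly increased",
    "same": "same",
    "decreased": "decreased/highly decreased",
    "highly decreased": "decreased/highly decreased",
}

def bucket_percentages(responses):
    """Percentage of responses falling into each collapsed bucket."""
    counts = Counter(COLLAPSE[r] for r in responses)
    n = len(responses)
    return {bucket: round(100 * c / n, 1) for bucket, c in counts.items()}

# Illustrative responses for a single question.
sample = (["highly increased"] * 3 + ["increased"] * 4
          + ["same"] * 2 + ["highly decreased"])
percentages = bucket_percentages(sample)
```

For the ten illustrative responses above, the collapsed buckets come out at 70%, 20%, and 10%, mirroring the shape of the customer-satisfaction row in Table 6.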
These findings are consistent with Yourdon and Howard Rubin's report of some decline in productivity and a substantial increase in quality in recent years due to the use of SPI [11]. Based on these results, perhaps one could conclude that quality is not free, as some experts have declared. When comparing the impact of ISO and CMM, we found that 100% of the companies using both ISO and CMM claimed an increase in customer satisfaction; 80% of those using only CMM declared a high to very high increase, while only 20% of the firms using only ISO reported a high to very high increase in customer satisfaction. ISO 9000 was originally intended to facilitate trade rather than process improvement [9], and led to the least satisfaction. One might argue that while the documentation of processes and the discipline imposed by the ISO standards limit the risks (i.e. you know what you get), they do not achieve the process improvements that satisfy customers.

Table 4
Size of software projects developed using an SPI methodology

                                                           Lower limit     Upper limit
  Size of development team                                 m = 5, s = 8    m = 31, s = 47
  Time between start and end of project (months)           m = 6, s = 7    m = 20, s = 11

To analyze the impact of the SPI methodology on the quality factors, we asked the respondents to rank the perceived impact from very low, low, average, high, to very high. Again, to simplify and avoid confusion, we combined the percentages of respondents who answered high and very high, and used this figure as the basis of our analysis for this part of the survey; we will refer to it as HVH henceforth. Fig. 1 shows the impact of the SPI methodology on the quality of design. Readers interested in comparing the percentages of responses for HVH, low to very low (LVL), and no difference should refer to Table 7. The percentage of respondents who reported an HVH impact of methodologies on the quality of design dimensions was moderate, with correctness being the leading indicator, followed by verifiability. Correctness and verifiability are addressed by key process areas at level 2 of CMM, and by clauses 5.9.2/6.1.3 and 5.4.6/5.5.2, respectively, of ISO 9000. Maintainability had the lowest HVH impact. This is understandable, as there is an ongoing debate about whether CMM addresses maintainability directly or at all, though it is explicitly covered in ISO 9000 clause 5.10. Fig. 2 shows the HVH impact of the SPI methodology on the quality of performance. The HVH impact of methodologies on the quality of performance factors was uneven. Reliability had the highest impact, followed by usability and testability. Reliability is one of the most important aspects of software quality, and it is encouraging that more than 50% of respondents reported a high to very high impact on this factor.
Reliability, usability, and testability are explicitly covered in key process areas of the CMM at level 2, and in clauses 5.7 and 5.9 of ISO. The impact of methodologies on the efficiency and integrity of the product was quite low. These results are not surprising, given that both CMM and ISO guidelines are primarily concerned with the integrity and efficiency of the process, which do not necessarily lead to integrity and efficiency of the final product. Table 8 presents summary information on the percentage of responses for HVH, LVL, and no difference impact on the quality of performance. Fig. 3 shows the HVH impact of the SPI methodology on the quality of adaptation. Among the factors that determine the quality of adaptation, expandability received the highest HVH impact, followed by intra-operability. Reusability is covered in clause 5.6.2 of ISO but was not originally covered in CMM; in 1996, however, its importance was recognized and addressed specifically in an extension of CMM that deals with the software acquisition process (SA-CMM). Expandability, flexibility, and portability are covered in ISO clauses 5.10.1.c, 5.10.7, and 5.3.1, respectively, and at level 2 of CMM. It must be noted that in extensions to CMM (CMMI), extra attention is given to integration, which should have an impact on both intra- and inter-operability. Table 9 presents the summary information on the percentage of responses for HVH, LVL, and no difference impact on the quality of adaptation.

In the third part of the questionnaire, we asked the respondents to rank the importance of quality factors from very low, low, average, high, to very high. In this section of the study, we wanted to compare the impact of SPI methodologies on the quality factors with the importance of the quality factors to the software developers. It made sense to compare the percentage of respondents who reported HVH impact with those who reported HVH importance. Fig. 4 depicts a histogram showing this comparison. Correctness, reliability, and intra-operability were the most important factors to the software developers. The impact of SPI on these three factors does not measure up to their perceived importance. The factors rated lowest in importance are efficiency and integrity; they may be perceived as falling outside the purview of the developers surveyed. The difference between the perceived importance and the impact for the inter-operability factor could reflect that, originally, ISO and CMM both focused on getting internal processes under control before paying attention to the external capabilities of software products. As the concepts of integration and supply chain gain importance, inter-operability is likely to get more attention in forthcoming standards. The impact of SPI on reusability exceeds its stated importance. Reusability is specifically mentioned in the CMM methodology ('94 SEI report, p. 5), and is often cited as the path to technological innovation. However, it faces some resistance from developers, who have not warmed to the idea of using reusable components made elsewhere. Portability and expandability are also viewed as better served than their importance would warrant. The software developers are either not judging these two issues at their proper value, or overestimating the impact of SPI. Table 10 compares the percentage of respondents that assigned HVH importance to factors with the percentage that reported an HVH impact of SPI. This table shows that the most important quality factor to software developers is correctness, and yet correctness also yields the biggest discrepancy between impact and importance. The negative numbers in the difference column indicate that the impact on expandability, reusability, and portability exceeds their importance. The last two columns show the rankings of importance and impact, indicating some correlation between the two; the rankings are close for most factors.
To test whether importance and impact were correlated, we ran the Spearman rank correlation test. The result showed a high correlation (71%) between the two. This is good news: the priorities of software developers for a high-quality product and the impact of SPI requirements are congruent.
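The Spearman test can be re-run from the two percentage columns of Table 10 alone. A sketch in plain Python, with the values transcribed from Table 10; ties here get averaged ranks, so the result comes out at about 0.72, slightly different from the 71% reported:

```python
def average_ranks(values):
    """Rank values in descending order, 1 = largest, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with the one at position i.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# % HVH importance and % HVH impact for the 14 quality factors of Table 10,
# in the table's row order (correctness first, efficiency last).
importance = [82.5, 72.5, 62.2, 60.0, 60.0, 53.8, 52.8,
              50.0, 47.2, 45.0, 41.0, 38.9, 30.6, 28.2]
impact = [48.6, 56.8, 52.9, 55.6, 47.2, 38.9, 54.5,
          31.3, 46.9, 43.0, 22.9, 46.9, 40.6, 25.7]
rho = spearman(importance, impact)
```

Implementing Spearman as Pearson-on-ranks handles the two ties in the table (verifiability/testability in importance, flexibility/reusability in impact) cleanly; the simplified sum-of-squared-rank-differences formula gives nearly the same value here.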

Table 5
Impact of SPI methodologies on cost and schedule

                                                                     Increased/highly increased   Same    Decreased/highly decreased
  Impact of SPI on cost (actual vs. estimated cost) (%)              40                           29.63   29.63
  Impact of SPI on scheduling (actual vs. estimated delivery) (%)    38                           38      24

Table 6
Impact of SPI methodologies on customer satisfaction

                                           Increased/highly increased   Same   Decreased/highly decreased
  Impact of SPI on customer satisfaction   71%                          18%    11%

Fig. 1. HVH impact of SPI methodology on the quality of design.
Fig. 2. HVH impact of SPI methodology on the quality of performance.

Table 7
Impact of SPI methodology on the quality of design

                    HVH (%)   LVL (%)   No difference (%)
  Correctness       48.6      16.2      35.1
  Maintainability   38.9      27.8      33.3
  Verifiability     47.2      27.8      25.0

4. Conclusion and summary

Our survey results provide insight into SPI practices in the New England region, sometimes referred to as the Silicon Valley of the Northeast. This study is exploratory in nature and has some limitations. Although the respondents belong to four different groups, some of them (i.e. SPIN members) have a vested interest in software process improvement activities and may exhibit some bias toward SPI methodologies. Also, as we could not be sure that our sample was a true representation of software developers in the New England region, we did not perform any inferential statistics, which would require a scientific sampling procedure. Hence, we limited our analysis to descriptive statistics, which are used mainly for exploratory examination of qualitative and quantitative observations. For our future research, we will extend our survey to include a much broader base (via the Internet), and will use a

Table 8
Impact of SPI methodology on the quality of performance

                HVH (%)   LVL (%)   No difference (%)
  Efficiency    25.7      40.0      34.3
  Integrity     22.9      37.1      40.0
  Reliability   56.8      13.5      29.7
  Usability     43.2      32.4      24.3
  Testability   43        8.3       55.67

Table 9
Impact of SPI methodology on the quality of adaptation

                      HVH (%)   LVL (%)   No difference (%)
  Expandability       54.5      21.0      24.5
  Flexibility         46.9      21.9      31.2
  Portability         40.6      37.5      21.9
  Reusability         46.9      31.3      21.8
  Inter-operability   31.3      25.0      43.7
  Intra-operability   52.9      20.6      26.5

Fig. 3. Impact of SPI methodology on the quality of adaptation.
Fig. 4. Comparison of impact and importance.

modified questionnaire that identifies more clearly the position of the respondents, the type of companies, and other detailed information. We could then use this information for different types of comparative studies involving inferential statistics. For this study, however, we were only interested in developers' opinions and their perceptions of SPI methodologies.

Table 10
Comparison of importance and impact

  Quality factors     % HVH importance   % HVH impact   Difference (importance - impact)   Ranking importance   Ranking impact
  Correctness         82.5               48.6           33.9                               1                    5
  Reliability         72.5               56.8           15.7                               2                    1
  Intra-operability   62.2               52.9           9.3                                3                    4
  Testability         60.0               55.6           4.4                                4                    2
  Verifiability       60.0               47.2           12.8                               4                    6
  Maintainability     53.8               38.9           14.9                               6                    11
  Expandability       52.8               54.5           -1.7                               7                    3
  Inter-operability   50.0               31.3           18.7                               8                    12
  Flexibility         47.2               46.9           0.3                                9                    7
  Usability           45.0               43.0           2.0                                10                   9
  Integrity           41.0               22.9           18.1                               11                   14
  Reusability         38.9               46.9           -8.0                               12                   7
  Portability         30.6               40.6           -10.0                              13                   10
  Efficiency          28.2               25.7           2.5                                14                   13

The findings of this study were quite interesting, and some validated previous case studies. The most noteworthy results are summarized below:

1. The majority of our respondents reported that deploying an SPI methodology does not necessarily decrease the cost or speed up the delivery. They indicated, however, that the use of a methodology had a high impact on perceived customer satisfaction.
2. Our survey results disclosed that projects developed in-house and/or for customers under contract are the best targets for deploying SPI methodologies. The retail market's low adoption supports the opinion that time to market, rather than quality, remains the dominant factor.
3. Our results showed a wide range for both team size and development duration, indicating that once companies adopt an SPI methodology, they apply it to small and large projects alike.
Some shortcomings regarding the impact of SPI on quality factors were detected. Given that a high proportion of IS budgets goes into maintenance, the low impact on maintainability signals that this factor warrants more attention in future frameworks. Integrity and efficiency are the two factors experiencing the least impact from SPI. With the advent of the Internet, integrity, along with systems' resistance to intrusion and self-healing configuration, is emerging as an issue and will require more consideration. Efficiency, however, remains a low concern for the software development community, because rapid advances in hardware more than compensate for software deficiencies. The HVH impact of SPI on the factors that determine the quality of adaptation was generally good. Future extensions to CMM, such as CMMI, give extra attention to integration, which should have an impact on both intra- and inter-operability. The 71% positive correlation between impact and importance indicates that some of the SPI methodologies do address some aspects of the software crisis. Our study, although limited to the New England region, provides some insight into SPI practices, and the results are encouraging overall. Some of the discovered shortcomings are being addressed by the revisions of CMM, which aim at improving its guidelines through new extensions to the CMM key process areas, and by ISO through its SPICE/ISO 15504 initiative. The search for software process and product improvements is not over. As new and/or improved techniques, philosophies, and methodologies come into the picture, there is a need to ensure that the emerging guidelines meet the needs of their users. This is only possible by conducting studies that voice the practitioners' views on the effectiveness of paradigms. Successive refinement based on proper feedback could be a partial solution to the software development crisis.

References

[1] The Standish Group, Chaos, Standish Group Report, 1995.
[2] H.E. Thomson, P. Mayhew, The software process: a perspective on improvement, The Computer Journal 37 (8) (1994) 683-690.
[3] E.M. Gray, W.L. Smith, On the limitation of software process assessment and the recognition of a required re-orientation for global process improvement, Software Quality Journal 7 (1998) 21-34.
[4] ISO/IEC, ISO/IEC JTC1/SC7 N944R, The need and requirements for a software process assessment standard, Study Report, Issue 2.0, June 1992.
[5] N. Fenton, How effective are software engineering methods?, Journal of Systems and Software 22 (1993) 141-146.
[6] C. Jones, The Best of Capers Jones: Essentials in Software Development, Artemis Management Systems, 1998.
[7] J. Herbsleb, D. Zubrow, D. Goldenson, W. Hayes, M. Paulk, Software quality and the capability maturity model, Communications of the ACM 40 (6) (1997) 30-40.

[8] G. Schulmeyer, J.I. McManus, The Handbook of Software Quality Assurance, Prentice Hall, Englewood Cliffs, NJ, 1998.
[9] S.A. Sheard, The frameworks quagmire, Crosstalk, September 1997.
[10] J. Herbsleb, A. Carleton, J. Rozum, J. Siegel, D. Zubrow, Benefits of CMM-based software process improvement: initial results, Technical Report CMU/SEI-94-TR-013, ESC-TR-94-013, August 1994.
[11] R.L. Glass, The realities of software technology payoffs, Communications of the ACM 42 (2) (1999) 74-79.