Current Defect Density Statistics
Ann Marie Neufelder
You may not reprint or copy any part of this presentation without the express and written consent of Ann Marie Neufelder.
Actual fielded defect densities from 90+ projects spanning nearly every industry
[Chart: delivered defect density, normalized, versus percentile group, with clusters labeled World Class, Very Good, Good, Fair, Poor, and Ugly]
This data is in terms of fielded (escaped) defects per 1000 lines of effective code normalized to assembler. Seven clusters are visible. A method to predict which cluster your project will fall into was developed from this data.
How to determine normalized effective size
- Predict/count new and modified lines of code
- Predict/count deleted lines
- Multiply existing but unchanged code by 10%
- Entire functions deleted reduce the existing size
- Effective size = Modified + New + Deleted + (10% of existing unchanged code)
- Multiply effective size by the conversion ratio to assembler, using industry tables as summarized below:

Language generation | Ratio to assembler
Second generation (C, Fortran) | 3
Object oriented (Java, C++, Ada 9x) | 6
Visual Basic | 10
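A minimal sketch of the sizing arithmetic above, in Python. The function names and the example line counts are illustrative assumptions, not part of the SoftRel method; only the formula and the conversion ratios come from the slide.

```python
# Sketch of the effective-size arithmetic described above.
# Function names and example counts are illustrative.

ASSEMBLER_RATIOS = {
    "second_generation": 3,   # C, Fortran
    "object_oriented": 6,     # Java, C++, Ada 9x
    "visual_basic": 10,
}

def effective_size(new, modified, deleted, existing_unchanged):
    """Effective size = new + modified + deleted + 10% of unchanged code."""
    return new + modified + deleted + 0.10 * existing_unchanged

def normalized_size(eff_size, language):
    """Convert effective lines to assembler-equivalent lines."""
    return eff_size * ASSEMBLER_RATIOS[language]

# Example (hypothetical): 20,000 new, 5,000 modified, 2,000 deleted
# lines of C, on top of 100,000 unchanged lines.
eff = effective_size(20_000, 5_000, 2_000, 100_000)  # 37,000 effective lines
print(normalized_size(eff, "second_generation"))      # 111,000 assembler-equivalent lines
```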
Use the SoftRel Survey to predict the best cluster
The appropriate cluster is determined by these things:
- Inherent stability of existing design and code
- Methods and techniques used to prevent defects and develop software
- Application type
- Existence of major obstacles (new technology, new environments, etc.)
- Existence of major opportunities (end user domain experts available to the project, etc.)
- Inherent stability of the development process
Process alone will not guarantee a world class cluster!
- An SEI CMM level 1 organization can be world class
- An SEI CMM level 4 or 5 does not guarantee world class
How the survey score maps to the clusters
[Chart: SoftRel survey score (0 to 60) versus percentile group, for the World Class, Very Good, Good, Fair, Poor, and Ugly clusters]
The most variation exists in the world class cluster; however, this cluster is easily predictable because of the absence of major obstacles and the presence of major opportunities, as shown next.
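A lookup of this kind could be coded as below. The score cutoffs here are hypothetical placeholders, since the actual SoftRel score-to-cluster boundaries are not given on this slide; only the shape of the mapping (higher score, better cluster) comes from the chart above.

```python
# Hypothetical sketch of a score-to-cluster lookup.
# CUTOFFS are placeholders, NOT the real SoftRel boundaries.
CUTOFFS = [  # (minimum survey score, cluster) -- hypothetical values
    (50, "World class"),
    (40, "Very good"),
    (30, "Good"),
    (20, "Fair"),
    (10, "Poor"),
    (0,  "Ugly"),
]

def cluster_for_score(score):
    """Return the first cluster whose minimum score the project meets."""
    for minimum, cluster in CUTOFFS:
        if score >= minimum:
            return cluster
    return "Ugly"

print(cluster_for_score(44))  # "Very good" under these placeholder cutoffs
```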
The World Class cluster had no major obstacles
[Chart: number of major project obstacles (0 to 6) versus percentile group, for the World Class, Very Good, Good, Fair, Poor, and Ugly clusters]
Obstacles are defined specifically as new technology, new operating system, new development environment, new compiler, or new target hardware.
The Ugly group had no opportunities
[Chart: number of major project opportunities (0 to 8) versus percentile group, for the World Class, Very Good, Good, Fair, Poor, and Ugly clusters]
An opportunity is explicitly defined as the degree to which end user domain experts are available to the software engineers on the project.
Defect density by system type

System application type | Fielded | Testing | Ratio of test to field
Command and control | 0.106 | 0.180 | 1.7
Command, control and communications | 0.011 | 0.366 | 33.9
Military ground vehicle | 0.106 | n/a | n/a
Satellite | 0.087 | 0.358 | 4.1
Large stationary capital equipment | 0.649 | 2.495 | 3.8
Small devices | 0.202 | 4.787 | 23.7
GPS | 0.134 | n/a | n/a
Power systems | 1.0925 | n/a | n/a
No special target hardware | 0.123 | 0.448 | 3.6
Total/average | 0.414 | 2.104 | 5.1

Fielded and testing densities are defects per 1000 lines of effective code normalized to assembler.
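Because these densities are per 1,000 assembler-equivalent lines, an expected fielded defect count follows directly from a project's normalized size. A sketch, reusing the "No special target hardware" row above; the project size is the hypothetical one from the earlier sizing example.

```python
# Expected fielded defects = density (per 1000 lines) * size / 1000.
# Densities copied from the table above; project size is hypothetical.
FIELDED_DENSITY = {
    "command_and_control": 0.106,
    "satellite": 0.087,
    "no_special_target_hw": 0.123,
}

def expected_fielded_defects(density_per_ksloc, assembler_equiv_lines):
    return density_per_ksloc * assembler_equiv_lines / 1000.0

# Example: the 111,000 assembler-equivalent lines sized earlier.
print(expected_fielded_defects(FIELDED_DENSITY["no_special_target_hw"], 111_000))
# ~13.7 expected fielded defects
```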
Defect density by software type

Software application type | Fielded | Testing | Ratio of test to field
Wireless capabilities | 0.165 | 3.092 | 18.7
Biometrics | 0.400 | 1.290 | 3.2
Domain knowledge can be acquired via public domain in a short period of time | 0.068 | n/a | n/a
Client server | 0.108 | 0.434 | 4.0
Real time | 0.476 | 2.172 | 4.6
Multi-tasking | 0.449 | 2.104 | 4.7
DB interfaces | 0.456 | 1.459 | 3.2
Mathematically intensive | 0.430 | 1.513 | 3.5
Web based | 0.008 | 0.091 | 11.1
Target HW is new or evolving | 0.642 | 2.588 | 4.0
Application process evolving | 0.378 | 0.278 | 0.7

This is the same set of data sliced a different way.
Defect density by risk level

Risk level | Fielded | Testing | Ratio of test to field
Safety risk (occupational, regional, national or global) | 0.509 | 2.608 | 5.1
Legal risks (banking, etc.) | 0.169 | 4.539 | 26.9
Monetary risks (loss of product with monetary value) | 0.476 | 1.535 | 3.2
Recall risk | 0.230 | 3.158 | 13.7
Government regulated | 0.141 | 3.165 | 22.5

This is the same set of data sliced a different way.
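The test-to-field ratios in the last three tables can also be used in reverse: if you know how many defects testing found per 1,000 lines, dividing by the ratio gives a rough fielded estimate. A sketch, with the ratio taken from the safety-risk row above and the observed test density hypothetical.

```python
# Rough fielded estimate from an observed test defect density,
# using the ratio-of-test-to-field column above.
def fielded_density_from_test(test_density, test_to_field_ratio):
    return test_density / test_to_field_ratio

# Example: a safety-critical project (ratio 5.1 from the table)
# that observed 2.0 defects per 1000 lines in test (hypothetical).
print(fielded_density_from_test(2.0, 5.1))  # ~0.39 defects per 1000 lines
```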
You can also predict the risk of a late delivery

Percentile group | Ratio of testing to fielded | Fielded defect density, average | Min | Max | Std dev | Probability of a late delivery (%) | Margin of error when delivery is late (%)
World Class | 8.5 | 0.011 | 0.0055 | 0.018 | 0.006 | 10 | 17.5
Very Good | 12.4 | 0.060 | 0.0396 | 0.0756 | 0.0172 | 20 | 25
Good | 10.7 | 0.112 | 0.0888 | 0.135 | 0.0169 | 25 | 25
(unnamed cluster) | 10.6 | 0.250 | 0.180 | 0.366 | 0.0590 | 36 | 41
Fair | 2.1 | 0.618 | 0.400 | 0.835 | 0.177 | 85 | 125
Poor | 16.1 | 1.111 | 1.0357 | 1.224 | 0.081 | 100 | 100
Ugly | 0.5 | 2.069 | 1.743 | 2.674 | 0.524 | 83 | 75

Defect densities are normalized fielded defects per 1000 lines. Probability of late delivery: if your organization makes 10 releases and the probability of being late is 10%, then 1 out of 10 will be late. Margin of error: measured as a percentage of the original schedule prediction.
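The probability column reads directly as an expected count of late releases, exactly as the note above describes. A one-line sketch; the second call uses the Fair row's 85% figure.

```python
# Expected number of late deliveries = releases * probability of lateness.
def expected_late_releases(n_releases, p_late):
    return n_releases * p_late

print(expected_late_releases(10, 0.10))  # World Class example from the slide: 1.0
print(expected_late_releases(10, 0.85))  # Fair cluster: 8.5 of 10 releases late
```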
How to predict your cluster
- Answer a survey based on:
  - Risks
  - Product characteristics
  - Application type
  - Resources
  - Practices, techniques and methods
  - Process stability
- Determine a baseline cluster for your typical project
- Each project-specific additional obstacle lowers the cluster, while adding domain expertise raises the cluster (see the sketch below)
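A sketch of the adjustment rule in the last bullet: each added obstacle moves the project down one cluster, each added opportunity (domain expertise) moves it up one. The rule comes from the slide; the cluster ordering, step size of one, and clamping behavior are illustrative assumptions.

```python
# Adjust a baseline cluster by project-specific obstacles/opportunities.
# Ordering, one-step moves, and clamping are illustrative assumptions.
CLUSTERS = ["Ugly", "Poor", "Fair", "Good", "Very good", "World class"]

def adjusted_cluster(baseline, obstacles=0, opportunities=0):
    i = CLUSTERS.index(baseline) - obstacles + opportunities
    i = max(0, min(i, len(CLUSTERS) - 1))  # clamp to the known clusters
    return CLUSTERS[i]

# Example: a typically "Good" organization taking on new target hardware
# and a new compiler (2 obstacles), with resident domain experts (1 opportunity).
print(adjusted_cluster("Good", obstacles=2, opportunities=1))  # "Fair"
```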
Lessons Learned
Risks cannot be overcome by any of the following:
- New, expensive automated tools that theoretically speed up development (these will actually increase the risk level the first time a project uses them)
- Wishful thinking
Risks can be minimized by:
- More granular milestones
- Addressing high risk items before everything else in the schedule (software engineers tend to work on the low risk tasks first)
- Design prototyping when the design is a risk
- Requirements prototyping when end user requirements are volatile
- Defect prevention techniques such as formal unit testing
- Increasing the end user domain knowledge of the team (this does not mean software experience; it means application experience)