EVALUATING SOFTWARE METRICS AND SOFTWARE MEASUREMENT PRACTICES. Version 4.0, March 14, 2014. Capers Jones, VP and CTO; Namcook Analytics LLC


Abstract

Software productivity and quality are topics of significant economic importance in the modern world. Both productivity and quality should be measured with accuracy, using effective metrics and proven measurement practices. But unlike older and more mature scientific disciplines, software engineering has used inaccurate metrics and ineffective measurement practices for more than 50 years. This paper analyzes and evaluates a sample of current software metrics and measurement practices. Some common problems include the fact that the lines of code (LOC) metric penalizes high-level languages and makes requirements and design invisible. The common cost per defect metric penalizes quality and makes buggy software look less expensive than high-quality software. The urban legend that cost per defect goes up by more than 100 times after release is not true; it results from poor measurement practices that ignore fixed costs. The purpose of this paper is to show how both productivity and quality can be measured with high precision and with adherence to standard economic principles. Overall, activity-based costs using function point metrics are the best choice for productivity and economic analysis. Function points combined with defect removal efficiency (DRE) are the best choice for software quality analysis. More than 300 metric and measurement topics are included.

Copyright 2014 by Capers Jones. All rights reserved.

INTRODUCTION

Over the past 50 years the software industry has grown to become one of the major industries of the 21st century. On a global basis software applications are the main operating tools of corporations, government agencies, and military forces. Every major industry employs thousands of software professionals. The total employment of software personnel on a global basis probably exceeds 20,000,000 workers.

Because of the importance of software, and because of the high costs of software development and maintenance combined with less than optimal quality, it is important to measure both software productivity and software quality with high precision. But this seldom happens. For more than 50 years the software industry has used a number of metrics that violate standard economic concepts and produce inaccurate and distorted results. Two of these are the lines of code (LOC) metric and the cost per defect metric. LOC metrics penalize high-level languages and make requirements and design invisible. Cost per defect penalizes quality and ignores the true value of quality, which derives from shorter schedules and lower development and maintenance costs. Both LOC and cost per defect metrics can be classed as professional malpractice for overall economic analysis, although both have limited use for more specialized purposes.

One of the reasons IBM invested more than a million dollars in the development of function point metrics was to provide a metric that could measure both productivity and quality with high precision and with adherence to standard economic principles. For more than 200 years a basic law of manufacturing has been used by all major industries except software: if a manufacturing cycle includes a high component of fixed costs and there is a decline in the number of units manufactured, the cost per unit will go up. The problems with both LOC metrics and cost per defect are due to ignoring this basic law.
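The fixed-cost rule stated above can be sketched in a few lines of Python. This is an illustrative toy example, not data from the paper; the dollar figures are invented purely to show the arithmetic.

```python
# Illustrative sketch of the basic law of manufacturing economics:
# with fixed costs present, fewer units means a higher cost per unit.

def cost_per_unit(fixed_cost: float, variable_cost_per_unit: float, units: int) -> float:
    """Total cost of a manufacturing run divided by the units produced."""
    return (fixed_cost + variable_cost_per_unit * units) / units

# Same fixed cost in both runs; only the unit volume changes.
high_volume = cost_per_unit(fixed_cost=100_000, variable_cost_per_unit=10, units=10_000)
low_volume = cost_per_unit(fixed_cost=100_000, variable_cost_per_unit=10, units=1_000)

print(high_volume)  # 20.0
print(low_volume)   # 110.0
```

The variable cost per unit never changed, yet the unit cost more than quintupled when volume fell. This is exactly the mechanism that distorts both the LOC and cost per defect metrics discussed below.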
For modern software projects requirements and design are often more expensive than coding. Further, requirements and design are inelastic and stay more or less constant regardless of coding time. When there is a switch from a low-level language such as assembly to a higher-level language such as Java, the quantity of code and the effort for coding are reduced, but requirements and design act like fixed costs, so the cost per line of code will go up. Table 1 illustrates the paradoxical reversal of productivity rates using LOC metrics in a sample of 10 versions of a PBX switching application coded in 10 languages, all of the same size of 1,500 function points:

Table 1: Productivity Rates for 10 Versions of the Same Software Project
(A PBX switching system of 1,500 function points in size)

Columns: Language; Effort (Months); Function Points per Staff Month; Work Hours per Function Point; LOC per Staff Month; LOC per Staff Hour.
Rows: Assembly, C, CHILL, PASCAL, PL/I, Ada, C, Ada, Objective C, Smalltalk, and an Average row.
[The numeric values of Table 1 did not survive transcription.]

As can be seen, the Assembly version had the largest amount of effort but also the highest apparent productivity measured with LOC per month, and the lowest measured with function points per month. Function points match standard economic assumptions while LOC metrics reverse standard economics. In this table, using LOC metrics for productivity comparison would be professional malpractice.

When testing software, the time needed to write test cases and run them is comparatively inelastic and stays more or less constant regardless of how many bugs are found. When few bugs are found, test case preparation and execution act like fixed costs, so the cost per defect will go up. Actual defect repair times are comparatively flat, although there are ranges. But the ranges are found in every form of defect removal and do not rise in a consistent pattern.

Table 2 shows the mathematics of the cost per defect metric. Every column uses fixed costs that are exactly the same. Labor costs are set at $75.75 per hour for every row and column of the table. The defect repair column assumes a constant value of 5 hours per defect for every form of test:

Table 2: Cost per Defect for Six Forms of Testing
(Assumes $75.75 per staff hour for costs)

                  Writing     Running     Repairing    TOTAL       Number of   $ per
                  Test Cases  Test Cases  Defects      COSTS       Defects     Defect
Unit test         $1,250.00   $750.00     $18,937.50   $20,937.50    50        $418.75
Function test     $1,250.00   $750.00     $7,575.00    $9,575.00     20        $478.75
Regression test   $1,250.00   $750.00     $3,787.50    $5,787.50     10        $578.75
Performance test  $1,250.00   $750.00     $1,893.75    $3,893.75      5        $778.75
System test       $1,250.00   $750.00     $1,136.25    $3,136.25      3        $1,045.42
Acceptance test   $1,250.00   $750.00     $378.75      $2,378.75      1        $2,378.75

As can be seen, the fixed costs of writing test cases and running test cases cause cost per defect to go up as defect volumes come down. This of course is due to the basic rule of manufacturing economics that in the presence of fixed costs a decline in units will increase cost per unit. However, actual defect repairs were a constant 5 hours for every form of testing in the table.

By contrast, looking at the same project and the same testing sequence using the metric defect removal cost per function point, the true economic situation becomes clear:

Table 3: Cost per Function Point for Six Forms of Testing
(Assumes $75.75 per staff hour for costs)
(Assumes 100 function points in the application)

                  Writing     Running     Repairing   TOTAL $    Number of
                  Test Cases  Test Cases  Defects     PER F.P.   Defects
Unit test         $12.50      $7.50       $189.38     $209.38      50
Function test     $12.50      $7.50       $75.75      $95.75       20
Regression test   $12.50      $7.50       $37.88      $57.88       10
Performance test  $12.50      $7.50       $18.94      $38.94        5
System test       $12.50      $7.50       $11.36      $31.36        3
Acceptance test   $12.50      $7.50       $3.79       $23.79        1

It is important to understand that tables 2 and 3 both show the results for the same project, and both use identical constant values for writing test cases, running them, and fixing bugs. However, defect removal cost per function point declines when total defects decline, while cost per defect grows more and more expensive as defects decline. Both of these problems will be discussed again later in this report. But the basic point is that manufacturing economics and fixed costs need to be included in software manufacturing and production studies. Thus far much of the software literature has ignored fixed costs.

What is software productivity?

The standard economic definition of productivity for more than 100 years has been goods or services produced per unit of labor or expense. For software projects the critical topic is what exactly constitutes a unit of measure for software's goods or services. The oldest definition of software goods was a line of code. In the 1950s, when only machine language and assembly language existed, more than 90% of the total effort for software was involved with coding, and this was not a bad choice. Today in 2014 there are over 126 occupations associated with software engineering, and for major systems coding is less than 30% of the total effort. There are also more than 3,000 programming languages of various levels and capabilities. LOC is no longer an effective metric of economic productivity, and indeed reverses true economic productivity, as will be discussed later.

The best choice for software goods in 2014 is the function point metric. The function point metric can be used for requirements, design, coding, testing, documentation, management, and all other software activities and occupations. The LOC metric applies only to coding and has no relevance for any of the other kinds of work associated with modern software.
Furthermore, function point metrics are methodology neutral and work equally well with agile projects, iterative projects, extreme programming, the Rational Unified Process (RUP), the Team Software Process (TSP), Merise, Prince2, and any of the other software development methods now in common use. By contrast, story point metrics are limited to agile projects with user stories. Use-case point metrics are limited to projects that utilize use cases, and have no relevance for other design methods such as state transition diagrams or even flow charts.
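The fixed-cost arithmetic behind tables 2 and 3 can be reproduced in a short sketch. The $75.75 hourly rate, the 5-hour repair time, the $12.50 and $7.50 per-function-point costs for writing and running test cases, and the 100 function point application size are all stated in the text; the declining defect counts per test stage are illustrative.

```python
# Sketch of the cost-per-defect vs. cost-per-function-point arithmetic.
# Constants come from the text; the defect counts per stage are illustrative.

HOURLY_RATE = 75.75              # $ per staff hour (from the text)
REPAIR_HOURS_PER_DEFECT = 5      # constant repair effort (from the text)
WRITING_COST = 1_250.00          # fixed cost of writing test cases ($12.50/FP x 100 FP)
RUNNING_COST = 750.00            # fixed cost of running test cases ($7.50/FP x 100 FP)
FUNCTION_POINTS = 100            # application size (from the text)

def stage_costs(defects_found: int) -> tuple[float, float]:
    """Return (cost per defect, cost per function point) for one test stage."""
    repair = defects_found * REPAIR_HOURS_PER_DEFECT * HOURLY_RATE
    total = WRITING_COST + RUNNING_COST + repair
    return total / defects_found, total / FUNCTION_POINTS

for defects in (50, 20, 10, 5, 3, 1):
    per_defect, per_fp = stage_costs(defects)
    print(f"{defects:2d} defects: ${per_defect:,.2f} per defect, ${per_fp:,.2f} per FP")
```

Running the loop shows cost per defect rising steadily as defect volumes fall, while cost per function point falls, even though every underlying cost constant is identical for every stage.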

What is software quality?

Quality is something of a subjective topic in all fields. However, for software a workable definition of quality is the absence of defects that would cause a software application to either fail completely or produce incorrect results. This definition has been used by the author for more than 40 years and has been applied to embedded software, systems software, commercial software, military software, outsource software, web software, and other forms as well.

An older and common definition of quality is "quality means conformance to requirements." However, this is not a valid definition for software quality. Requirements themselves are filled with defects, and also with toxic requirements that should not be included in software applications at all. To define quality as conformance to something that has been measured to cause more than 20% of total software defects is not safe and not satisfactory. Software quality needs to encompass requirements defects and not assume that all user requirements are perfect and error free.

There are many other definitions of quality, including a host of words ending in "-ility" such as reliability and maintainability. However, the absence of defects is an attribute that can be measured with precision, while terms such as maintainability and reliability are ambiguous. Further, empirical data supports the hypothesis that low defect counts correlate well with high levels of user satisfaction. Studies within IBM found that low defects and high levels of user satisfaction were consistently related.

This brings up two powerful metrics for understanding software quality: 1) software defect potentials; 2) defect removal efficiency (DRE). The phrase software defect potentials was first used in IBM. It is based on measured data from IBM software applications but analyzed and pointed toward future applications. The term defect potential means the sum total of bugs or defects that are likely to be found in all software deliverables.
In other words defect potentials include bugs that might originate in requirements, architecture, design, code, user documents, bugs in test cases themselves, and also bad fixes, or bugs in attempts to fix earlier bugs. In today's world of 2014 the best metric for expressing software defect potentials is defects per function point, because this allows defects from all sources to be summed:

Table 4: Software Defect Potentials Circa 2014

1. Requirements defects  = 1.00 per function point
2. Architecture defects  = 0.30 per function point
3. Design defects        = 1.25 per function point
4. Code defects          = 1.50 per function point
5. Document defects      = 0.60 per function point
6. Test case defects     = 0.75 per function point
7. Bad fix defects       = 0.35 per function point

   TOTAL DEFECTS         = 5.75 per function point
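The summation property of defect potentials is easy to sketch. The per-function-point values below come from Table 4; the 1,000 function point application size is a hypothetical example used only to show how potentials scale.

```python
# Defect potentials per function point (values from Table 4), scaled to a
# hypothetical 1,000 function point application.

DEFECT_POTENTIALS = {
    "requirements": 1.00,
    "architecture": 0.30,
    "design": 1.25,
    "code": 1.50,
    "documents": 0.60,
    "test cases": 0.75,
    "bad fixes": 0.35,
}

app_size_fp = 1_000  # hypothetical application size
total_per_fp = sum(DEFECT_POTENTIALS.values())

print(round(total_per_fp, 2))                 # 5.75
print(round(total_per_fp * app_size_fp))      # 5750
```

Because every origin is expressed in the same unit, defects from all seven sources can be summed into a single total, which is the capability the surrounding text attributes to function point metrics.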

No other metric allows defects from all origins to be compared and summed in order to show overall defect potentials. This is a capability that only function point metrics provide.

The second powerful metric for software quality is defect removal efficiency (DRE). This metric refers to the percentage of defects that are found and removed prior to the release of software to clients. Normally DRE is measured at a fixed time point, which IBM set at 90 days after release of the software to clients. A simple example illustrates the basic mathematics of DRE. Assume that the development team found 90 bugs in a small software application before release, and that the clients reported another 10 bugs in the first three months of use. The total number of bugs is 100, so the defect removal efficiency level is 90%.

The metrics of defect potential and defect removal efficiency are synergistic and show how effective various methods can be, including formal inspections, static analysis, pair programming, and all forms of testing. The combination of defect potential and defect removal efficiency also shows that some kinds of defects, such as requirements defects, are harder to eliminate than coding defects. Table 5 shows that when defect potentials are combined with DRE, the results are of considerable importance to the software industry:

Table 5: Software Defect Potentials and Defect Removal Efficiency (DRE)

Columns: Defect Origins; Defect Potential; Defect Removal %; Defects Delivered; % of Total.
Rows: Requirements defects, Design defects, Test case defects, Bad fix defects, Code defects, User document defects, Architecture defects, and a TOTAL row showing 0.80 defects delivered per function point.
[The individual numeric values of Table 5 did not survive transcription.]

Note: table 5 is only an example. Defect potentials and DRE vary widely from the results shown here, in both directions.

The combination of defect potentials and DRE measures shows that requirements defects, design defects, test case defects, and bad fix defects are harder to remove than code defects.
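The DRE arithmetic described above (90 bugs removed before release, 10 reported by clients in the first 90 days) can be sketched directly. The 86.1% overall DRE figure in the second computation is derived from the 5.75 per-FP defect potential and the 0.80 delivered defects per FP cited in the text.

```python
# Sketch of the DRE mathematics: percentage of all defects found through the
# 90-day measurement window that were removed before release.

def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
    """DRE as a percentage of total defects found through the measurement window."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

print(defect_removal_efficiency(90, 10))  # 90.0

# Overall DRE implied by a 5.75 per-FP defect potential and 0.80 delivered
# defects per FP (both figures from the text):
print(round(100 * (5.75 - 0.80) / 5.75, 1))  # 86.1
```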
This is proven by the percentages of delivered defects attributable to each kind of defect origin. It also shows that the older definition of quality as conformance to requirements has serious flaws when requirements are a major contributor to overall delivered defect volumes. The bottom line is that requirements, architecture, and design defects are resistant to testing, and therefore pre-test inspections of requirements and design documents should be used for all major software projects. Testing is efficient against coding defects, of course, but testing is not efficient in finding requirements, architecture, and design defects, so additional methods need to be used prior to release.

Because this paper discusses a great many metrics and measurement topics, it is useful to summarize overall results by selecting the 10 best metrics for software economic analysis and then showing the 10 worst metrics for economic topics:

Table 6: The 10 Best Metrics for Software Economic Analysis (alphabetical order)

1. Activity-based costs
2. Cost drivers
3. Cost of quality (COQ)
4. Defect potentials
5. Defect removal efficiency (DRE)
6. Function points
7. Occupation groups
8. Total cost of ownership (TCO)
9. User costs
10. Work hours per function point

The combination of metrics in table 6 will show software productivity without violating standard economic assumptions and will also show software quality, including the economic value of high quality. We turn now to the set of the 10 worst metrics for software economic studies, some of which have been in use without analysis of their flaws for more than 50 years.

Table 7: The 10 Worst Metrics for Software Economic Analysis (alphabetical order)

1. Backfiring
2. Cost per defect
3. DCUT
4. Leaky historical data
5. Lines of code (logical)
6. Lines of code (physical)
7. Phase-based costs
8. Story points
9. Technical debt
10. Total project costs with no details

The combination of metrics in table 7 will show reversed productivity that violates the rules of standard economics and will not show quality at all. In fact the true value of quality will be concealed, and poor quality will look better than high quality. Technical debt is included because it covers only about 17% of the total costs of poor quality. Story points are included because they are not standardized and vary by more than 400% from company to company; also, they are specific to agile projects with user stories and can't be used for projects that don't utilize user stories. Phase-based metrics are included because they cannot be validated. DCUT and leaky historical data are included because they make productivity and quality look better than they really are. Backfiring is included because it varies by over 50% in both directions from average values.

Six Urgent Needs For Software Engineering

Software is a major industry, but not yet a full profession with consistent excellence in results. Indeed quality lags far behind what is needed. Software engineering has an urgent need for six significant accomplishments:

1. Stop measuring with unreliable metrics such as LOC and cost per defect, and begin to move towards activity-based costs, function point metrics, and defect removal efficiency (DRE) metrics.

2. Start every project with formal early sizing that includes requirements creep, formal risk analysis, formal cost and quality predictions using parametric estimation tools, and requirements methods that will minimize toxic requirements and excessive requirements creep later on.

3. Raise defect removal efficiency from below 90% to more than 99.5% across the board. This will also shorten development schedules and lower costs. It can't be done by testing alone but needs a synergistic combination of pre-test inspections, static analysis, and formal testing using mathematically designed test cases and certified test personnel.

4. Lower defect potentials from above 4.00 per function point to below 2.00 per function point for the sum of bugs in requirements, design, code, documents, and bad-fix injections.
This can only be done by increasing the volume of reusable materials, combined with much better quality measures than today.

5. Increase the volume of reusable materials from below 15% to more than 85% as rapidly as possible. Custom designs and hand coding are intrinsically expensive and error-prone. Only the use of certified reusable materials that approach zero defects can lead to industrial-strength software applications that can operate without excessive failure and without causing high levels of consequential damages to clients.

6. Increase the immunity of software to cyber attacks. This must go beyond normal firewalls and anti-virus packages and include permanent changes in software permissions, and probably in the von Neumann architecture as well. There are proven methods that can do this, but they are not yet deployed. Cyber attacks are a growing threat to all governments, businesses, and also to individual citizens whose bank accounts and other valuable stored data are at increasing risk.

The remainder of this paper discusses a variety of software metrics and measurement methods in alphabetical order.

ALPHABETICAL DISCUSSION OF SOFTWARE METRICS AND MEASURES

This paper is a work in progress and additional metrics will be added from time to time.

15 Common Software Risks (alphabetical order)

Note the common risks associated with software measurement and metrics issues, as indicated by triple asterisks (***):

1. Cancelled projects ***
2. Consequential damages to clients
3. Cost overruns ***
4. Cyber attacks
5. Estimate errors or rejection of accurate estimates ***
6. Impossible demands by clients or management ***
7. Litigation for breach of contract ***
8. Litigation for patent violation
9. Poor change control
10. Poor measurement after completion ***
11. Poor quality control ***
12. Poor tracking during development ***
13. Requirements creep
14. Toxic requirements and requirements errors
15. Schedule slips by > 25% ***

Nine of these 15 common risks involve numeric information, with errors in estimates or measures as contributing factors. Many software project risks could be minimized or avoided by formal parametric estimates, formal risk analysis prior to starting, accurate status tracking, and accurate benchmarks from similar projects.

20 Criteria for Software Metrics Selection

Software metrics are created by ad hoc methods, often by amateurs, and broadcast to the world with little or no validation or empirical results. This set of 20 criteria shows the features that effective software metrics should have as attributes:

1. Be validated before release to the world

2. Be standardized, preferably by ISO
3. Be unambiguous
4. Be consistent from project to project
5. Be cost effective and have automated support
6. Be useful for both predictions and measurements
7. Have formal training for new practitioners
8. Have a formal user association
9. Have ample and accurate published data
10. Have conversion rules for other metrics
11. Support both development and maintenance
12. Support all activities (requirements, design, code, test, etc.)
13. Support all software deliverables (documents, code, tests, etc.)
14. Support all sizes of software from small changes through major systems
15. Support all classes and types of software (embedded, systems, web, etc.)
16. Support both quality and productivity measures and estimates
17. Support requirements creep over time
18. Support consumption and usage of software as well as construction
19. Support new projects, enhancement projects, and maintenance projects
20. Support new technologies as they appear (languages, cloud, methodologies, etc.)

Currently IFPUG function point metrics meet 19 of these 20 criteria. Function points are somewhat slow and costly to count, so criterion 5 is not fully met. Other function point variations such as COSMIC, NESMA, FISMA, unadjusted, engineering function points, feature points, etc. vary in how many criteria they meet, but most meet more than 15 of the 20 criteria.

The older lines of code metric meets only criterion 5 and none of the others. LOC metrics are fast and cheap but otherwise fail to meet the other 19 criteria. The LOC metric makes requirements and design invisible and penalizes high-level languages. The cost per defect metric does not actually meet any of the 20 criteria, and also does not address the value of high quality in achieving shorter schedules and lower costs. The technical debt metric does not currently meet any of the 20 criteria, although it is such a new metric that it will probably be able to meet some of the criteria in the future. Technical debt has a large and growing literature, but it does not actually meet criterion 9 because the literature resembles the blind men and the elephant, with various authors using different definitions of technical debt. Technical debt comes close to meeting criteria 14 and 15.

The story point metric for agile projects seems to meet five criteria (numbers 6, 14, 16, 17, and 18) but varies so widely and is so inconsistent that it cannot be used across companies, and it certainly can't be used without user stories. The use-case metric seems to meet criteria 5, 6, 9, 11, 14, and 15 but can't be used to compare data from projects that don't utilize use cases.

This set of metric criteria is a useful guide for selecting metrics that are likely to produce results that match standard economics and do not distort reality, as do so many current software metrics.

Abeyant defects

The term abeyant defect originated in IBM in the late 1960s. It refers to an unusual kind of bug that is unique to a single client and a single configuration and does not occur anywhere else. In fact the change team tasked with fixing the bug may not be able to reproduce it. Abeyant defects are both rare and extremely troublesome when they occur. It is usually necessary to send a quality expert to the client site to find out what unique combination of hardware and software led to the abeyant defect. Some abeyant defects have taken more than two weeks to identify and repair. In today's world of software with millions of users and spotty technical support, some abeyant defects may never be fixed.

Activity-based costs

The term activity is defined as the sum total of the work required to produce a major deliverable such as a requirements document or source code. The number of activities associated with software projects ranges from a low of three (design, code, test) to more than 40. Several parametric estimation tools such as Software Risk Master (SRM) predict activity costs. A typical pattern of software activities for a mid-sized software project of 1,000 function points in size might include these seven: 1) requirements; 2) design; 3) coding; 4) testing; 5) quality assurance; 6) user documentation; 7) project management. One of the virtues of function point metrics is that they can show productivity rates for every known activity, as illustrated by table 8, which is an example for a generic project of 1,000 function points in size:

Table 8: Example of Activity-Based Effort

Columns: Activities; Work Hours per FP.
Rows: Requirements, Design, Coding, Testing, Documentation, Quality Assurance, Management, and a Totals row.
[The numeric values of Table 8 did not survive transcription.]
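Activity-based effort accounting in the style of Table 8 can be sketched as follows. The seven activities come from the text; the hours-per-function-point values below are invented for illustration only and are not Jones's measured data.

```python
# Hypothetical sketch of activity-based effort accounting. The activity list
# matches the text; the per-FP hours are assumed values, not measured data.

ACTIVITY_HOURS_PER_FP = {
    "requirements": 1.0,
    "design": 1.5,
    "coding": 4.0,
    "testing": 3.0,
    "documentation": 1.0,
    "quality assurance": 0.5,
    "management": 1.5,
}

def total_work_hours(function_points: int) -> float:
    """Sum activity-level effort for a project of the given size in FP."""
    return function_points * sum(ACTIVITY_HOURS_PER_FP.values())

for activity, hours in ACTIVITY_HOURS_PER_FP.items():
    print(f"{activity:18s} {hours:5.1f} work hours per FP")
print(total_work_hours(1_000))  # 12500.0
```

Because every activity is normalized to the same function point denominator, coding and non-coding work can be totaled and compared on one scale, which is the property the surrounding text attributes to activity-based costing.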

The ability to show productivity for each activity is a virtue of function point metrics and is not possible with many older metrics such as lines of code and cost per defect. To conserve space table 8 shows only seven activities, but this same form of representation can be extended to more than 40 activities and more than 250 tasks. Function points are the only available metric in 2014 that allows both activity-level and task-level analysis of software projects. LOC metrics cannot show non-code work at all. Story points might show activities, but only for agile projects and not for other forms of software. Use-case points cannot be used at all unless the project utilizes use cases. Only function point metrics are methodology neutral and applicable to all known software activities and tasks.

Accuracy

The topic of accuracy is often applied to questions such as the accuracy of estimates compared to historical data. However, it should also be applied to the question of how accurate the historical data itself is. As discussed in the section on leaky historical data, what is called historical data is often less than 50% complete and omits major tasks and activities such as unpaid overtime, project management, and the work of part-time specialists such as technical writers. There is little empirical data available on the accuracy of a host of important software topics, including (in alphabetical order) costs, customer satisfaction, defects, development effort, maintainability, maintenance effort, reliability, schedules, size, staffing, and usability. Proponents of various function point metrics (COSMIC, FISMA, NESMA, etc.) frequently assert that their specific counting method is more accurate than rival function point methods such as those of the International Function Point Users Group (IFPUG). These are unproven assertions, and also irrelevant in an industry where historical data includes only about 37% of the true costs of software development.
As a general rule, better accuracy is needed for every software metric without exception.

Agile metrics

The agile development approach has created an interesting and unique set of metrics that are used primarily by the agile community. Other metrics such as function points and defect removal efficiency (DRE) work with agile projects too, and are needed if agile is to be compared to other methods such as RUP and TSP, because the agile metrics themselves are not very useful for cross-method comparisons. The agile approach of dividing larger applications into small discrete sprints adds challenge to overall data collection. Some common agile metrics include burn down, burn up, story points, and velocity. This is a complex topic and one still in evolution, so a Google search on agile metrics will turn up many alternatives. The method used by the author for comparison between agile and other methods is to convert story points and other agile metrics into function points, and to convert the effort from various sprints into a standard chart of accounts showing requirements, design, coding, testing, etc. for all sprints in aggregate form. This allows side-by-side comparisons between agile projects and other methods such as the Rational Unified Process (RUP), Team Software Process (TSP), waterfall, iterative, and many others.

Analysis of Variance (ANOVA)

Analysis of variance is a collection of statistical methods for analyzing the ranges of outcomes from groups of related factors. ANOVA might be applied to the schedules of a sample of 100 software projects of the same size and type, or to the delivered defect volumes of the same sample. There are textbooks and statistical tools available that explain and support analysis of variance. ANOVA is related to design of experiments, and particularly to the design of well-formed experiments. Variance and variations are major elements of both software estimating and software measurement.

Annual Reports

As all readers know, public companies are required to produce annual reports for shareholders. These reports discuss costs, profits, business expansion or contraction, and other vital topics. Some sophisticated corporations also produce annual software reports on the same schedule as corporate annual reports; i.e. in the first quarter of a fiscal year, showing results for the prior fiscal year. The author has produced such reports, and they are valuable in explaining to senior management at the CFO and CEO level what kind of progress in software occurred in the past fiscal year. Some of the topics included in these annual reports are software demographics such as numbers of software personnel by job and occupation group; numbers of customers supported by the software organizations; productivity for the prior year and current-year targets; quality for the prior year and current-year targets; customer satisfaction, reliability levels, and other relevant topics such as the mix of COTS packages, open source packages, and internal development. Also included would be modern issues such as cyber attacks and any software-related litigation. Really sophisticated companies might also include topics such as the number of software patents filed in the prior year.
Application size ranges

Because function point metrics circa 2014 are somewhat expensive to count manually, they have not been used on really large systems above 10,000 function points in size. As a result the software literature is biased towards small applications and has little data on the world's largest systems. Among the set of really large systems can be found the world-wide military command and control system (WWMCCS) at about 300,000 function points; major ERP packages such as SAP and Oracle at about 250,000 function points; large operating systems from IBM and Microsoft at about 150,000 function points; large information systems such as airline reservations at about 100,000 function points; and dozens of heavy-duty systems software applications such as central-office switching systems at about 25,000 function points in size. These sizes were derived from backfiring, which is discussed later in this report.

The global distribution of applications by size is approximately the following:

100,000 function points and above     1%
10,000 to 100,000 function points     5%
1,000 to 10,000 function points      15%
100 to 1,000 function points         45%
Below 100 function points            34%

[Two charts showing the size ranges of various types of large systems, and of smaller applications on a reduced scale, did not survive transcription.]

Small projects are far more numerous than large systems. Large systems are far more expensive and more troublesome than small projects. Coincidentally, agile development is a good choice below 1,000 function points. TSP and RUP are good choices above 1,000 function points. So far agile has not scaled up well to really large systems above 10,000 function points, but TSP and RUP do well in this zone.

Size is not constant either before release or afterwards. So long as there are active users, applications grow continuously. During development the measured rate is 1% to 2% per calendar month; after release the measured rate is 8% to 15% per calendar year. Over a 10-year period, a typical post-release growth pattern for a mission-critical departmental system starting at 15,000 function points might resemble the following:

Two major system releases of:    2,000 function points
Two minor system releases of:      500 function points
Four major enhancements of:        250 function points
Ten minor enhancements of:          50 function points

Total growth for 10 years:       6,500 function points
System size after 10 years:     21,500 function points
Ten-year total growth percent:   43%

As can be seen, software applications are never static if they have active users. This continuous growth is important to predict before starting and to measure at the end of each calendar or fiscal year. The cumulative information on original development, maintenance, and enhancement is called total cost of ownership, or TCO. Predicting TCO is a standard estimation feature of Software Risk Master, which also predicts growth rates before and after release.

Appraisal metrics

Many major corporations have annual appraisals of both technical and managerial personnel.
Normally the appraisals are given by an employee's immediate manager but often include comments from other managers. Appraisal data is highly confidential and in theory not used for any purpose other than compensation adjustments or occasionally for terminations for cause. One interesting sociological issue has been noted from a review of appraisal results in a Fortune 500 company: technical personnel with the highest appraisal scores tend to leave jobs more frequently than those with lower scores. The most common reason for leaving is "I don't like working for bad management." Indirect observation supports the hypothesis that teams with high appraisal scores outperform teams with low appraisal scores. Some companies such as Microsoft try to force-fit appraisal scores into patterns; i.e., only a certain low percentage of employees can be ranked as excellent. While the idea is to prevent appraisal score creep (assessing many more people as excellent than is truly warranted), the force-fit method tends to lower morale and lead to voluntary turnover by employees who feel wrongly appraised. In some countries, and in companies whose software personnel are union members, it may be illegal to have appraisals. The topic of appraisal scores and their impact on quality and productivity needs additional study, but of necessity studies involving appraisal scores would need to be highly confidential and covered by non-disclosure agreements. The bottom line is that appraisals are a good source of data on experience and knowledge, and it would be useful to the industry to have better empirical data on these important topics.

Assessment

The term assessment in a software context has come to mean a formal evaluation of key practice areas covering topics such as requirements, quality, measures, etc. In the defense sector the assessment method developed by Watts Humphrey, Bill Curtis, and colleagues at the Software Engineering Institute (SEI) is the most common. One byproduct of SEI assessments is placing organizations on a five-point scale called the Capability Maturity Model Integrated (CMMI). However, the SEI is neither unique nor the oldest organization performing software assessments. The author's former company, Software Productivity Research (SPR), was doing combined assessment and benchmark studies in 1984, a year before the SEI was first incorporated. There is also a popular assessment method in Europe called TickIT. Several former officers of SPR now have companies that provide both assessment and benchmark data collection. These include, in alphabetical order, the David Consulting Group, Namcook Analytics LLC, and the Quality/Productivity Measurement group. SPR itself continues to provide assessments and benchmarks as well. Assessments are generally useful because most companies need impartial outside analysis by trained experts to find out their software strengths and weaknesses.

Assignment scope

The term assignment scope refers to the amount of a specific deliverable that is normally assigned to one person. The metrics used for assignment scope can be either natural metrics such as pages of a manual or synthetic metrics such as function points. Common examples of assignment scopes include code volumes, test case construction, documentation pages, the number of customers supported by one phone agent, and the amount of source code assigned to maintenance personnel. Assignment scope metrics and production rate metrics are used in software estimation tools. Assignment scopes are discussed in several of the author's books, including Applied Software Measurement and Estimating Software Costs.
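Assignment scope and production rate metrics feed directly into estimation arithmetic. The following is a minimal sketch of how the two metrics combine into staffing, effort, and schedule figures; all numeric values are invented for illustration, not calibrated benchmarks:

```python
# Sketch of how assignment scope and production rate drive an estimate.
# All values are illustrative, not calibrated benchmark data.
def estimate(size_fp, assignment_scope_fp, production_rate_fp_per_month):
    staff = size_fp / assignment_scope_fp                    # people needed
    effort_months = size_fp / production_rate_fp_per_month   # person-months
    schedule_months = effort_months / staff                  # idealized calendar time
    return staff, effort_months, schedule_months

# Example: a 1,500 FP application where each person is assigned 150 FP
# and produces 10 FP per staff month (illustrative values).
staff, effort, schedule = estimate(1500, 150, 10)
print(staff, effort, schedule)  # 10 people, 150 person-months, 15 months
```

The same arithmetic works with natural metrics: replace function points with pages of documentation or test cases and the assignment scope and production rate change units accordingly.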
Attrition measures

As we all know, personnel change jobs frequently. During the high-growth period of software engineering in the 1970s, most software engineers held as many as five jobs at five different companies. In today's weak economy job hopping is less common. In any case, most corporations measure annual personnel attrition rates by job title. Examination of exit interviews shows that top personnel leave more often than average personnel, and do so because they don't like working for bad management. For software engineers, technical challenge and capable colleagues tend to be larger factors in attrition than compensation.

Automatic function point counting

The Object Management Group (OMG) has published a standard for automatic function point counting. This standard is supported by an automated tool from CAST Software. A similar tool has been demonstrated by Relativity Technologies. Both tools use mathematical approaches and can generate size from source code or, in the CAST tool, from UML diagrams. Neither tool has published data on the speed of counting or on the accuracy of the counts compared to normal manual function point analysis.

The author of this paper has filed a U.S. utility patent application on a method of high-speed early sizing that can produce function point sizes for applications between 10 and 300,000 function points in about 1.8 minutes, regardless of the actual application size. The tool operates via pattern matching. Using a formal taxonomy of application nature, scope, class, type, and complexity, the tool derives function point size from historical data of projects that share the same taxonomy pattern. The author's sizing tool is included in the Software Risk Master (SRM) tool and is also available for demonstration purposes on the Namcook Analytics LLC web site. The author's tool produces size in a total of 23 metrics including function points, story points, use-case points, physical and logical source code size, and others.

Backfiring

In the early 1970s IBM became aware that lines of code metrics had serious flaws as a productivity metric, since they penalized modern languages and made non-coding work invisible. Allan Albrecht and colleagues at IBM White Plains began development of function points. They had available hundreds of IBM applications with accurate counts of logical code statements. As the function point metric was being tested, it was noted that each language had a characteristic level, or number of code statements per function point. COBOL, for example, had a characteristic level in its procedure and data divisions (the observed range is discussed below), while basic assembly language averaged about 320 statements per function point.
These observations led to a concept called backfiring, or mathematical conversion between older lines of code data and newer function points. However, due to variances in programming styles, there were ranges of more than two to one in both directions; COBOL varied from about 50 statements per function point to more than 175 statements per function point. Backfiring was not accurate, but it was easy to do and soon became a common sizing method for legacy applications where code already existed. Today in 2014 several companies such as Gartner Group, QSM, and Software Productivity Research (SPR) sell commercial tables of conversion rates for more than 1,000 programming languages. Interestingly, the values among these tables are not always the same for specific languages. Backfiring remains popular in spite of its low accuracy for specific applications and languages.

Bad fix injections

Some years ago IBM discovered that about 7% of attempts to fix software bugs contained new bugs in the repairs themselves. These were termed bad fixes. In extreme cases, such as modules with very high cyclomatic complexity, bad fix injections can top 25%. This brings up the point that repairs to software are themselves sources of error. Therefore static analysis, inspections, and regression testing are needed for all significant defect repairs. Bad fix injections were first identified in the 1970s. They are discussed in the author's book The Economics of Software Quality.
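The backfiring arithmetic described above is a simple division by a per-language conversion level. The sketch below uses illustrative conversion values; as the text notes, published vendor tables differ, and real per-project values can vary by more than two to one:

```python
# Backfiring sketch: converting logical source statements to approximate
# function points via per-language "level" tables. The conversion values
# below are illustrative; commercial tables vary by vendor and the real
# range for any one language is wide (e.g., COBOL roughly 50 to 175).
STATEMENTS_PER_FP = {
    "basic assembly": 320,   # cited in the text above
    "cobol": 107,            # illustrative midpoint of the observed range
    "java": 53,              # illustrative
}

def backfire(logical_statements, language):
    """Approximate function point size from a logical statement count."""
    return logical_statements / STATEMENTS_PER_FP[language.lower()]

# A 64,000-statement basic assembly application backfires to about
# 200 function points.
print(round(backfire(64000, "basic assembly")))  # 200
```

The division explains why backfiring penalizes nothing but also guarantees nothing: the answer is only as good as the assumed conversion level for that particular code base.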

Bad test cases

A study of regression test libraries by IBM in the 1970s found that about 15% of test cases contained errors. (The same study also found about 20% duplicate test cases that tested the same topics without adding any value.) This topic is severely under-reported in the quality and test literature. Test cases that themselves contain errors add to testing costs but do not add to testing thoroughness.

Balanced scorecard

Art Schneiderman, Robert Kaplan, and David Norton (formerly of Nolan and Norton) originated the balanced scorecard concept as known today, although there were precursors. The book The Balanced Scorecard by Kaplan and Norton made it popular, and it is now widely used for both software and non-software purposes. A balanced scorecard comprises four views and related metrics: the financial perspective, the customer or stakeholder perspective, the internal business process perspective, and the learning and growth perspective. The balanced scorecard is not just a retroactive set of measures; it also includes proactive forward planning and strategy approaches. Although balanced scorecards might be used by software organizations, they are most commonly used at higher corporate levels where software, hardware, and other business factors need integration.

Baselines

For software process improvement, a baseline is a measurement of quality and productivity at the current moment, before the improvement program begins. As the improvement program moves through time, additional productivity and quality data collections will show rates of progress. Baselines may also have contract implications if an outsource vendor tenders an offer to provide development or maintenance services cheaper and faster than the current rates. In general the same kinds of data are collected for both baselines and benchmarks, which are discussed later in this paper.

Bayesian analysis

Bayesian analysis is named after the English mathematician Thomas Bayes from the 18th century. Its purpose in general is to use historical data and observations to derive the odds of occurrences or events. In 1999 a doctoral student at the University of Southern California, Sunita Devnani-Chulani, applied Bayesian analysis to software cost estimating methods such as Checkpoint (designed by the author of this paper), COCOMO, SEER, SLIM, and some others. This was an interesting study. In any case, Bayesian analysis is useful in combining prior data points with hypotheses about future outcomes.
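Combining prior data points with hypotheses about future outcomes, as described above, reduces to a single application of Bayes' rule. The sketch below shows the mechanics; every probability in it is invented purely for illustration:

```python
# Bayes' rule sketch, in the spirit of the Bayesian analysis entry above.
# All probabilities below are invented for illustration.
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    # P(H | E) = P(E | H) * P(H) / P(E)
    p_evidence = (p_evidence_given_h * prior +
                  p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Prior: 30% of similar historical projects overran their schedules.
# Evidence: this project failed its design review; suppose overrunning
# projects fail reviews 80% of the time, on-time projects 20% of the time.
posterior = bayes_update(prior=0.30,
                         p_evidence_given_h=0.80,
                         p_evidence_given_not_h=0.20)
print(round(posterior, 3))  # 0.632
```

One failed review more than doubles the estimated odds of an overrun, which is exactly the kind of prior-plus-evidence reasoning that Bayesian cost estimation research applies to models such as COCOMO.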

Benchmarks

The term benchmark is much older than software; it originally applied to chiseled marks in stones used by surveyors for leveling rods. Since then the term has become generalized, and as of 2014 well over 500 different forms of benchmark are used in almost every industry. Major corporations have been observed to use more than 60 benchmarks, including attrition rates, compensation by occupation, customer satisfaction, market shares, quality, productivity, and many more. Total costs for benchmarks can top $5,000,000 per year, but these costs are scattered among many operating units, so benchmark costs are seldom consolidated. In this paper a narrower form of benchmark is relevant, one that deals specifically with software development productivity and sometimes with software quality. As this paper is written in 2014 there are more than 25 organizations that provide software benchmark services. Among these can be found the International Software Benchmarking Standards Group (ISBSG), Namcook Analytics (the author's company), the Quality and Productivity Management Group, Quantimetrics, Reifer Associates, Software Productivity Research (SPR), and many more. The data provided by these various benchmark organizations varies, of course, but tends to concentrate on software development results. Function point metrics are most widely used for software benchmarks, but other metrics such as lines of code also occur. Benchmark data can either be self-reported by clients of benchmark groups or collected in on-site or remote meetings with clients. The on-site or remote collection of benchmark data by commercial benchmark groups allows known errors, such as failure to record unpaid overtime, to be corrected; this may not occur with self-reported benchmark data.

Breach of contract litigation

The author has worked as an expert witness in more than a dozen software breach of contract cases.
These cases concern either projects that were terminated without being delivered, or projects that were delivered but failed to work, or at least failed to work well. The main kinds of data collected during breach of contract cases center on quality and requirements creep, both of which are common issues in this litigation. Common problems noted during these cases that are relevant to software metrics include: 1) poor estimates prior to starting; 2) poor quality control during development; 3) poor change control during development; and 4) very poor and sometimes misleading status tracking during development. About 5% of outsource contracts seem to end up in court. Litigation is expensive, and the costs can easily top $5,000,000 for both the plaintiff and the defendant. It is an interesting phenomenon that all of the cases but one where the author served as an expert witness involved major systems larger than 10,000 function points. It is unfortunate that neither the costs of canceled projects nor the costs of breach of contract litigation are currently included in the metric of technical debt, which is discussed later in this report.

Bug

One of the legends of software engineering is that the term bug first referred to an actual insect that had jammed a relay in an electromechanical computer. The term bug has since come to mean any form of defect in either code or other deliverables. Bug reports during development and after release are standard software measures. See also defect later in this paper. There is a pedantic discussion among academics about the differences between failures, faults, defects, and bugs, but common definitions are more widely used than academic nuances.

Burden rates

Software cost structures are divided into two main categories: the costs of salaries and the costs of overhead, commonly called the burden rate. Salary costs are obvious and include the hourly or monthly salaries of software personnel. Burden rates are not at all obvious and vary from industry to industry, from company to company, and from country to country. In the United States some of the normal components of burden rates include insurance, office space, computers and equipment, telephone service, taxes, unemployment, and a variety of other fees and local taxes. Burden rates can vary from a low of about 25% of monthly salary costs to a high of over 100% of salary costs. Some industries such as banking and finance have very high burden rates; other industries such as manufacturing and agriculture have lower burden rates. But the specifics of burden rates need to be examined for each company in the specific locations where the company does business.

Burn down

Although this metric can be used with any method, it is most popular with agile projects. The burn down rate is normally expressed graphically by showing the amount of work remaining compared to the amount of time available to complete the work. Burn down is somewhat similar in concept to earned value. A variety of commercial and open-source tools can produce burn down charts. See also the next topic of burn up.
The work can be expressed in terms of user stories or natural deliverables such as pages of documentation or source code.

Burn up

This form of chart can also be used with any method but is most popular with agile projects. Burn up charts show the amount of work already completed compared to the backlog of uncompleted work. Burn down charts, just discussed, show uncompleted work and time remaining. Here too, a variety of commercial and open-source tools can produce the charts. The work completed can be measured in stories or natural metrics.
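The data series behind burn down and burn up charts are simple running totals over the same per-iteration completion numbers. A minimal sketch, with an invented backlog and invented sprint results:

```python
# Sketch of the data behind burn down and burn up charts, using story
# points as the unit. The scope and sprint results are invented.
total_scope = 120                                    # story points planned
completed_per_iteration = [15, 18, 12, 20, 16, 14]   # work finished each sprint

burn_up = []     # cumulative work completed (burn up chart)
burn_down = []   # work remaining (burn down chart)
done = 0
for completed in completed_per_iteration:
    done += completed
    burn_up.append(done)
    burn_down.append(total_scope - done)

print(burn_up)    # [15, 33, 45, 65, 81, 95]
print(burn_down)  # [105, 87, 75, 55, 39, 25]
```

Plotting burn_down against the iteration number and extending its trend line to zero gives the projected completion date; burn_up plotted against a (possibly growing) total_scope line shows scope creep as well as progress.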


More information

An Analysis of Hybrid Tool Estimator: An Integration of Risk with Software Estimation

An Analysis of Hybrid Tool Estimator: An Integration of Risk with Software Estimation Journal of Computer Science 7 (11): 1679-1684, 2011 ISSN 1549-3636 2011 Science Publications An Analysis of Hybrid Tool Estimator: An Integration of Risk with Software Estimation 1 J. Frank Vijay and 2

More information

Summary of GAO Cost Estimate Development Best Practices and GAO Cost Estimate Audit Criteria

Summary of GAO Cost Estimate Development Best Practices and GAO Cost Estimate Audit Criteria Characteristic Best Practice Estimate Package Component / GAO Audit Criteria Comprehensive Step 2: Develop the estimating plan Documented in BOE or Separate Appendix to BOE. An analytic approach to cost

More information

Enabling Continuous Delivery by Leveraging the Deployment Pipeline

Enabling Continuous Delivery by Leveraging the Deployment Pipeline Enabling Continuous Delivery by Leveraging the Deployment Pipeline Jason Carter Principal (972) 689-6402 Jason.carter@parivedasolutions.com Pariveda Solutions, Inc. Dallas,TX Table of Contents Matching

More information

Measures to get the best performance from your software suppliers

Measures to get the best performance from your software suppliers Measures to get the best performance from your software suppliers Charles Symons Founder & Past President, COSMIC 8 th November, 2012 1 ITMPI005 COSMIC COSMIC is a not-for-profit organization, founded

More information

What do you think? Definitions of Quality

What do you think? Definitions of Quality What do you think? What is your definition of Quality? Would you recognise good quality bad quality Does quality simple apply to a products or does it apply to services as well? Does any company epitomise

More information

Making the Most of the Software Development Process

Making the Most of the Software Development Process Making the Most of the Software Development Process Dr Graham Stone, Dunstan Thomas Consulting http://consulting.dthomas.co.uk Organisations are under increased pressure to look at development initiatives

More information

How Small Businesses Can Use a Cycle-Count Program to Control Inventory

How Small Businesses Can Use a Cycle-Count Program to Control Inventory A-02 Janet Steiner Soukup, CPIM How Small Businesses Can Use a Cycle-Count Program to Control Inventory INTRODUCTION In today s economic environment, it is more critical than ever that small businesses

More information

Understanding the Financial Value of Data Quality Improvement

Understanding the Financial Value of Data Quality Improvement Understanding the Financial Value of Data Quality Improvement Prepared by: David Loshin Knowledge Integrity, Inc. January, 2011 Sponsored by: 2011 Knowledge Integrity, Inc. 1 Introduction Despite the many

More information

BUYER S GUIDE. The Unified Communications Buyer s Guide: Four Steps to Prepare for the Modern, Mobile Workforce

BUYER S GUIDE. The Unified Communications Buyer s Guide: Four Steps to Prepare for the Modern, Mobile Workforce BUYER S GUIDE The Unified Communications Buyer s Guide: Four Steps to Prepare for the Modern, Mobile Workforce Not all that long ago, the word office had a pretty straightforward meaning. When you heard

More information

Technology. Accenture Application Testing Services. Embedding quality into the application development life cycle

Technology. Accenture Application Testing Services. Embedding quality into the application development life cycle Technology Accenture Application Testing Services Embedding quality into the application development life cycle 1 Quality First for Better Outcomes IT costs are continuing to climb. Technology is getting

More information

Your Software Quality is Our Business. INDEPENDENT VERIFICATION AND VALIDATION (IV&V) WHITE PAPER Prepared by Adnet, Inc.

Your Software Quality is Our Business. INDEPENDENT VERIFICATION AND VALIDATION (IV&V) WHITE PAPER Prepared by Adnet, Inc. INDEPENDENT VERIFICATION AND VALIDATION (IV&V) WHITE PAPER Prepared by Adnet, Inc. February 2013 1 Executive Summary Adnet is pleased to provide this white paper, describing our approach to performing

More information

Driving Quality Improvement and Reducing Technical Debt with the Definition of Done

Driving Quality Improvement and Reducing Technical Debt with the Definition of Done Driving Quality Improvement and Reducing Technical Debt with the Definition of Done Noopur Davis Principal, Davis Systems Pittsburgh, PA NDavis@DavisSys.com Abstract This paper describes our experiences

More information

A Design Technique: Data Integration Modeling

A Design Technique: Data Integration Modeling C H A P T E R 3 A Design Technique: Integration ing This chapter focuses on a new design technique for the analysis and design of data integration processes. This technique uses a graphical process modeling

More information

A business intelligence agenda for midsize organizations: Six strategies for success

A business intelligence agenda for midsize organizations: Six strategies for success IBM Software Business Analytics IBM Cognos Business Intelligence A business intelligence agenda for midsize organizations: Six strategies for success A business intelligence agenda for midsize organizations:

More information

Center for Effective Organizations

Center for Effective Organizations Center for Effective Organizations HR METRICS AND ANALYTICS USES AND IMPACTS CEO PUBLICATION G 04-8 (460) EDWARD E. LAWLER III ALEC LEVENSON JOHN BOUDREAU Center for Effective Organizations Marshall School

More information

SEER for Software - Going Beyond Out of the Box. David DeWitt Director of Software and IT Consulting

SEER for Software - Going Beyond Out of the Box. David DeWitt Director of Software and IT Consulting SEER for Software - Going Beyond Out of the Box David DeWitt Director of Software and IT Consulting SEER for Software is considered by a large percentage of the estimation community to be the Gold Standard

More information

Introduction to Function Points www.davidconsultinggroup.com

Introduction to Function Points www.davidconsultinggroup.com By Sheila P. Dennis and David Garmus, David Consulting Group IBM first introduced the Function Point (FP) metric in 1978 [1]. Function Point counting has evolved into the most flexible standard of software

More information

Managing Successful Software Development Projects Mike Thibado 12/28/05

Managing Successful Software Development Projects Mike Thibado 12/28/05 Managing Successful Software Development Projects Mike Thibado 12/28/05 Copyright 2006, Ambient Consulting Table of Contents EXECUTIVE OVERVIEW...3 STATEMENT OF WORK DOCUMENT...4 REQUIREMENTS CHANGE PROCEDURE...5

More information

Implementing a Metrics Program MOUSE will help you

Implementing a Metrics Program MOUSE will help you Implementing a Metrics Program MOUSE will help you Ton Dekkers, Galorath tdekkers@galorath.com Just like an information system, a method, a technique, a tool or an approach is supporting the achievement

More information

An introduction to the benefits of Application Lifecycle Management

An introduction to the benefits of Application Lifecycle Management An introduction to the benefits of Application Lifecycle Management IKAN ALM increases team productivity, improves application quality, lowers the costs and speeds up the time-to-market of the entire application

More information

Applying 4+1 View Architecture with UML 2. White Paper

Applying 4+1 View Architecture with UML 2. White Paper Applying 4+1 View Architecture with UML 2 White Paper Copyright 2007 FCGSS, all rights reserved. www.fcgss.com Introduction Unified Modeling Language (UML) has been available since 1997, and UML 2 was

More information

Compliance Cost Associated with the Storage of Unstructured Information

Compliance Cost Associated with the Storage of Unstructured Information Compliance Cost Associated with the Storage of Unstructured Information Sponsored by Novell Independently conducted by Ponemon Institute LLC Publication Date: May 2011 Ponemon Institute Research Report

More information

Function Point: how to transform them in effort? This is the problem!

Function Point: how to transform them in effort? This is the problem! Function Point: how to transform them in effort? This is the problem! Gianfranco Lanza Abstract The need to estimate the effort and, consequently, the cost of a software project is one of the most important

More information

A DIFFERENT KIND OF PROJECT MANAGEMENT

A DIFFERENT KIND OF PROJECT MANAGEMENT SEER for Software SEER project estimation and management solutions improve success rates on complex software projects. Based on sophisticated modeling technology and extensive knowledge bases, SEER solutions

More information

Laws of Software Engineering Circa 2014. Version 7.0 February 17, 2014. Capers Jones, VP and CTO, Namcook Analytics LLC.

Laws of Software Engineering Circa 2014. Version 7.0 February 17, 2014. Capers Jones, VP and CTO, Namcook Analytics LLC. Laws of Software Engineering Circa 2014 Version 7.0 February 17, 2014 Capers Jones, VP and CTO, Namcook Analytics LLC. Copyright 2014 by Capers Jones. All rights reserved. Introduction Software development

More information

CUT COSTS, NOT PROJECTS

CUT COSTS, NOT PROJECTS CUT COSTS, NOT PROJECTS Understanding and Managing Software Development Costs A WEBINAR for State of Washington Agencies Critical Logic, Inc. July 9 2009 Starting at 3pm, Pacific Daylight Time Critical

More information

Deloitte and SuccessFactors Workforce Analytics & Planning for Federal Government

Deloitte and SuccessFactors Workforce Analytics & Planning for Federal Government Deloitte and SuccessFactors Workforce Analytics & Planning for Federal Government Introduction Introduction In today s Federal market, the effectiveness of human capital management directly impacts agencies

More information

The Stacks Approach. Why It s Time to Start Thinking About Enterprise Technology in Stacks

The Stacks Approach. Why It s Time to Start Thinking About Enterprise Technology in Stacks The Stacks Approach Why It s Time to Start Thinking About Enterprise Technology in Stacks CONTENTS Executive Summary Layer 1: Enterprise Competency Domains Layer 2: Platforms Layer 3: Enterprise Technology

More information

Realizing the Benefits of Professional Services Automation with the astest ROI

Realizing the Benefits of Professional Services Automation with the astest ROI Realizing the Benefits of Professional Services Automation with the astest ROI A white paper that analyzes how Internet Business Services use new technologies the Web, open source software, and the emergence

More information

COST ESTIMATING METHODOLOGY

COST ESTIMATING METHODOLOGY NCMA DINNER MEETING TRAINING COST ESTIMATING METHODOLOGY 1 David Maldonado COST ESTIMATING METHODOLOGY TABLE OF CONTENT I. Estimating Overview II. Functional Estimating Methods III. Estimating Methods

More information

A European manufacturer

A European manufacturer A European manufacturer Saving nearly USD19 million through improved workplace management Overview The need Balance the pressure for growth against aggressive cost reduction targets. The solution The organization

More information

How To Save Money On It (It)

How To Save Money On It (It) Overview Presenting the Executive Business Case for IT Service Management By Randy Steinberg RandyASteinberg@gmail.com The IT executive that cannot clearly articulate the services they deliver, the IT

More information

Monitor. Manage. Per form.

Monitor. Manage. Per form. IBM Software Business Analytics Cognos Business Intelligence Monitor. Manage. Per form. Scorecarding with IBM Cognos Business Intelligence 2 Monitor. Manage. Perform. Contents 2 Overview 3 Three common

More information

Predictive Intelligence: Identify Future Problems and Prevent Them from Happening BEST PRACTICES WHITE PAPER

Predictive Intelligence: Identify Future Problems and Prevent Them from Happening BEST PRACTICES WHITE PAPER Predictive Intelligence: Identify Future Problems and Prevent Them from Happening BEST PRACTICES WHITE PAPER Table of Contents Introduction...1 Business Challenge...1 A Solution: Predictive Intelligence...1

More information

Driving Your Business Forward with Application Life-cycle Management (ALM)

Driving Your Business Forward with Application Life-cycle Management (ALM) Driving Your Business Forward with Application Life-cycle Management (ALM) Published: August 2007 Executive Summary Business and technology executives, including CTOs, CIOs, and IT managers, are being

More information

Predictive Intelligence: Moving Beyond the Crystal Ball BEST PRACTICES WHITE PAPER

Predictive Intelligence: Moving Beyond the Crystal Ball BEST PRACTICES WHITE PAPER Predictive Intelligence: Moving Beyond the Crystal Ball BEST PRACTICES WHITE PAPER Table of Contents Introduction...1 Business Challenge...1 A Solution: Predictive Intelligence...1 > Dynamic Thresholding...2

More information

See What's Coming in Oracle Project Portfolio Management Cloud

See What's Coming in Oracle Project Portfolio Management Cloud See What's Coming in Oracle Project Portfolio Management Cloud Release 9 Release Content Document Table of Contents GRANTS MANAGEMENT... 4 Collaborate Socially on Awards Using Oracle Social Network...

More information

USmax methodology to Conduct Commercial off-the-shelf (COTS) Product Evaluation

USmax methodology to Conduct Commercial off-the-shelf (COTS) Product Evaluation USmax methodology to Conduct Commercial off-the-shelf (COTS) Product Evaluation Text Block WHITE PAPER Copyright 2010 USmax Corporation. All rights reserved. This document is provided for the intended

More information

IBM Tivoli Netcool network management solutions for enterprise

IBM Tivoli Netcool network management solutions for enterprise IBM Netcool network management solutions for enterprise The big picture view that focuses on optimizing complex enterprise environments Highlights Enhance network functions in support of business goals

More information

CONTENTS. As more and more organizations turn to agile development, the reality of what agile really is often gets obscured. Introduction...

CONTENTS. As more and more organizations turn to agile development, the reality of what agile really is often gets obscured. Introduction... CONTENTS Introduction...1 Myth #1: Agile Development is Undisciplined...2 Myth #2: Agile Teams Do Not Plan...2 Myth #3: Agile Development is Not Predictable...2 Myth #4: Agile Development Does Not Scale...4

More information

Using the Agile Methodology to Mitigate the Risks of Highly Adaptive Projects

Using the Agile Methodology to Mitigate the Risks of Highly Adaptive Projects Transdyne Corporation CMMI Implementations in Small & Medium Organizations Using the Agile Methodology to Mitigate the Risks of Highly Adaptive Projects Dana Roberson Quality Software Engineer NNSA Service

More information

Java Development Productivity and Quality Using Eclipse:

Java Development Productivity and Quality Using Eclipse: Java Development Productivity and Quality Using Eclipse: A Comparative Study of Commercial Eclipse-based IDEs The productivity benefits of using commercial Eclipse-based Java IDE products from IBM (IBM

More information

Application Outsourcing: The management challenge

Application Outsourcing: The management challenge White Paper Application Outsourcing: The management challenge Embedding software quality management for mutual benefit Many large organizations that rely on mainframe applications outsource the management

More information

Software Metrics. Lord Kelvin, a physicist. George Miller, a psychologist

Software Metrics. Lord Kelvin, a physicist. George Miller, a psychologist Software Metrics 1. Lord Kelvin, a physicist 2. George Miller, a psychologist Software Metrics Product vs. process Most metrics are indirect: No way to measure property directly or Final product does not

More information

Accenture and Software as a Service: Moving to the Cloud to Accelerate Business Value for High Performance

Accenture and Software as a Service: Moving to the Cloud to Accelerate Business Value for High Performance Accenture and Software as a Service: Moving to the Cloud to Accelerate Business Value for High Performance Is Your Organization Facing Any of These Challenges? Cost pressures; need to do more with the

More information

A Capability Maturity Model (CMM)

A Capability Maturity Model (CMM) Software Development Life Cycle (SDLC) and Development Methods There are some enterprises in which a careful disorderliness is the true method. Herman Melville Capability Maturity Model (CMM) A Capability

More information

Organization of Business Intelligence

Organization of Business Intelligence Organization of Business Intelligence The advantage gained by companies using competency centers to coordinate their business intelligence initiatives BARC Institute, Wurzburg, August 2008 Business Business

More information

Practical Metrics and Models for Return on Investment by David F. Rico

Practical Metrics and Models for Return on Investment by David F. Rico Practical Metrics and Models for Return on Investment by David F. Rico Abstract Return on investment or ROI is a widely used approach for measuring the value of a new and improved process or product technology.

More information

An ROI model for remote machine

An ROI model for remote machine An ROI model for remote machine monitor ing Based on field experience, a vending operator can expect to improve operating efficiencies, sales and profitability by introducing remote machine monitoring

More information

The Power of Risk, Compliance & Security Management in SAP S/4HANA

The Power of Risk, Compliance & Security Management in SAP S/4HANA The Power of Risk, Compliance & Security Management in SAP S/4HANA OUR AGENDA Key Learnings Observations on Risk & Compliance Management Current State Current Challenges The SAP GRC and Security Solution

More information

Software Engineering: Analysis and Design - CSE3308

Software Engineering: Analysis and Design - CSE3308 CSE3308/DMS/2004/25 Monash University - School of Computer Science and Software Engineering Software Engineering: Analysis and Design - CSE3308 Software Quality CSE3308 - Software Engineering: Analysis

More information

A CASE STUDY ON SOFTWARE PROJECT MANAGEMENT IN INDUSTRY EXPERIENCES AND CONCLUSIONS

A CASE STUDY ON SOFTWARE PROJECT MANAGEMENT IN INDUSTRY EXPERIENCES AND CONCLUSIONS A CASE STUDY ON SOFTWARE PROJECT MANAGEMENT IN INDUSTRY EXPERIENCES AND CONCLUSIONS P. Mandl-Striegnitz 1, H. Lichter 2 1 Software Engineering Group, University of Stuttgart 2 Department of Computer Science,

More information

The Worksoft Suite. Automated Business Process Discovery & Validation ENSURING THE SUCCESS OF DIGITAL BUSINESS. Worksoft Differentiators

The Worksoft Suite. Automated Business Process Discovery & Validation ENSURING THE SUCCESS OF DIGITAL BUSINESS. Worksoft Differentiators Automated Business Process Discovery & Validation The Worksoft Suite Worksoft Differentiators The industry s only platform for automated business process discovery & validation A track record of success,

More information

2015 Global Identity and Access Management (IAM) Market Leadership Award

2015 Global Identity and Access Management (IAM) Market Leadership Award 2015 Global Identity and Access Management (IAM) Market Leadership Award 2015 Contents Background and Company Performance... 3 Industry Challenges... 3 Market Leadership of IBM... 3 Conclusion... 6 Significance

More information