Using Metrics to Manage Software Projects




Edward F. Weller, Bull HN Information Systems*
September 1994

(* Since writing this article, the author has joined Motorola.)

In 1989, Bull's Arizona facility launched a project management program that required additional software metrics and inspections. Today, the company enjoys improvements in quality, productivity, and cost.

Five years ago, Bull's Enterprise Servers Operation in Phoenix, Arizona, used a software process that, although understandable, was unpredictable in terms of product quality and delivery schedule. The process generated products with unsatisfactory quality levels and required significant extra effort to avoid major schedule slips. All but the smallest software projects require metrics for effective project management. Hence, as part of a program designed to improve the quality, productivity, and predictability of software development projects, the Phoenix operation launched a series of improvements in 1989. One improvement based software project management on additional software measures. Another introduced an inspection program, since inspection data was essential to project management improvements. Project sizes varied from several thousand lines of code (KLOC) to more than 300 KLOC. The improvement projects enhanced quality and productivity. In essence, Bull now has a process that is repeatable and manageable, and that delivers higher quality products at lower cost. In this article, I describe the metrics we selected and implemented, illustrating with examples drawn from several development projects.

Project management levels

There are three levels of project management capability based on software-metrics visibility. (These three levels shouldn't be equated with the five levels in the Software Engineering Institute's Capability Maturity Model.) Describing them will put the Bull examples in perspective and show how we enhanced our process through gathering, analyzing, and using data to manage current projects and plan future ones.

First level. In the simplest terms, software development can be modeled as shown in Figure 1. Effort, in terms of people and computer resources, is put into a process that yields a product. All too often, unfortunately, the process can only be described by the question mark in Figure 1. Project managers and development staff do not plan the activities or collect the metrics that would allow them to control their project.

Figure 1. Software development at level 1: no control of the development process. Some amount of effort goes into the process, and a product of indeterminate size and quality is developed early or (usually) late, compared to the plan.

Second level. The process depicted in Figure 1 rarely works for an organization developing operating-system software or large applications. There is usually some form of control in the process. We collected test defect data in integration and system test for many years for several large system releases, and we developed profiles for defect removal that allowed us to predict the number of weeks remaining before test-cycle completion. The profile was typically flat for many weeks (or months, for larger system releases in the 100- to 300-KLOC range) until we reached a knee in the profile where the defect discovery rate dropped toward zero (see Figure 2). Several factors limit the defect discovery rate:

- Defects have a higher probability of being blocking defects, which prevent other test execution, early in the integration- and system-test cycle.
- The defect discovery rate exceeds the development staff's capacity to analyze and fix problems as test progresses and more test scenarios can be run in parallel.

Figure 2. Defect discovery profile for lower development levels. The number of defects in the product exceeds the ability of limited resources to discover and fix defects. Once the defect number has been reduced sufficiently, the discovery rate declines toward zero. Predicting when the knee will occur is the challenge.

Although this process gave us a fairly predictable delivery date once the knee was reached, we could not predict when the knee would occur. There were still too many variables (as represented by the first two boxes in Figure 3). There was no instrumentation on the requirements or design stages (the "?" in Figure 3). Our attempts to count or measure the size of the coding effort were, in a sense, counterproductive. The focus of the development effort was on coding because code completed and into test was a countable, measurable element. This led to a syndrome we called WISCY for "Why isn't Sam coding yet?" We didn't know how to measure requirements analysis or design output, other than by document size. We also didn't know how to estimate the number of defects entering into test. Hence, there was no way to tell how many weeks we would spend on the flat part of the defect-removal profile. Predicting delivery dates for large product releases with 200 to 300 KLOC of new and changed source code was difficult, at best.

Figure 3. Software development at level 2: measurement of the code and test phases begins.

Figure 4. Software development at level 3: control of the entire development process. You measure the requirements and design process to provide feedforward to the rest of the development as well as feedback to future planning activities.

A list of measures is available in the second-level model (Figure 3): effort in person-months, computer resources used, the product size when shipped, and the number of defects found in the integration and system tests. Although these measures are available, we found them difficult to use in project planning; there was little correlation among the data, and the data was not available at the right time (for instance, code size wasn't known until the work was completed). Project managers needed a way to predict and measure what they were planning.
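To make the knee idea in Figure 2 concrete, here is a small illustrative sketch in Python. It is not a tool described in the article; the weekly counts and the 50 percent drop threshold are invented. It flags the week at which the discovery rate falls away from its plateau.

# Hypothetical illustration of the level-2 view: weekly defect-discovery
# counts from integration/system test stay roughly flat, then drop toward
# zero at the "knee". Until the knee appears, completion is unpredictable.

def find_knee(weekly_defects, drop_fraction=0.5):
    """Return the index of the first week whose discovery count falls below
    drop_fraction of the average of the preceding (flat) weeks, or None if
    the knee has not been reached yet."""
    for week in range(1, len(weekly_defects)):
        plateau = sum(weekly_defects[:week]) / week
        if weekly_defects[week] < drop_fraction * plateau:
            return week
    return None

# Example data (invented): many flat weeks, then the knee.
counts = [42, 38, 45, 40, 44, 41, 39, 43, 22, 11, 4]
knee = find_knee(counts)
if knee is None:
    print("Knee not reached; completion date still unpredictable")
else:
    print(f"Knee reached at week {knee}; discovery rate now declining")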
Third level. The key element of the initiative was to be able to predict development effort and duration. We chose two measures to add to those we were already using: (1) size estimates and (2) defect removal for the entire development cycle. Because the inspection program had been in place since early 1990, we knew we would have significant amounts of data on defect removal. Size estimating was more difficult because we had to move from an effort-based estimating system (sometimes biased by available resources) to one based on quantitative measures that were unfamiliar to most of the staff. The size measures were necessary to derive expected numbers of defects, which then could be used to predict test schedules with greater accuracy. This data also provided feedback to the planning organization for future projects. To meet the needs of the model shown in Figure 4, we needed several measures beyond those in the prior list; the full set appears in the data collection sheet sidebar.
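As a back-of-the-envelope illustration of why the size measure matters, the sketch below derives an expected defect count and a first-cut test effort from a size estimate. Every number in it (KLOC, injection rate, hours per defect, 160 hours per person-month) is an invented placeholder, not a figure from Bull's projects.

# Back-of-the-envelope planning arithmetic (all numbers invented): expected
# defects follow from the size estimate and a historical injection rate, and
# a first-cut test effort follows from the cost to find and fix a test defect.

new_changed_kloc = 50          # size estimate for the release (invented)
defects_per_kloc_in_test = 10  # historical rate of defects reaching test (invented)
hours_per_test_defect = 20     # historical find-and-fix cost in test (invented)

expected_test_defects = new_changed_kloc * defects_per_kloc_in_test
test_effort_pm = expected_test_defects * hours_per_test_defect / 160  # 160 hours/PM (convention)
print(f"Expected defects entering test: {expected_test_defects}")
print(f"First-cut test find/fix effort: {test_effort_pm:.0f} person-months")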

The sidebar "Data collection sheet" shows a sample form used to compile data.

Data collection sheet

This data collection sheet, developed by Kathy Grif-, Software Engineering Process Group project manager at Bull, compiles effort, size, defect, and completion data. Although the sheet is somewhat busy, only six data elements are estimated or collected at each development-cycle phase. The cells with XX in them indicate data collected at the end of high-level design; the cells with YY are derived from the XX data.

DATA COLLECTION SHEET
Project Name / Build
Product or Feature Group Identifier(s) (PIDs, IDs, etc.)
Date of Initial Estimates
N&C Original KLOC Est
N&C Revised KLOC Est
N&C KLOC Actuals
Effort - Estimate (PM)
Effort - Revised (PM)
Effort - Actual (PM)
# Defects - Estimate
# Defects - Actual (YY)
Est Phase End Dates (XX)

GS = General Ship; HLD = High-Level Design; LLD = Low-Level Design; LEV1 = Unit, or Level 1, Test; LEV2 = Integration, or Level 2, Test; LEV3 = System, or Level 3, Test; LEV4 = Beta, or Level 4, Test; N&C = New and Changed; PID = Product Identifier; PM = Person Months; REQ = Requirements Analysis

Project planning

Once the project team develops the first six estimates, the project manager begins to use the data, as well as historical data from our metrics database, for effort and schedule estimating. Several examples from actual projects illustrate these points.

Using defect data to plan test activities. We use the inspection and test defect databases as the primary defect-estimation source. The inspection data provides defect detection rates for design and code by product identifier (PID). Our test database can be searched by the same PID, so a defect depletion curve for the project can be constructed by summarizing all the project's PIDs. (Several interesting examples in Humphrey2 provided a template for constructing a simple spreadsheet application that we used to plan and track defect injection and removal rates.) Figure 5 shows such a curve for one project. The size and defect density estimates were based on experience from a prior project. The project manager estimated the unit and integration test effort from the defect estimates and the known cost to find and fix defects in test. The estimates and actual amounts are compared in the "Project tracking and analysis" section below.
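The spreadsheet mentioned above tracked defect injection and removal by phase; the following is a rough re-creation of that style of bookkeeping, not the actual application. The phase names follow the data collection sheet abbreviations, and all injection counts and removal-effectiveness values are invented for illustration.

# Rough re-creation (with invented numbers) of a defect injection/removal
# spreadsheet: each phase injects some defects and removes a fraction of the
# defects present, showing where the plan expects defects to be found and
# how many remain for test and beyond.

phases = [
    # (phase, defects injected, fraction of present defects removed)
    ("High-level design", 120, 0.50),
    ("Low-level design",  180, 0.55),
    ("Code",              300, 0.60),
    ("Unit test",           0, 0.50),
    ("Integration test",    0, 0.60),
    ("System test",         0, 0.70),
]

remaining = 0
print(f"{'Phase':<18}{'Injected':>10}{'Removed':>10}{'Remaining':>11}")
for name, injected, effectiveness in phases:
    remaining += injected
    removed = round(remaining * effectiveness)
    remaining -= removed
    print(f"{name:<18}{injected:>10}{removed:>10}{remaining:>11}")
print(f"Estimated defects escaping to the field: {remaining}")

Plotting the "Removed" column by phase for both the planned and the actual numbers gives exactly the kind of estimated-versus-actual depletion curves shown in Figure 5.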

Figure 5. Estimated versus actual defect depletion curves (by development phase).

Multiple data views. The data in Figure 6 helped the project manager analyze results from the integration test. The project team had little experience with the type of product to be developed, so a large number of defects were predicted. The team also decided to spend more effort on the unit test. After the unit test, the results seemed within plan, as shown in Figure 6a. During the integration test, some concern was raised that the number of defects found was too high. Once the data was normalized against the project size and compared to projections for the number of defects expected by the development team, the level of concern was lowered.

However, this project had a serious requirement error that was discovered in the later stages of the system test; this demonstrated why it is important to look at more than the total number of defects or the defect density, even when the number of defects is below expectations. A closer look at the early development stages shows that very few requirements or high-level design defects were found in the inspections. The low-level design inspections also found fewer defects than expected. What the project members missed in the data analysis during the unit and integration tests was the large number of design errors being detected (see Figure 6b). This example demonstrates the value of independent data collection and analysis as soon as it is available. An objective analysis, or at least an analysis that looked at the project from a different viewpoint, might have spotted the anomaly. Unit test data accuracy might have been questioned as follows:

- Are some of the errors caused by high-level design defects?
- Why weren't any design defects found in the integration test?

When the data was charted with the defect source added, the design-defect discovery rate in the unit test was obvious. The inaccuracy of the integration test data also became apparent. A closer look at the project revealed that the defect source had not been collected. We also questioned members of the design inspection teams; we found that key people were not available for the high-level design inspection. As a result, we changed the entry criteria for low-level design to delay the inspection for two to three weeks, if necessary, to let a key system architect participate in the inspection. Part of the change required a risk analysis of the potential for scrubbing the low-level design work started before the inspection.

Figure 6. Projected versus actual number of defects found per thousand lines of code from inspections and test (a), and additional information when the defect source is included (b). Legend phases: requirements analysis (R/A), HLD, LLD, and coding.

Using test cost. On one large project, the measured cost in the integration test was much higher than expected. Even though you know the cost of defects in test is high, an accurate cost tally can surprise you. If you haven't gathered the data for a project, the following example may convince you that the effort is worthwhile. On this large project, it took

- 80 hours to find and fix the typical defect,
- 3.5 person-months to rebuild the product and rerun the test suite, and
- 8-10 calendar days to complete the retest cycle.

Three months' effort represents the fixed cost of test for this product area. This analysis reemphasizes the need to spend more time and effort on inspecting design and code work products. How many additional hours should be spent inspecting the work products (design, code, and so forth) versus the months of effort expended in test?

Table 1. Defect density inferences.

Lower than expected: Size estimate is high (good); inspection defect detection is low (bad); work product quality is high (good); insufficient level of detail in work product (bad).
Higher than expected: Size estimate is low (bad); work product quality is poor (bad); inspection defect detection is high (good); too much detail in work product (good or bad).

Sanity check. Size estimates and software development attribute ratings are used as input to the Cocomo (Constructive Cost Model) estimating model.3 (Joe Wiechec of Bull's Release Management group developed a Cocomo spreadsheet application based on Boehm3 for schedule and effort sanity checks.) The accuracy of the effort estimate produced by Cocomo depends on accurate size estimates and software development attribute ratings (analyst capability, programmer capability, and so forth). We compare the assumed defect rates and the cost to remove these defects with the Cocomo output as a sanity check for the estimating process. Since project managers often assign attribute ratings optimistically, the defect data based on project and product history provides a crosscheck with the cost for unit and integration test derived from the Cocomo estimate. Reasonable agreement between Cocomo test-effort estimates and estimates derived from defect density and cost-to-find-and-fix figures confirms that the attribute ratings have been reasonably revised. This sanity check works only for the attribute ratings, since both the Cocomo effort estimates and the test cost estimates depend on the size estimate.

Project tracking

The keys to good project tracking are defining measurable and countable entities and having a repeatable process for gathering and counting. During the design phase, the metrics available to the project manager are effort spent in person-months, design-document pages, and defects found via work product review, inspection, or use in subsequent development stages.

Interpreting effort variance. When effort expenditures are below plan, the project will typically be behind schedule because the work simply isn't getting done. An alternative explanation might be that the design has been completed, but without the detail level necessary to progress to the next development phase. This merely sets the stage for disaster later. We use inspection defect data from design inspections to guard against such damaging situations. If the density falls below a lower limit of 0.1 to 0.2 defects per page, versus an expected 0.5 to 1.0 defects per page, the possibility increases that the document is incomplete. When defect detection rates are below 0.5 defects per page, the preparation and inspection rates are examined to verify that sufficient time was spent in document inspection. We also calculate the inspection defect density using the number of major defects and the estimated KLOC for the project. If the defect density is lower than expected, either the KLOC estimate is high or the detail level in the work product is insufficient (see Table 1).
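Returning to the Cocomo sanity check described above, a minimal sketch of the comparison might look like the following. The semidetached-mode coefficients (3.0, 1.12) are Boehm's published values, but the size, adjustment factor, defect count, cost per defect, and the 30 percent test-effort share are all invented assumptions for illustration; this is not the Release Management group's spreadsheet.

# Minimal Cocomo-style sanity check (all inputs invented). Semidetached mode:
# effort (person-months) = 3.0 * KLOC**1.12 * EAF, where EAF is the product of
# the cost-driver multipliers (analyst capability, programmer capability, and
# so forth); EAF = 1.0 means "nominal" ratings.

kloc = 50                                  # new and changed KLOC estimate (invented)
eaf = 1.0                                  # attribute-rating multiplier product (invented)
cocomo_effort_pm = 3.0 * kloc ** 1.12 * eaf
cocomo_test_pm = 0.3 * cocomo_effort_pm    # assume ~30% of effort spent in test (assumption)

expected_test_defects = 500                # from the defect-depletion estimate (invented)
hours_per_defect = 20                      # historical find-and-fix cost in test (invented)
defect_based_test_pm = expected_test_defects * hours_per_defect / 160  # 160 hours/PM

print(f"Cocomo test-effort estimate:       {cocomo_test_pm:.0f} person-months")
print(f"Defect-based test-effort estimate: {defect_based_test_pm:.0f} person-months")
if abs(cocomo_test_pm - defect_based_test_pm) > 0.25 * cocomo_test_pm:
    print("Estimates disagree by more than 25%: recheck attribute ratings")
else:
    print("Estimates are in reasonable agreement")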
When trying to determine which of the eight possible outcomes reflects the project status, project managers must draw on their experience and their team and product knowledge. I believe the project manager's ability to evaluate the team's inspection effectiveness will be better than the team's ability to estimate the code size. In particular, a 2-to-1 increase in detection effectiveness is far less likely than a 2-to-1 error in size estimating. When effort expenditures are above plan and work product deliverables are on or behind schedule, the size estimate was clearly lower than it should have been. This also implies later stages will require more effort than was planned. In both cases, we found that the process instrumentation provided by inspections was very useful in validating successful design completion. The familiar "90 percent done" statements have disappeared. Inspections are a visible, measurable gate that must be passed.

Project tracking and analysis. Inspection defect data adds several dimensions to the project manager's ability to evaluate project progress. Glass4 and Graham5 claim that defects will always be a part of the initial development effort. (Glass says design defects are the result of the cognitive/creative design process, and Graham says errors are "inevitable, not sinful, and unintentional.") Humphrey presents data indicating that injection rates of 100 to 200 defects per KLOC of delivered code are not unusual.6 Although we have seen a 7-to-1 difference in defect injection rates between projects, the variance is much less for similar projects. For the same team doing similar work, the defect injection rate is nearly the same. The project analyzed in Figure 5 involved a second-generation product developed by many of the people who worked on the first-generation product. At the end of the low-level design phase, Steve Magee, the project manager, noticed a significant difference in the estimated and actual defect-depletion counts for low-level design defects. The estimate was derived from the earlier project, which featured many similar characteristics. There was a significant increase in the actual defect data.

We also had some numbers from the first project that suggested inspection effectiveness (defects found by inspection divided by the total number of defects in the work product) was in the 75 percent range for this team. Again, we were able to use our inspection data to infer several theories that explained the differences. The defect shift from code to low-level design could be attributed to finding defects in the low-level design inspections on the second project, rather than during code inspections in the first project. A closer look at the first project's defect descriptions from code inspections revealed that a third of the defects were missing functionality that could be traced to the low-level design, even though many defects had been incorrectly tagged as coding errors. Reevaluating the detailed defect descriptions brought out an important fact: The percentage of design defects detected in code inspection on the first project was higher than we thought.

We were also concerned by the number of defects, which exceeded the total we had estimated even after accounting for the earlier detection. It seemed more likely that the size estimate was low rather than that there was a significant increase in defect detection effectiveness. In fact, the size estimate was about 50 percent low at the beginning of low-level design. The project manager adjusted his size estimates and consequently was better able to predict unit test defects when code inspections were in progress, and time for both unit and integration test. Using defect data helped the project manager determine that design defects were being discovered earlier and project size was larger than expected. Hence, more coding effort and unit and integration test time would be needed. Figure 7 shows the actual data as this project entered system test. Comparing the data in Figures 5 and 7 indicates a shift in defect detection to earlier stages in the development cycle; hence, the project team is working more effectively.

Figure 7. Actual defect density data (by development phase) for the project depicted in Figure 5.

Defect data can be used as a key element to improve project planning. Once a size estimate is available, historical data can be used to estimate the number of defects expected in a project, the development phase where defects will be found, and the cost to remove the defects. Once the defect-depletion curve for the project is developed, variances from the predictions provide indicators that project managers can examine for potential trouble spots. Table 2 summarizes these measures, their value (above or below plan), and the possible troubles indicated.

Table 2. Measures and possible inferences during requirements and design phases.

Effort, above plan: Project is larger than planned, if not ahead of schedule; or project is more complex than planned, if on schedule.
Effort, below plan: Project is smaller than estimated, if on schedule; or project is behind schedule; or design detail is insufficient, if on or ahead of schedule.
Defects detected, above plan: Size of project is larger than planned; or quality is suspect; or inspection detection effectiveness is better than expected.
Defects detected, below plan: Inspections are not working as well as expected; or design lacks sufficient content; or size of project is smaller than planned; or quality is better than expected.
Size, above plan: Marketing changes or project complexity are growing; more resource or time will be needed to complete later stages.
Size, below plan: Project is smaller than estimated; or something has been forgotten.
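Table 2 lends itself to a simple mechanical check. The sketch below is hypothetical: the 10 percent threshold and sample numbers are invented, and the inference strings are condensed from the table, but it shows how plan-versus-actual variances could be turned into candidate inferences for the project manager to investigate.

# Hypothetical mechanization of Table 2: compare plan versus actuals for
# effort, detected defects, and size, and report the candidate inferences.
# Thresholds and sample numbers are invented.

def variance(actual, plan):
    return (actual - plan) / plan

INFERENCES = {
    ("effort", "above"):  "larger or more complex than planned",
    ("effort", "below"):  "smaller than estimated, behind schedule, or design detail insufficient",
    ("defects", "above"): "larger than planned, quality suspect, or inspections unusually effective",
    ("defects", "below"): "inspections not working, design lacks content, smaller project, or better quality",
    ("size", "above"):    "requirements growth; later stages will need more time or resources",
    ("size", "below"):    "smaller than estimated, or something has been forgotten",
}

def check(measure, plan, actual, threshold=0.10):
    v = variance(actual, plan)
    if abs(v) <= threshold:
        return f"{measure}: within {threshold:.0%} of plan"
    direction = "above" if v > 0 else "below"
    return f"{measure}: {v:+.0%} vs plan -> {INFERENCES[(measure, direction)]}"

# Example (invented numbers): a design-phase checkpoint.
print(check("effort", plan=40, actual=52))     # person-months to date
print(check("defects", plan=250, actual=180))  # inspection defects to date
print(check("size", plan=50, actual=60))       # current KLOC estimate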
These measures and those listed in Table 1 answer many of the questions in the design box in Figure 3. One difficulty project managers must overcome is the unwillingness of the development staff to provide defect data. Grady7 mentions the concept of public versus private data, particularly regarding inspection-data usage. Unless the project team is comfortable with making this data available to the project manager, it is difficult to gather and analyze the data in time for effective use. I believe that continuing education on the pervasiveness of defects, and recognition that defects are a normal occurrence in software development, is a critical first step in using defect data more effectively to measure development progress and product quality. Only through collecting and using defect data can we better understand the nature and cause of defects and ultimately improve product quality.

Acknowledgments

I thank John T. Harding and Ron Radice for their many hours of discussion on defect removal as a project management tool; Steve Magee, Fred Kuhlman, and Ann Holladay for their willingness to use the methods described in this article on their projects; Jean-Yves LeGoic for his excellent critique; Barbara Ahlstrand, Robin Fulford, and George Mann for inspecting the manuscript; and the anonymous referees for their helpful recommendations.

References

1. E. Weller, "Lessons from Three Years of Inspection Data," IEEE Software, Vol. 10, No. 5, Sept. 1993, pp. 38-45.
2. W. Humphrey, Managing the Software Process, Addison-Wesley, Reading, Mass., 1990, pp. 352-355.
3. B.W. Boehm, Software Engineering Economics, Prentice Hall, Englewood Cliffs, N.J., 1981.
4. R. Glass, "Persistent Software Errors: 10 Years Later," Proc. First Int'l Software Testing, Analysis, and Review Conf., Software Quality Engineering, Jacksonville, Fla., 1992.
5. D. Graham, "Test Is a Four Letter Word: The Psychology of Defects and Detection," Proc. First Int'l Software Testing, Analysis, and Review Conf., Software Quality Engineering, Jacksonville, Fla., 1992.
6. W. Humphrey, "The Personal Software Process Paradigm," Sixth Software Eng. Process Group Nat'l Meeting, Software Eng. Inst., Carnegie Mellon Univ., Pittsburgh, 1994.
7. R. Grady, Practical Software Metrics for Project Management and Process Improvement, Prentice Hall, Englewood Cliffs, N.J., 1992, pp. 104-107.

Edward F. Weller is a technical staff engineer at Motorola's Satellite Communications Division, where he is responsible for developing software subcontract management processes. Previously, he was the software process architect for the Bull HN Information Systems Mainframe Operating Systems Group. Weller received a BSEE from the University of Michigan and an MSEE from the Florida Institute of Technology. He was awarded Article of the Year honors in 1993 by IEEE Software for authoring "Lessons from Three Years of Inspection Data." He is a member of the Software Engineering Institute's Software Measurements Steering Committee and is co-chair of the Software Inspection and Review Organization, a special-interest group promoting inspection process usage. He is a senior member of the IEEE and a member of the IEEE Computer Society.

Readers can contact Weller at Motorola Government Systems and Technology, 2501 S. Price Rd., MS G1137, Chandler, AZ 85248-2899, e-mail ed-weller-p26708@email.mot.com.