Essential Metrics for Agile Project Management
Metrics for the transformational age
Alex Birke, Agile World 2015
Accenture, its logo, and 'High Performance. Delivered.' are trademarks of Accenture.
Why Metrics? They help to make decisions. (Eric Ries, Lean Startup)
Wait! Aren't Scrum's metrics, Burndown and Velocity, good enough?
Metrics especially for the current transformational decade: There are occasions when you are truly agile and can get rid of most metrics, lucky you! But the fact is: companies out there are still not agile enough!
What makes a good agile metric?
- As many as required, as few as possible
- Leading indicators over trailing indicators
- Measure results, not activity
- Assess trends, not snapshots
- Minimize overhead
A Set of Core Metrics for Agile Project Management*
- Scope: Scope Volatility
- Cost: Story Point Effort Performance Indicator
- Schedule: Release Slippage Risk Indicator, Sprint Burndown Performance Indicator, Release Burnchart Performance Indicator
- Quality: Running Tested Features, Cost of Rework
- Agile Benefits: Time to market, Business value delivered, User satisfaction, Employee Engagement
*) Project management is agnostic of technology or domain
Scope Volatility (SV)
Definition: Scope Volatility depicts the amount of change in the size of the release scope, comparing the release scope size measured at the start of the release with the size after the last completed sprint.
Calculation: SV = (Current Size of Must-Have Scope − Initial Size of Must-Have Scope) / Initial Size of Must-Have Scope × 100
SV > 0: scope creep
SV < 0: scope drop
SV = 0: planned scope size retained (often a tolerance corridor around 0 is used)
Alternative metric: Changed Scope %, which measures total scope change rather than only the net difference.
Scope Volatility
[Figure: Release Burn-Up Chart of Must-Have and Must-Have + Should-Have scope over 13 sprints, with Ideal Burn-Up and Planned Dev Complete / Scope Complete lines; annotations: '2 new Billing epics added', 'Rating epics de-scoped']
In the example above, Scope Volatility = (1273 [SP] − 1143 [SP]) / 1143 [SP] × 100 ≈ 11.4, i.e. scope creep of roughly 11%.
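A minimal sketch of the SV calculation in Python (function and variable names are illustrative; the story-point figures are the ones from the example above):

```python
def scope_volatility(initial_must_have_sp: float, current_must_have_sp: float) -> float:
    """Scope Volatility in percent: > 0 scope creep, < 0 scope drop, 0 retained."""
    return (current_must_have_sp - initial_must_have_sp) / initial_must_have_sp * 100

# Figures from the burn-up example: 1143 SP at release start, 1273 SP after the last sprint.
print(f"SV = {scope_volatility(1143, 1273):.1f} %")  # SV = 11.4 % -> scope creep
```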
Story Point Effort Performance Indicator (SPEPI)
Definition: SPEPI (aka CPI, but sprint-wise) indicates whether the ongoing release is currently on budget by comparing the planned effort per story point with the actual effort per story point.
Calculation: SPEPI = (Planned effort to be spent in the release so far / Planned story points delivered in the release so far) ÷ (Actual effort spent on the release so far / Actual story points delivered in the release so far)
SPEPI = 1: as planned
SPEPI < 1: cost overrun
SPEPI > 1: under budget
Story Point Effort Performance Indicator
[Figure: value per effort / money over the release]
In the example, SPEPI = (875 / 630) / (957 / 490) ≈ 0.71, which indicates a cost overrun.
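The same calculation as a Python sketch (illustrative names; figures from the example above):

```python
def spepi(planned_effort: float, planned_sp: float,
          actual_effort: float, actual_sp: float) -> float:
    """Planned effort per story point divided by actual effort per story point:
    1 = on budget, < 1 = cost overrun, > 1 = under budget."""
    return (planned_effort / planned_sp) / (actual_effort / actual_sp)

# 875 planned hours for 630 planned SP vs. 957 actual hours for 490 delivered SP.
print(f"SPEPI = {spepi(875, 630, 957, 490):.2f}")  # SPEPI = 0.71 -> cost overrun
```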
Release Slippage Risk Indicator (RSRI)
Definition: RSRI indicates whether at least a minimal viable release (MVR) can be deployed to production on the scheduled date.
Calculation: RSRI = Past Sprint Productivity (P) / Required Sprint Productivity for the release date (RP)
RSRI = 1: as planned
RSRI < 1: delayed
RSRI > 1: ahead of plan
Sprint Productivity (P)
Definition: Productivity is measured as the value generated, in story points completed per person day.
Calculation: P = # story points [SP] completed in a sprint / # person days in a sprint
Required Productivity (RP)
Definition: RP is the productivity the team would need in the remaining sprints to complete at least the remaining Must-Have user stories, so that a Minimal Viable Release (MVR) can be deployed to production.
Calculation: RP = SP estimate of remaining Must-Have stories / Total planned effort in remaining sprints
Required Productivity
Required Velocity = (3 [SP] + 3 [SP] + 8 [SP] + 3 [SP]) / 3 [Sprints] ≈ 5.67 [SP per sprint]
Assuming 105 hours per upcoming sprint: RP = 5.67 [SP] / 105 [hours] ≈ 0.054 [SP/hour]
Release Slippage Risk Indicator
RSRI = Past Productivity / Required Productivity = (5 [SP] / 90 [hours]) / (5.67 [SP] / 105 [hours]) ≈ 0.056 [SP/hour] / 0.054 [SP/hour] ≈ 1.03, i.e. slightly ahead of plan.
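The whole RSRI chain (P, RP, and their ratio) as a small Python sketch, using the figures from the two examples above (names are illustrative):

```python
def productivity(sp_completed: float, effort_hours: float) -> float:
    """Story points completed per hour of effort."""
    return sp_completed / effort_hours

# Past sprint: 5 SP completed in a 90-hour sprint.
p = productivity(5, 90)                    # ~0.056 SP/hour
# Remaining Must-Have stories: 3 + 3 + 8 + 3 SP over 3 sprints of 105 hours each.
rp = productivity(3 + 3 + 8 + 3, 3 * 105)  # ~0.054 SP/hour

rsri = p / rp  # 1 = as planned, < 1 = delayed, > 1 = ahead of plan
print(f"RSRI = {rsri:.2f}")  # RSRI = 1.03 -> slightly ahead of plan
```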
Release Burnchart Performance Indicator (RBPI)
Definition: RBPI indicates whether a project is on schedule by showing the variation between the amount of completed work and the amount of work planned.
Calculation: RBPI = Story points fully completed so far in the release / Story points currently planned in the release
RBPI = 1: as planned
RBPI < 1: delayed
RBPI > 1: ahead of plan
Release Burnchart Performance Indicator
[Figure: Release Burn-Up Chart over 13 sprints, with Must-Have, Must-Have + Should-Have, Planned Dev Complete / Scope Complete, and Ideal Burn-Up lines; annotations: '2 new Billing epics added', 'Summary Reports added', 'Rating epics de-scoped']
In the example above, at the end of Sprint 7: RBPI = 490 / 720 ≈ 0.68, which indicates that fewer story points were completed than planned.
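As a short sketch in Python, with the figures from the example (names illustrative):

```python
def rbpi(sp_completed: float, sp_planned: float) -> float:
    """Completed over planned story points: 1 = as planned, < 1 = delayed, > 1 = ahead."""
    return sp_completed / sp_planned

# End of Sprint 7: 490 SP fully completed vs. 720 SP planned by that point.
print(f"RBPI = {rbpi(490, 720):.2f}")  # RBPI = 0.68 -> behind schedule
```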
Sprint Burndown Performance Indicator (SBPI)
Definition: SBPI shows the deviation between completed work and planned work for the current sprint ('sprint scope variance').
Calculation: SBPI = Ideal remaining work / Actual remaining work
SBPI = 1: as planned
SBPI < 1: delayed
SBPI > 1: ahead of plan
Sprint Burndown Performance Indicator
Fig. 1: Task Burndown (effort-based): SBPI = 120 [hours] / 160 [hours] = 0.75 (i.e. −25% deviation)
Fig. 2: User Story Burndown (story-point-based): SBPI = 40 [SP] / 50 [SP] = 0.80
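Both variants as a small Python sketch (illustrative names; numbers from the two burndown figures above):

```python
def sbpi(ideal_remaining: float, actual_remaining: float) -> float:
    """Ideal over actual remaining work: 1 = as planned, < 1 = delayed, > 1 = ahead."""
    return ideal_remaining / actual_remaining

# Fig. 1 (effort-based): 120 hours should remain, 160 hours actually remain.
print(f"SBPI (tasks)   = {sbpi(120, 160):.2f}")  # 0.75 -> -25% deviation
# Fig. 2 (story-point-based): 40 SP should remain, 50 SP actually remain.
print(f"SBPI (stories) = {sbpi(40, 50):.2f}")    # 0.80
```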
Running Tested Features* (RTF)
Definition: RTF depicts the share of working (running) features among all features built to date.
Calculation: RTF = (# completed user stories that still pass all acceptance tests / total # of completed user stories to date) × 100
*) 'A Metric Leading to Agility', Ron Jeffries
Running Tested Features
In Sprint 6 the RTF rate was only 71%. This indicates that the remaining 29% of the user stories built and tested are no longer working and cannot be deployed to production.
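A sketch of the RTF calculation in Python; the story counts are hypothetical, chosen only to reproduce the 71% from the example:

```python
def rtf(stories_still_passing: int, stories_completed: int) -> float:
    """Percentage of completed user stories that still pass all acceptance tests."""
    return stories_still_passing / stories_completed * 100

# Hypothetical Sprint-6 counts: 25 of 35 completed stories still pass all tests.
print(f"RTF = {rtf(25, 35):.0f} %")  # RTF = 71 % -> the other 29% cannot be deployed
```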
Cost of Rework: Delivered Defect Density (DDD)
Definition: DDD indicates the effectiveness of the review and testing activities, helping to ensure that fewer defects are found in the delivered product (increment).
Calculation: DDD = Defects identified after Done-ness of user stories / Size of Done user stories in SP
Alternative: the metric Defect Rate is the # of defects identified in completed user stories per total effort spent on all tasks to date.
Delivered Defect Rate (DDR)

                    Sprint 1  Sprint 2  Sprint 3  Sprint 4
Defects Identified         7        12        19        25
SPE Effort               400       831      1262      1685
Defect Rate            0.018     0.014     0.015     0.015

Count only the defects logged after the story is marked complete by the developer. Engineering effort includes effort from the agile lifecycle tool for design, build, test, and defect-fix tasks, and includes PO & SM time as a percentage of completed stories.
In the example above, the DDR trend is stable; therefore the delivered quality is stable.
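The defect-rate trend from the table, recomputed in a short Python sketch (illustrative names; data taken from the table above):

```python
def defect_rate(defects: int, engineering_effort: float) -> float:
    """Defects identified in completed stories per unit of total engineering effort."""
    return defects / engineering_effort

# Cumulative (defects identified, SPE effort) per sprint, from the table above.
per_sprint = [(7, 400), (12, 831), (19, 1262), (25, 1685)]
for sprint, (defects, effort) in enumerate(per_sprint, start=1):
    print(f"Sprint {sprint}: DDR = {defect_rate(defects, effort):.3f}")
# 0.018, 0.014, 0.015, 0.015 -> stable trend, i.e. stable delivered quality
```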
Some further, sometimes helpful, metrics:
- Stakeholder Involvement Index
- Test Automation Coverage
- % Stories accepted
- (Retrospective) Process improvement
- Epic Progress Report
Questions & Answers