A Capability Model for Business Analytics: Part 3 - Using the Capability Assessment

The first article of this series presents a capability model for business analytics, and the second article describes a process and provides a tool for analytic capability assessment. Capability assessment is interesting, as indicated by the sustained popularity of Carnegie-Mellon's SEI Capability Maturity Model. Interesting, however, is not enough. As anyone who is involved in analytics knows, measurement without action is pointless. This third and final article of the series takes the next step, from an interesting assessment to informative and actionable measures of analytic capability.

A Quick Review of Analytic Capability Assessment

The model that provides the foundation for assessment is based upon the SEI model from Carnegie-Mellon. That model describes six levels of process capability:

Level 0 - Incomplete
Level 1 - Performed
Level 2 - Managed
Level 3 - Defined
Level 4 - Quantitatively Managed
Level 5 - Optimized

To apply the model to analytics we examine two kinds of analytic processes: the processes that create analytics, and the processes that use analytics to understand, gain insights, and make decisions about the business. The processes for creating and using analytics are evaluated for each of seven common kinds of business analysis: performance, behavioral, predictive, causal, discovery, textual, and geospatial. For a detailed explanation of the analytic capability model see the first article in this series.

Capability assessment uses the model as a structure for data collection. Fourteen data points (two process types for each of seven kinds of analysis) make a self-assessment tool that is fast and light. The response for each data point is made by selecting the best fit among a predefined set of descriptive phrases. Quantified assessment is achieved by assigning each phrase a numerical value that corresponds with the SEI capability levels; a small sketch of this scoring arithmetic follows below. This method of analytic capability assessment is illustrated in Figure 1. Quality and accuracy of the assessment are, of course, influenced by the participants in the process and the care and consideration given to each response. For a detailed description of the assessment process see the second article of the series.
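To make the scoring concrete, here is a minimal sketch assuming that each best-fit phrase maps directly to a level from 0 to 5 and that row and aggregate scores are simple averages. The phrase keys, function name, and averaging rule are illustrative assumptions, not the actual spreadsheet's internals.

```python
# A minimal sketch of the scoring arithmetic. Assumes each best-fit
# phrase maps to an SEI-style level (0-5) and that row and aggregate
# scores are simple averages; the real spreadsheet may differ.

LEVELS = {
    "incomplete": 0, "performed": 1, "managed": 2,
    "defined": 3, "quantitatively managed": 4, "optimized": 5,
}

ANALYSIS_TYPES = ("performance", "behavioral", "predictive", "causal",
                  "discovery", "textual", "geospatial")

def score(responses):
    """responses: {analysis: {"create": phrase, "use": phrase}}.
    Returns ({analysis: row_score}, aggregate_score)."""
    rows = {}
    for analysis in ANALYSIS_TYPES:
        create = LEVELS[responses[analysis]["create"]]
        use = LEVELS[responses[analysis]["use"]]
        rows[analysis] = (create + use) / 2  # row score per analysis type
    aggregate = sum(rows.values()) / len(rows)  # overall capability score
    return rows, aggregate
```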
Assessment with Meaning

Every analyst knows that measures alone can't tell a story. To derive meaning from measurement requires comparative context. Consider, for example, that you know a vehicle is traveling at a speed of 60 miles per hour. It is impossible to distinguish good news from bad news given that single measure. Is the vehicle an automobile on the highway? Is it an automobile in a school zone? A runaway bicycle on a steep slope? Or a jet airliner at an altitude of 35 thousand feet? The point is that you must know what speed is desirable or appropriate, a target value, before you can determine if 60 mph is a number that indicates need for action.

At first glance, the assessment shown in Figure 1 appears to be a case of really bad news: a score of 2.1 on a scale of zero to five is barely above forty percent, and most certainly a failing grade! The news, however, may not be as bad as it seems. There may, in fact, be good news in this assessment. Two fundamental errors are made in the leap to the conclusion that 2.1 is a failing score.

The first interpretive error occurs by using the top of the five-point scale as the basis for comparison, and by assuming that 5 is the target value. Not every organization needs to have level 5 analytic capabilities. Thus the comparative basis should not be what is possible (the top of the scale) but what is needed.

The second error occurs by looking only at the aggregate score. Not every organization needs top-of-scale analytic capabilities, and very few need to be at level 5 for every category of analysis. When examined on a row-by-row basis, the assessment shows a substantial score of 3.5 for performance analysis, a clear statement that this organization has defined performance analysis processes. A closer look shows that use of performance analytics is at level 4: use of performance analytics is quantitatively managed in this organization. Quantitatively managed performance certainly seems like good news. But is it really? Once again, it is difficult to know, because the basis for comparison is an external scale with no direct connection to the needs of the organization. Perhaps the capabilities far exceed the needs, creating analytics cost from which no value is derived.

Another row-by-row look highlights some apparent bad news. The very low 0.5 score for textual analysis looks particularly weak, and it bears strong influence on the relatively low aggregate score. Yet if there is no need for text analytics, that capability level is appropriate to the needs, and a cursory look at the low aggregate score may be misleading. Figure 2 shows the corresponding needs assessment for the organization. The ability to compare current capabilities with needs makes the assessment much more informative.
To find real meaning in this assessment we must know what is needed. A complete analytic capability assessment must collect data about both current capabilities and needed capabilities, and must provide the basis to perform capability gap analysis.

Assessment with Purpose

With all three assessment functions (current capabilities, needed capabilities, and gap analysis) we now have the essential elements to measure and evaluate with purpose. The purposes for which this tool is designed are:

- To confidently know your current level of analytic capabilities.
- To develop a well-reasoned consensus view of your need for analytic capabilities.
- To understand the gap between analytic capabilities and needs.
- To gain insights that help you make plans to close the gap.

The assessment data shown in Figures 1 and 2 (responses and scoring of both capabilities and needs) is the input data to gap analysis. But the data isn't easily analyzed because it is not organized for comparison. The gap analysis tab of the spreadsheet reorganizes the data to support visual comparison of capabilities and needs. Figure 3 shows a tabular view that helps to compare the numbers. This table takes attention away from the aggregate scores by not displaying them. (Remember that too much attention to the aggregate score causes errors in interpreting the assessment.) The table is organized to suggest row-by-row and column-by-column analysis. Nothing new is found by examining the first two sets of columns; they simply restate the capabilities and needs numbers shown in the earlier tables.
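The gap columns themselves are simple arithmetic. A minimal sketch, assuming each gap is just the needed level minus the current level for the same cell; the dictionary layout and example numbers are illustrative, not the actual spreadsheet structure.

```python
# A minimal sketch of the gap columns in Figure 3. Assumes gap equals
# needed level minus current level, computed separately for the
# creating and using processes of each analysis type.

def gap_table(capabilities, needs):
    """capabilities, needs: {analysis: {"create": x, "use": y}}.
    Returns the per-cell gaps for every analysis type."""
    return {
        analysis: {
            process: needs[analysis][process] - levels[process]
            for process in ("create", "use")
        }
        for analysis, levels in capabilities.items()
    }

# Example with made-up numbers for one row:
caps = {"discovery": {"create": 2.0, "use": 3.0}}
needs = {"discovery": {"create": 2.5, "use": 3.0}}
print(gap_table(caps, needs))  # {'discovery': {'create': 0.5, 'use': 0.0}}
```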
The interesting data appears in the third set of columns, where gaps are quantified for each row. Examining the table, I can quickly observe that:

- For all seven types of analysis, the needs exceed the capabilities.
- For most of the analysis types a moderately wide gap of 1.5 exists.
- Discovery analysis and geospatial analysis have narrower gaps of 0.5 and 1.0 respectively.
- There is no significant difference between the width of the gap for creating analytics and the gap for using analytics.

The logical conclusion from these observations is that some action should be taken to close the gap between needs and capabilities. Obviously the answer is to increase capabilities. The hard questions for planning are: Which capabilities to increase? Where to begin closing the gap? To answer these questions it helps to visualize the gap. The gap analysis charts shown in Figure 4 help with visualization. The visual perspective brings new observations, adding to the understanding gained by looking at the gap analysis table. Now we see that:
- Performance analysis is the area of greatest need, and it does have a significant gap between needs and capabilities.
- The performance analysis gap is wider for creating analytics than for using analytics; the capability to use performance analytics exceeds the capacity to produce them.
- Predictive analysis is another area of significant need, particularly for processes to create predictive analytics.
- Behavioral analysis and causal analysis both have relatively high levels of need and somewhat lower levels of capability.
- In general, the gap for creating analytics is wider than that for using analytics.
- Causal analysis has the widest gap of all analytic types for using analytics, and is the one instance where the using gap is wider than the creating gap.
- Discovery analysis, textual analysis, and geospatial analysis have relatively low levels of need and relatively narrow gaps.
- There is no gap between needs and capabilities for using discovery analytics, and the organization has no capability at all for using text analytics.

The twelve observations gained through gap analysis provide good insight into the state of analytic capabilities. Having examined the gap at the level of columns, rows, and visual comparisons, we can now return briefly to aggregate scores. The aggregate score for current capability is 2.1, as shown in Figure 1, and the aggregate score for needed capability is 3.4, as shown in Figure 2. The aggregate gap, then, is 1.3, more than half the value of current capabilities. We seek to increase analytic capabilities by approximately 60% over their current level (1.3 / 2.1 ≈ 0.62). Now we have a sense of the real magnitude of the overall gap.

From Assessment to Action

Once the assessment has helped to gain insight, the work to be done is that of setting priorities, making decisions, and making plans. Many variables beyond the data of assessment, of course, go into that process. But we can follow the example a bit further using some assumptions. Let's assume that:

- Analysis areas with the greatest need are given highest priority.
- Areas with the widest gaps are to be addressed first and most aggressively.

Working with these assumptions (a code sketch of the rule appears below), a plan to increase analytic capability should have as its highest priority improved capability to create performance analytics and to create predictive analytics. Second-priority improvements include the ability to use performance and causal analytics, and to create behavioral analytics. The near-term plan, then, has five distinct goals:

1. Improved capability to create performance analytics.
2. Improved capability to create predictive analytics.
3. Improved capability to use performance analytics.
4. Improved capability to use causal analytics.
5. Improved capability to create behavioral analytics.

The goals describe what needs to be accomplished. And they are measurable goals: the assessment process has already defined the measures and set the targets.
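The prioritization rule just applied (greatest need first, then widest gap) is mechanical enough to sketch in code. A minimal illustration follows; the scores are placeholders chosen to mirror the example's ordering, not the actual assessment data.

```python
# A minimal sketch of the prioritization rule assumed above: rank each
# (analysis, process) pair by level of need first, then by width of gap.
# All numbers below are illustrative placeholders.

def prioritize(needs, capabilities):
    """needs, capabilities: {(analysis, process): level}.
    Returns pairs where needs exceed capabilities, ordered by
    need (descending), then gap width (descending)."""
    ranked = []
    for key, need in needs.items():
        gap = need - capabilities[key]
        if gap > 0:  # plan work only where a gap exists
            ranked.append((key, need, gap))
    ranked.sort(key=lambda item: (-item[1], -item[2]))
    return ranked

needs = {("performance", "create"): 4.5, ("predictive", "create"): 4.5,
         ("performance", "use"): 4.5, ("causal", "use"): 4.0,
         ("behavioral", "create"): 4.0}
caps = {("performance", "create"): 2.5, ("predictive", "create"): 3.0,
        ("performance", "use"): 4.0, ("causal", "use"): 2.0,
        ("behavioral", "create"): 2.5}

for (analysis, process), need, gap in prioritize(needs, caps):
    print(f"{process} {analysis} analytics: need {need}, gap {gap}")
```

With these made-up inputs the ranking reproduces the five goals in the order listed above.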
The next stages of planning address how to achieve the goals. Here you'll explore available resources that likely include training, technology, collaboration, and consulting. Making the right choices is very specific to your organization. Using analytic capability assessment, you can be sure that you are making informed choices.

You can download the assessment spreadsheet from the Business Analytics Collaborative Learning Center.