You Can Create Measurable Training Programs
A White Paper for Training Professionals
Executive Summary
Determining training return on investment (ROI) is a pressing need for all training professionals. One of the key ingredients for meeting that need is creating measurable training. As the name implies, measurable training is training that is standardized, repeatable, and can be measured for its impact on both your employees and your organization. In current measurement research, you will often hear experts say that training results should be consistent and linked to behavior changes and to the metrics within an organization. How is that done? How can we ensure that our programs create a change in behavior that is reflected in organizational metrics? What is the relationship between a training program and the metrics of an organization? Many have struggled to make the connections needed to verify results from training programs. Using a measurable training process provides a clear answer. Creating programs using Measurable Instructional Design provides a strong and practical solution for enabling training to be measured against the acquisition of learning, the application on the job, and the changes to the organization. This paper shows how using the proper design process, the right metric, and an updated instructional design model produces measurable training every time.

Why Training Design Matters
Measuring training is important because the only way to gather data for calculating training ROI and showing the value of training is to measure training results. We know that training programs can increase knowledge and skills and change attitudes. We also know it is the design of training that provides both the methods and the mechanism for gaining knowledge and skill. Therefore, the ultimate measurement of a training program must be the ability of its design to successfully create the acquisition of new knowledge and skills by the learner. To be successful, training must be designed to enable clear changes in behavior.

The value training professionals bring to the table is their knowledge of how adults learn, of the methods used to teach, and of how to apply that knowledge to program design. They need to design programs that enable learning and improve capability. Ultimately, the instructional design methods dictate the effectiveness and success of training. When we understand that effective training design is our end product, measurement becomes much easier. Why? Training should result in new knowledge or skill gains, and as training professionals we already know how to measure these gains. Knowledge gain is measured with a knowledge test, and skill gain is observed through behavior change. When you use the right instructional design model, the ability to measure training impact is practically assured.

The most common model used for instructional design is the ADDIE model. ADDIE stands for Analysis, Design, Development, Implementation, and Evaluation. ADDIE provides a closed-loop training solution that starts with analysis and ends with back-end evaluation. However, the two major criticisms of this model have been:
1. The analysis examines learning requirements but not the requirements of the business.
2. The evaluation measures the learning but not how the organization is impacted by the learning.
These criticisms make way for a new model that incorporates business requirements and an improved way to measure training impact. An instructional design model that provides a closed-loop solution and incorporates the required levels of impact can be measured against both behavior change and organizational effect.
A New Model: Measurable Instructional Design
eparamus uses the five-part Measurable Instructional Design (MID) model, which includes:
1. Key Performance Metric (KPM)
2. Job Requirements
3. Training Objectives
4. Instructional Strategies
5. Mastery Test

The figure below shows how each component links. MID addresses some of the main criticisms of ADDIE by ensuring that training outputs and business objectives align and by making that alignment the cornerstone of the design. The MID model enhances the ADDIE approach by teaching instructional designers to collaborate with business unit managers during the analysis. Specifically, the business manager and trainer work together to identify the KPMs and the standards of performance that will be used to measure effective results. Once identified, improving the metric becomes the goal of the training and the foundation for the training design. Aligning training objectives to business outcomes therefore begins the process. With the MID model, the training professional learns how to support the manager in identifying the business requirement and its measurement. The trainer educates the business unit manager on how training is able to improve performance, using standards of performance as a gauge. This collaboration and alignment typically starts with the training request.

A Tour Through the Measurable Instructional Design Process

The Training Request
A training request normally launches the development of any training program. Training professionals use the training request as the impetus to analyze the problem that led to the request. Since business managers use operational metrics to identify problems, their goal in requesting training is typically centered on improving a metric. For instance, if a technical call center manager typically sees 5 errors for every 100 orders processed but begins to see 20 or 30 errors for every 100 orders processed, the manager looks for the reasons behind the errors. The manager may conclude that training is needed to fix the problem. In another example, new hires may be leaving the company at a rate of 75% within 90 days. Senior leadership believes that managers are not coaching their new hires in the right way to ensure their success beyond this period, so a training course on coaching is requested. Both of these examples show how changes in business indicators (metrics) drive requests for training programs.

Determining the KPM
As training professionals, we know training can impact knowledge, skills, or attitudes. Knowing this, the only reasonable action is to determine the metric that can be influenced by a change in knowledge, skill, or attitude. Since knowledge and skill gains create behavior change, a metric that measures behavior is the only direct tie to training. If you read the current research on training analytics, you will often hear experts say that training should be linked to metrics such as customer satisfaction or revenue generated. These are very high-level metrics. For years, training measurement experts have tried to connect training results to these types of metrics. They have tried using complicated mathematical methods to isolate training impact from all of the other things that influence the metric. The goal of isolation was to determine what part training played in any changes to the metric. Unfortunately, this thinking is misguided and impractical for the business environment. No one has the time, money, or even the desire to hire everyone who would be needed to accurately isolate all of the influences on these high-level metrics and make that measurement. More importantly, no other function in business attempts to show value in this way. There is a far more practical solution for finding the correct metric with which to measure training.

The first part of analysis in the MID process involves identifying the KPM and verifying that the change to that KPM can be achieved through a change in knowledge, skill, or attitude for a group within the organization. This helps the instructional designer and the manager identify the metrics that are directly related to behavior and eliminates the need to isolate training's impact.

Job Requirements (Performance Standards)
The second stage of analysis focuses on the job requirements (also known as performance standards) that impact the KPM. Job performance standards give the trainer clues about what the training program content should be, provide both the trainer and the business manager a way to determine training success, and provide the learner with a reference for performance. Requiring alignment between the trainer and business manager on the job standards and the means of measurement (the KPM) ensures that training dollars are spent on something the organization agrees is necessary for performance improvement. In addition, it provides a connection between the design and the intended organizational results. This is paramount to enabling training to be measured for organizational impact. If trainers learn to work with the training requester to reach agreement on the operational metric that will be measured for success (in the example above, the number of errors per order processed) and collaborate on the job performance standards needed to address the metric (the knowledge and skill necessary to process error-free orders), then the chances of training success greatly increase. Standards of performance are aligned with metric outcomes. Aligning metrics and standards enables both stakeholders to be held accountable for their influence on improvements to the identified KPM.

Connecting the KPM to the Training Goal
To connect the KPM to the training goal, you need to state the training outcome that will impact the KPM. In other words, the training goal should align with what the KPM measures. The trainers produce a program with a stated goal that will result in performance changes that impact the KPM. If the KPM measures the results of behavior change, and if both the business manager and the trainer agree on the KPM and the training goal, then when a course is measured for success we can test the validity of the assumptions made in the process. We will know whether the increase in knowledge and skills (the improvement in behavior) solved the problem and whether the behavior change impacted the KPM.

Connecting the Training Goal to the Training Objectives
Once the KPM and training goal are aligned, the next steps become very clear. The instructional designer can identify the training objectives that must be met to reach the training goal. In our call center error example, the training goal is to reduce errors (and our KPM measures the number of errors).
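To make the KPM tangible, here is a minimal sketch (in Python) of how the error-rate metric from the call center example might be tracked before and after training. The function name and all figures are illustrative assumptions, not part of the MID model itself.

    # Illustrative sketch: tracking the call center error-rate KPM.
    # All numbers are hypothetical; only the 5-per-100 baseline idea comes from the example above.

    def error_rate(errors: int, orders: int) -> float:
        """Return errors per 100 orders processed."""
        return 100.0 * errors / orders

    baseline = error_rate(errors=25, orders=100)        # e.g., 25 errors per 100 orders before training
    post_training = error_rate(errors=6, orders=100)    # e.g., 6 errors per 100 orders after training

    print(f"Baseline KPM: {baseline:.1f} errors per 100 orders")
    print(f"Post-training KPM: {post_training:.1f} errors per 100 orders")
    print(f"Change: {baseline - post_training:.1f} fewer errors per 100 orders")

Comparing the same metric over the same measurement window before and after training is what later allows the behavior change to be tied back to the KPM.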
If job performance standards have been identified, then the types of objectives needed become evident, because the trainer knows that the job performance standards reflect the best practices that achieve the desired outcomes. Once the trainer knows the job standards, he or she considers the needed performance and determines the objectives. The trainer might conclude that the objectives should cover learning about the different types of errors made, or how to avoid them. Connecting the training goal to the objectives in the course essentially maps out the steps needed to reach the goal.

Measurable Objectives Make Training Measurable
Before we consider the next design steps that make training measurable, let's consider what makes anything measurable. For something to be measurable, the process needs to be:
1. Easy
2. Observable
3. Repeatable
The training industry consistently uses knowledge and skills tests and end-of-course surveys to assess training. The popularity of these methods clearly demonstrates that they are easy to complete. These types of tools take care of the first requirement: easy to measure. The second requirement is a process that is observable. Measurable training requires an observable means of seeing knowledge or skill gain. Currently, we can measure an increase in knowledge through a knowledge test and the acquisition of skills through a skills test, so that meets our second requirement for measurement. The final need is repeatability. If we use knowledge and skills tests, then we can easily repeat those tests.

The real challenge in measuring training does not come from having easy methods to measure (criterion one) or a repeatable process (criterion three). The challenge comes with criterion two: observing knowledge acquisition and skill gain. The key to meeting criterion two hinges on how course objectives are written. How many times have you taken a training course with an ambiguous objective like "understand Excel," "know how to give feedback," or even "be aware of a policy"? None of these objectives can be measured, because they are stated in terms so broad that no one can observe (at least in the same way) when they are complete. Without clear parameters, those who observe the behavior must rely on their own interpretation of the conditions of performance or the level of performance required for the task.

Creating Measurable Training Objectives
For training program success to be observable, you need measurable training objectives. A measurable training objective is an objective that is clearly defined so that the outcomes can be quantified. General objectives describe the content area to be addressed by training but often leave the details of the performance up to interpretation. A measurable objective, on the other hand, includes the required performance, the conditions of performance, and the criteria of successful performance, allowing for a very specific outcome. These three characteristics form an objective that can become the cornerstone of what to teach, how to teach it, and how to measure the success of what was taught.

In our earlier example, where a call center's error rate increased in order processing, the business manager may determine that the call center representatives are not using Excel properly, which is leading to increased errors. The KPM is the number of errors. The learning goal is to know best practices in Excel to reduce errors when processing orders, and the objectives are the specific tasks necessary (based on the gap analysis) to use Excel effectively. Each measurable objective would address the identified gap in knowledge and skill in using Excel on the job. Perhaps the objective would look something like this: "Students will be able to create formula X using Y data and arrive at Z result." This objective can be easily understood by both the manager and the trainer, it can be easily viewed against the job performance standard, and it can be easily measured by observing the students' ability to complete it. Since the identified KPM is supported by the goal of the course, and the goal of the course is supported by the objectives, when the objectives are realized they have the ability to impact both the job and the organization.
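To illustrate how the three parts of a measurable objective (performance, conditions, and criteria) fit together, here is a minimal sketch that models the Excel objective as a simple data structure. The class, field names, and the example condition are illustrative assumptions rather than MID terminology; the X, Y, and Z placeholders are kept from the objective above.

    # Illustrative sketch: representing a measurable objective as data.
    # The class, field names, and example condition are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class MeasurableObjective:
        performance: str   # what the learner must do
        conditions: str    # the circumstances under which it is done
        criteria: str      # the standard that defines success

        def statement(self) -> str:
            return f"{self.conditions}, the student will {self.performance}, {self.criteria}."

    excel_objective = MeasurableObjective(
        performance="create formula X using Y data",
        conditions="Given the order-processing spreadsheet",  # assumed condition for illustration
        criteria="arriving at Z result",
    )
    print(excel_objective.statement())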
In addition, measurable objectives that are written to represent the conditions of the job and the standards of performance provide clarity on the instructional strategies needed to help the students acquire the necessary knowledge and skill.

Linking Measurable Objectives to Instructional Strategies
Once you have objectives, you can choose the best teaching method to achieve those objectives. Choosing the best strategy to ensure learners are able to acquire the needed knowledge and skill is a key competency of the instructional designer. Instructional designers choose strategies to address what needs to be learned and the level at which it needs to be learned. Let's consider the following example, where the manager believes that increased customer complaints of rudeness are caused by an increase in stress on the job:

Course KPM: Reduce customer complaints of rudeness.
Course goal: Provide stress management techniques to use at the customer service desk.
General course objective: The student will use stress-reducing techniques.
Measurable course objective: When working at the call center desk, the student demonstrates the 30-second stress-reducing technique, including all the steps in the correct order.
The general course objective is not measurable because what constitutes using the technique is not clear. The measurable course objective, however, specifically identifies what the student will know and be able to do. In the measurable objective, we know the students need to be aware of the technique while operating in their work environment. We know they need to use the 30-second technique while on the job and be able to execute all steps of the technique in the proper order. It is much easier to determine whether the objective has been achieved when it provides this level of detail.

When objectives are measurable, they enable both the training professional and the business manager to clearly understand what the student needs to learn from training and to agree on the specific performance objectives that will reach the intended business goal. In our example, the measurable objective helps the instructional designer know what instructional strategy is necessary to reinforce the timing of the stress-reducing technique and what strategy will enable the student to absorb all steps in the process. No matter which strategy is chosen, knowing the performance conditions and performance criteria gives the instructional designer the information required to select the instructional strategy that will achieve the intended result. With this information, the designer can determine the methods of learning and practice necessary to achieve proficiency.

Linking Measurable Objectives to Knowledge or Skill Mastery
Just as the measurable objective provides the information necessary to determine a good instructional strategy, it also provides vital information on the best way to measure the objective. The type of question you use to measure your objective is directly related to the condition and criteria of the objective. To continue our example:

General course objectives:
The student will know stress-reducing techniques (knowledge objective).
The student will use stress-reducing techniques (skill objective).

Measurable course objectives:
When working at the call center desk, the student knows the 30-second stress-reducing technique, including all the steps in the correct order (knowledge objective).
In between taking calls, the student displays the 30-second stress-reducing technique with all of the steps completed in order (skill objective).

For knowledge objectives, you need to consider how detailed or in-depth a student's understanding of the material must be to achieve the objective. The verb chosen aligns with the condition and criteria to determine the type of mastery. For instance, does the student need to recall the information immediately, or will the student have the benefit of a job aid when needed? Do students need to recognize the answer, or do they need to produce the information without prompting? Answers to these questions are found in the stated condition (when working at the call center desk). The condition and criteria help identify the type of evaluation because they provide the parameters for success. If, while on the job, students will only need to recognize the correct information, then perhaps a multiple-choice question is sufficient. If, on the other hand, students need to understand a concept, perhaps an open-ended question is the best way to test understanding. In our example, the student needs to know the right technique to use (the 30-second technique) and needs to know all of the steps in order (the criteria).
A multiple-choice question may be sufficient to determine knowledge of the correct technique, but a checklist may be needed if students are to list the steps in order. For our skill objective, the student needs to use the technique in an environment with limited time available. Perhaps a role play that simulates the customer service environment would be best, or a demonstration and observation would enable students to become comfortable enough with the skill to complete it in a distracting environment. These are the decisions of the instructional designer, but they are much easier to make when the objective is specific.

When an instructional designer is clear on the conditions under which the student must perform and the level to which the student needs to perform, it is easier to create a program that enables achievement of the goal. The conditions on the job and the skill level needed are paramount to ensuring the proper rigor in the instructional strategy and in the mastery demonstrated by the student. Only by considering the job conditions and skill requirements can the instructional designer hope to design and measure a program that leads to behavior change on the job and influences the metric that reflects that behavior change.
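The selection logic described in this section can be summarized in a simplified rule of thumb, sketched below. The categories and suggested formats are an illustrative restatement of the considerations above, not a prescribed part of MID.

    # Illustrative sketch: a simplified rule for matching an objective to an
    # assessment format. Categories and return values are assumptions for illustration only.

    def suggest_assessment(objective_type: str, recall_required: bool) -> str:
        if objective_type == "knowledge":
            if recall_required:
                # The learner must produce the information without prompting.
                return "open-ended question or ordered-steps checklist"
            # The learner only needs to recognize the correct information on the job.
            return "multiple-choice question"
        if objective_type == "skill":
            # Observable performance under realistic job conditions.
            return "role play, simulation, or demonstration with an observation checklist"
        raise ValueError("objective_type must be 'knowledge' or 'skill'")

    print(suggest_assessment("knowledge", recall_required=True))
    print(suggest_assessment("skill", recall_required=True))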
Connecting the Design to the Measurement
Now that we understand the design elements necessary for measurable training, it helps to consider at what specific points we are able to measure our success. The points where we find the links in our chain of evidence provide a good guide. Our industry has focused on these points (levels) of measurement for many years. The first major framework in training measurement was introduced in 1959, when Donald Kirkpatrick examined levels of training impact. He introduced four levels of measurement:

Level 1, Reaction: What does the student think and feel about the training?
Level 2, Learning: How much did the knowledge and skill of the student increase?
Level 3, Job Change: Did the student's behavior on the job change due to the training?
Level 4, Results: Did the organization change due to the training?

Later, Jack Phillips expanded on Kirkpatrick's model. He suggested that linking business results to training is the ultimate level of evaluation. He said, "The process isn't complete until the results have been converted to monetary values and compared with the cost of the program." In other words, Phillips created a fifth level that converts results into ROI, which can be stated like this:

Level 5, ROI: Did the training result in a monetary benefit to the company? Did that benefit exceed the cost of the training?

To measure results at each level of impact, you must design a program that identifies learning in the classroom, the application of learning on the job, and the results of that application on the organization. This type of design measures Kirkpatrick's Levels 2 to 4. To measure the financial component and compute ROI, you need to convert Level 4 (changes to the organization) into financial terms.

By designing with the end in mind, you create steps in the process that represent each area you want to measure. Creating these steps ensures you can measure the impact on those areas. The MID model was specifically created for this purpose. Each step considers the intended point of impact. Each step is linked to the next, weaving measurement and evaluation into the entire process. Matching the method of design to the points of impact creates a complete chain of evidence. This chain shows the impact of training on the student, on the job, and on the organization. The diagram that follows shows how the components of MID map to the levels of measurement; the specific steps of the MID model support the measurement of Kirkpatrick Levels 2 to 4. Using MID, you can measure your program because you include the requirements for each level of measurement in your design.

Steps in Instructional Design:
Identify the key performance metric to be impacted.
Align job requirements to the program using standards.
Use training objectives that include conditions, criteria, and performance on the job.
Tie mastery directly to objectives.
Use instructional strategies appropriate for knowledge and skills measurement.

Points to Evaluate the Program:
Knowledge or skill gain in the classroom (Level 2).
Application of new knowledge and skills on the job (Level 3).
Changes to the organization (Level 4).
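To illustrate the Level 5 calculation, the sketch below applies the ROI formula commonly associated with Phillips's approach: ROI (%) = (net program benefits / program costs) x 100. The monetary figures are hypothetical and would, in practice, come from converting Level 4 results into financial terms.

    # Illustrative sketch of a Level 5 (ROI) calculation.
    # ROI (%) = net program benefits / program costs * 100. All figures are hypothetical.

    def training_roi(monetary_benefits: float, program_costs: float) -> float:
        """Return ROI as a percentage of program costs."""
        net_benefits = monetary_benefits - program_costs
        return 100.0 * net_benefits / program_costs

    benefits = 150_000.0   # e.g., value of errors avoided after training (Level 4, monetized)
    costs = 60_000.0       # e.g., design, delivery, and participant time
    print(f"ROI: {training_roi(benefits, costs):.0f}%")   # prints: ROI: 150%

An ROI of 150% in this hypothetical case would mean the program returned $1.50 in net benefit for every dollar spent.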
How Measurable Instructional Design Impacts Training Evaluation
Measurable Instructional Design enables training professionals to clearly demonstrate how their training design creates the behavior changes that can impact performance. It also allows training professionals to achieve predictable results because it tests impact against a specific design. Most importantly, the MID model allows training professionals to truly see their impact on an organization. If we begin to understand the influence that design has on results, we learn what training can (and cannot) do. Once we become clear on how we make an impact, we can focus on delivering training that improves business results, backed by measurable data.

Summary
Developing measurable training requires a shift in mindset. As instructional designers, we need to shift our thinking toward specifically addressing business indicators (KPMs), because those indicators are what our stakeholders use to determine that there is a training need and what they will use to determine our success. We need to solidify the job performance standards with our stakeholders so we can create training that impacts job performance. Without this step, trainers face a continuously moving target and will be unable to empirically show the connections between training and improved performance. Once trainers are aware of the business requirements, they can more easily connect the objectives, instructional methods, and evaluations to meet those requirements. Ensuring the precision of objectives and the connections between all five steps in the MID process creates measurable training. This design standard ensures your ability to measure training programs and show their impact on an individual, a job, and ultimately an organization.
About eparamus
eparamus (www.eparamus.com) helps organizations determine the value and ROI of their training programs. eparamus helps you:

Develop measurable training programs and quantify their effectiveness.
Incorporate business goals in your design to align with stakeholders.
Collaborate with your business counterparts to create a partnership in results.
Show the before, during, and after progress from all learning programs.
Translate the language of training into the language of business.
Prove the value of your training team by providing a real ROI for your training programs.

eparamus gives you what no other training provider can: an easy way to measure knowledge gained and skills used on the job due to the training. eparamus delivers hard facts on how the training positively affected your bottom line.

About Laura Paramoure, EdD
Laura Paramoure, EdD, is president and CEO of eparamus. She has more than 25 years of academic and private sector experience in organizational development, training measurement, and training design. She recently authored ROI by Design, available on Amazon.com. Dr. Paramoure regularly speaks about training ROI at seminars for organizations such as the Association for Talent Development (ATD) and the International Society for Performance Improvement (ISPI).
© 2015 eparamus. All rights reserved.
8601 Six Forks Road, Suite 400
Raleigh, NC 27615
Email: info@eparamus.com
Main: (919) 882-2108
Fax: (919) 926-1404