Using Indicators for Program Planning and Evaluation




EVALUATION GUIDE

National Center for Chronic Disease Prevention and Health Promotion
Division for Heart Disease and Stroke Prevention

Acknowledgments

This guide was developed by the Centers for Disease Control and Prevention, Division for Heart Disease and Stroke Prevention. State Heart Disease and Stroke Prevention programs were invaluable in the development of this evaluation guide. We extend special thanks to:

• Katrina D'Amore, Massachusetts Department of Public Health
• Ruth Dufresne, Maine Department of Health and Human Services
• Bonnie Eisenberg, New York State Department of Health

Suggested Citation

Rogers T, Chappelle EF, Wall HK, Barron-Simpson R. Using DHDSP Outcome Indicators for Policy and Systems Change for Program Planning and Evaluation. Atlanta, GA: Centers for Disease Control and Prevention; 2011.

INTRODUCTION

Purpose

This guide is one in a series of Program Evaluation Guides developed by the Centers for Disease Control and Prevention (CDC), Division for Heart Disease and Stroke Prevention (DHDSP), to assist in the evaluation of heart disease and stroke prevention activities within states. It is intended for use by National Heart Disease and Stroke Prevention (NHDSP) program grantees, state health departments, advocates, evaluators, and researchers in program and evaluation planning using the DHDSP Outcome Indicators for Policy and Systems Change. This evaluation technical assistance tool is best used in conjunction with other Program Evaluation Guides in the series:

• Writing SMART Objectives [1]
• Developing and Using a Logic Model [2]
• Developing an Evaluation Plan [3]

Background

Over the past decade, various evaluation groups within CDC have developed and disseminated health indicator systems intended to influence program planning, measure performance relative to resource investment, and assess progress toward program objectives and goals for state and national chronic disease prevention and control initiatives [4-7]. Although development and implementation efforts may vary, CDC focuses on the central role indicators play in program planning and evaluation. These efforts also share a general understanding of the definition of an outcome indicator as a "specific, observable, and measurable characteristic or change that will represent achievement of the outcome" [8].

In 2005, DHDSP began its process to identify, develop, and disseminate evidence-based outcome indicators for policy and systems changes that NHDSP-funded state programs and the Division could use for planning and evaluating prevention and control initiatives in the DHDSP priority areas at that time:

• Increase control of high blood pressure.
• Increase control of high cholesterol.
• Improve emergency response, which includes awareness of signs and symptoms of heart attack and stroke and the need to call 9-1-1.
• Improve quality of care.

Development of DHDSP outcome indicator materials followed a process based on the methods used by the CDC Office on Smoking and Health (OSH) to develop its Key Outcome Indicators for Evaluating Comprehensive Tobacco Control Programs guide [5].

Candidate indicators in each DHDSP priority area were identified through extensive literature reviews within each area and setting (i.e., health care, worksite, community). Indicators were nested within components (boxes) of logic models constructed for each DHDSP priority area, such as the Controlling High Blood Pressure Logic Model displayed in Figure 1.

Figure 1: HDSP Controlling High Blood Pressure Logic Model

[Diagram: inputs, activities, and outputs feed a chain of linked outcome boxes.]

Short-term outcomes:
• Box 1. Health Care System Changes: adherence; efficiency; policies/protocols/tools
• Box 2. Provider Changes: awareness; adherence to guidelines
• Box 3. Worksite Changes: policies/protocols/tools; environmental changes
• Box 4. Community Changes: environmental changes; policy/legislative changes
• Box 5. Barriers and Facilitators to Individual Change: awareness; motivation; satisfaction; treatment costs

Intermediate outcomes:
• Box 6. Risk Factor Reduction through Lifestyle and Therapeutic Intervention
• Box 7. Reduced Levels of Blood Pressure
• Box 8. Increased Control of Blood Pressure Levels among Adults with High Blood Pressure

Long-term outcomes:
• Box 9. Reduced Mortality and Morbidity Due to Heart Disease and Stroke
• Box 10. Reduced Disparities in Heart Disease and Stroke
• Box 11. Reduced Costs Associated with Heart Disease and Stroke: health care; employer; societal

Contextual factors: socioeconomic and demographic characteristics of the target population; participating organizations' policies and practices; health care industry practice trends and policies; partnerships among patients, providers, health care organizations, and worksites.

Logic model boxes were arranged in a manner that demonstrates a clear conceptual link between short-term, intermediate, and long-term outcomes. In other words, if a change occurred in one box under short-term outcomes, then one would expect an impact or change in consequent linked boxes under the intermediate and long-term outcomes. This concept reflects the DHDSP theory of change as described in the Public Health Action Plan to Prevent Heart Disease and Stroke [9], Developing an Evaluation Plan [3], and other NHDSP program guidance materials [10,11]. A panel of experts representing academic research, NHDSP-funded state programs, and DHDSP was recruited to review potential indicators for each cardiovascular health priority area.

Information on the development of, and criteria used for selecting, the DHDSP Outcome Indicators for Policy and Systems Change can be found in the indicator summary books [12-14].

Although this guide will focus primarily on the use of heart disease and stroke prevention (HDSP) outcome indicators in evaluation of policy, systems, and environmental approaches to heart disease and stroke prevention, we will also note how indicators may be used in planning such interventions. While there are many excellent evaluation planning guides, including the CDC Framework for Program Evaluation in Public Health [15], we have chosen to organize this guide in the context of HDSP logic model pathways and associated outcome indicators. To begin, we will describe the benefits of identifying a desired achievable outcome.

USING INDICATORS FOR PROGRAM PLANNING AND EVALUATION

Using Indicators for Program Planning

Indicators can be used in program planning by first selecting an achievable outcome and then determining the logic model pathway for the selected outcome. Although it is understood that NHDSP-funded state programs are designed to reduce morbidity and mortality from heart disease and stroke, the policy, systems, and environmental change strategies implemented under DHDSP funding address primarily short-term outcomes within HDSP logic models. The social-ecological theory of change [16-18] explicit in HDSP logic models suggests that HDSP policy and systems change approaches that successfully affect short-term outcomes will, with sufficient time and sustained effort, also affect intermediate outcomes related to behavioral and physiological risk factors among individuals affected by the environmental changes (Figure 2). Interventions designed to have an impact on individual-level health behavior and risk factors directly, bypassing policy and systems changes, may be expensive, inefficient, and unsustainable at the population level. In program planning, policy and systems change strategies should correspond to short-term outcome indicators.

Figure 2: Outcome Relationships for Controlling High Blood Pressure and Controlling High Cholesterol

• Policy and Systems Change (short-term outcomes): health care systems changes; health care provider changes; worksite changes; community changes
• Behavior Change and Risk Factor Reduction (intermediate outcomes): individual behavior changes; medication adherence; average blood pressure/cholesterol reductions; increased control of blood pressure/cholesterol
• Reduced Death and Disability (long-term outcomes): longer-term reduction of morbidity, death, and cost

Outcome Indicators along the Logic Pathway

Measuring short-term outcomes along a logic model pathway permits programs to identify gaps in program implementation before completing a comprehensive evaluation that focuses on long-term outcomes. Identification of these gaps allows programs to modify interventions early on, thereby improving the chance of having an impact on the ultimate outcome while also saving time, resources, and opportunity costs. Once the logic model pathway has been developed, programs may then select one or more indicators from each logic model box along the pathway.

A pathway through a logic model implies a series of if-then statements. For example, if a project works with health care systems to implement policies supporting electronic health records appropriate for treating patients with high blood pressure (e.g., with clinical decision supports and registry capability), then over time, systemic changes should affect various aspects of provider behavior. Given sustained changes in provider behavior, eventually there will be an impact on individual-level patient attitudes, behaviors, risk factors, average blood pressure, and, ultimately, blood pressure control. Table 1 shows example indicators that could be chosen for monitoring and evaluation along this pathway (a sketch of one way to record such a pathway follows the table). This example illustrates how, over time, aggregate changes in upstream outcomes along the hypothetical pathway will lead to measurable changes in the ultimate outcomes.

Table 1. Example Pathway of Indicators for an Electronic Health Record Initiative

Box 1: Health Care Systems Changes
• 1.1.3 Proportion of health care systems with electronic health records appropriate for treating patients with high blood pressure
• 1.1.4 Proportion of health care systems with computer-based clinical decision support systems appropriate for treating adults with high blood pressure

Box 2: Health Care Provider Changes
• 1.2.5 Proportion of providers who follow current evidence-based guideline algorithms for pharmacological therapies to treat high blood pressure

Box 5: Individual Changes
• 1.5.6 Proportion of adults who have visited a health care provider according to current evidence-based guidelines for treatment of high blood pressure

Box 6: Risk Factor Reduction
• 1.6.9 Proportion of adults with high blood pressure in adherence to blood pressure-lowering medication regimens

Box 7: Reduced Levels of Blood Pressure
• 1.7.1 Average blood pressure levels among adults with known high blood pressure

Box 8: Increased Blood Pressure Control
• 1.8.1 Proportion of adults with known high blood pressure who have achieved blood pressure control as defined by current evidence-based guidelines
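One simple way to operationalize this selection step is to record the chosen indicators for each box along the pathway in a structured form that evaluation staff can iterate over. The minimal Python sketch below uses the Table 1 pathway; the data structure and function name are ours for illustration, not part of the DHDSP materials.

```python
# Minimal sketch: the Table 1 pathway as an ordered mapping from logic model
# box to the indicators selected for monitoring and evaluation. Box labels
# and indicator numbers are taken from Table 1; the structure is illustrative.

EHR_PATHWAY = [
    ("Box 1: Health Care Systems Changes", ["1.1.3", "1.1.4"]),
    ("Box 2: Health Care Provider Changes", ["1.2.5"]),
    ("Box 5: Individual Changes", ["1.5.6"]),
    ("Box 6: Risk Factor Reduction", ["1.6.9"]),
    ("Box 7: Reduced Levels of Blood Pressure", ["1.7.1"]),
    ("Box 8: Increased Blood Pressure Control", ["1.8.1"]),
]

def monitoring_checklist(pathway):
    """List every (box, indicator) pair along the pathway, upstream boxes
    first, so short-term indicators are measured before downstream ones."""
    return [(box, ind) for box, indicators in pathway for ind in indicators]

for box, indicator in monitoring_checklist(EHR_PATHWAY):
    print(f"{indicator}  <-  {box}")
```

Keeping the boxes in upstream-to-downstream order preserves the if-then logic of the pathway: a gap detected at an upstream indicator flags trouble before resources are spent measuring the ultimate outcomes.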

Indicators and SMART Objectives

Indicators can also serve as the basis for developing HDSP program work plan objectives. Program objective development is covered in more detail in two DHDSP Program Evaluation Guides: Developing an Evaluation Plan [3] and Writing SMART Objectives [1]. These guides will help in constructing clear, concise program objectives in collaboration with stakeholders and partners using the SMART approach. When developing a SMART objective using selected indicators, programs may shape the indicator to the local context but should avoid changing the intent of the indicator. Adding specificity to the objective defines various aspects of the indicator, such as who is being affected by the intervention, by when, and where; however, the how or what of the intervention should match the intent of the indicator.

Selected Outcome Indicator
1.1.1 Proportion of health care systems with policies or systems to encourage a multidisciplinary team approach to enhance blood pressure control.

SMART Objective
By June 2012, increase from 25% to 50% the percentage of outpatient clinics in the Gesundheit Hospital System serving Blackwell, Brown, and Zapata counties that have written policies to encourage and support a multidisciplinary team approach to enhance high blood pressure control among patients with high blood pressure.
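The measurable part of this example objective reduces to simple arithmetic: moving from 25% to 50% of clinics with written policies. A minimal sketch of that arithmetic follows; the clinic counts are hypothetical and chosen only for illustration, not drawn from the guide.

```python
import math

# Hedged sketch: tracking progress toward the example SMART objective's
# 25% -> 50% target. All counts below are hypothetical.

TOTAL_CLINICS = 40          # outpatient clinics in the hypothetical system
BASELINE_WITH_POLICY = 10   # 10/40 = 25% have written policies at baseline
TARGET_PROPORTION = 0.50    # the objective's June 2012 target

def clinics_still_needed(current_with_policy: int,
                         total: int = TOTAL_CLINICS,
                         target: float = TARGET_PROPORTION) -> int:
    """Number of additional clinics that must adopt a written policy
    before the proportion reaches the target."""
    return max(0, math.ceil(target * total) - current_with_policy)

print(clinics_still_needed(BASELINE_WITH_POLICY))  # -> 10 (20 of 40 clinics)
```

Framing the target in absolute counts as well as percentages helps programs see exactly how much adoption the intervention must produce by the objective's deadline.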

After establishing program objectives using the indicators, programs are ready to evaluate their intervention activities.

Using Indicators for Evaluation

As noted earlier, the CDC Framework for Program Evaluation in Public Health is an excellent evaluation planning guide that has been used by a wide range of state and local public health promotion and disease prevention programs. In addition, DHDSP has provided evaluation guidance to HDSP-funded state programs; it may be found at www.cdc.gov/dhdsp/programs/nhdsp_program/evaluation_guides. This section is not intended to duplicate what has already been published, but rather to provide an overview of some fundamental considerations for HDSP programs and partners as they incorporate DHDSP outcome indicators into evaluation planning and implementation. We will focus especially on the evaluation issues that need to be addressed by evaluators and relevant stakeholders, as illustrated in an adaptation from the CDC Framework for Program Evaluation in Public Health.

Work with Evaluation Stakeholders

Evaluation stakeholders are individuals or groups who will be affected by or benefit from the evaluation. They can assist by helping to determine the focus of the evaluation, develop key evaluation questions, identify potential data sources, and use evaluation results. Evaluation stakeholders may also aid in the process of selecting appropriate indicators for use in the evaluation.

Example HDSP Evaluation Stakeholders
• Epidemiologists
• HDSP Program Managers
• State Chronic Disease Directors
• Clinical Staff
• Worksite Coordinators
• Advocates

Keep the End in Mind

During the planning stage, the scope of an evaluation can easily expand as ideas are generated and new input is obtained from stakeholders. By staying focused on how evaluation results will be used by the program and its stakeholders, evaluators will be better able to distinguish between information that must be known and information that would be nice to know. Being aware of the intended use of the evaluation findings will help to ensure that essential information is obtained without extraneous, costly distractions and overly burdensome evaluation methods. Even during the planning stage, it is not too early to consider a dissemination plan for the evaluation results.

Generate Key Evaluation Questions

A critical step in planning is identification of the key questions that will be answered by the evaluation. The intended use of evaluation results often dictates the type of key evaluation questions selected. Incorporating multiple perspectives when developing key evaluation questions will help to generate buy-in for the evaluation. If, for example, stakeholders are primarily interested in health impacts, it will be important to identify key evaluation questions focused on longer-term program outcomes. If, on the other hand, stakeholders are especially interested in the fidelity of the program implementation, evaluation questions should address program processes. Helping to create evaluation questions that can be readily understood by all audiences is an excellent role for program stakeholders involved in evaluation planning.

Sample Evaluation Questions
• What seems to be working?
• What needs to be changed?
• How do we know that we've been successful?
• What have we accomplished?
• Are we making a difference?
• What's the evidence that things are working?
• How should we do it differently next time?

Incorporate Indicators into the Evaluation Plan

As noted earlier, DHDSP outcome indicators may be used to shape program plans and to generate SMART program objectives.

Indicators also aid in the selection of evaluation methods and measures and can inform data analysis and reporting efforts. The DHDSP Program Evaluation Guide Developing an Evaluation Plan [3] includes a template that may be used to organize the evaluation questions, indicators, data collection methods, measures, timing, and data analysis plan for each program SMART objective. Table 2 is a partially completed evaluation template relevant to the example objective we have been using.

Table 2. Sample Evaluation Plan Using DHDSP Outcome Indicators

Objective: By June 2012, increase from 25% to 50% the percentage of health care systems in Blackwell, Brown, and Zapata counties with policies to encourage a multidisciplinary team approach to enhance high blood pressure control.

Column guide:
• Evaluation Questions: What do you want to know?
• Indicator(s): Which measure(s) will answer the question?
• Data Collection (Source, Method, Timing): How will you collect the data? What data collection technique(s) will you use? When and how often will you collect the data?
• Data Analysis: What type of analysis will you perform?

Evaluation Question 1: To what extent have clinics involved in our initiative implemented electronic health records appropriate for treating patients with HBP?
• Indicator 1.1.3 (a) — Source: survey. Method: self-report from a clinic administrator or HDSP staff site visits. Timing: baseline and annually. Data analysis: quantitative analysis using statistical software (basic frequencies and means); qualitative analysis using qualitative data analysis software (themes; depth to support quantitative analysis).
• Indicator 1.1.4 (b) — Source: survey. Method: self-report from a clinic administrator or HDSP staff site visits; screen shot verification. Timing: baseline and annually. Data analysis: quantitative analysis using statistical software (basic frequencies and means); qualitative analysis using document review.

Evaluation Question 2: Has the initiative changed provider behavior?
• Indicator 1.2.5 (c) — Source: EHR data extraction; physician survey. Method: changes in e-prescription patterns (e.g., dosage increases, medication changes or additions); self-reported physician attitudes and behaviors. Timing: baseline and annually. Data analysis: quantitative analysis using statistical software (basic frequencies and means).

(a) 1.1.3 Proportion of health care systems with electronic health records appropriate for treating patients with high blood pressure.
(b) 1.1.4 Proportion of health care systems with computer-based clinical decision support systems appropriate for treating adults with high blood pressure.
(c) 1.2.5 Proportion of providers who follow current evidence-based guideline algorithms for pharmacological therapies to treat high blood pressure.
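For the "basic frequencies and means" called for in the data analysis column, a minimal sketch of what that computation might look like in Python with pandas follows. The survey records, column names, and values are hypothetical, invented only to illustrate computing the proportions behind indicators 1.1.3 and 1.1.4.

```python
import pandas as pd

# Hypothetical annual survey of clinic administrators: one row per clinic,
# with yes/no fields corresponding to indicators 1.1.3 and 1.1.4.
survey = pd.DataFrame({
    "clinic": ["A", "B", "C", "D", "E"],
    "has_ehr_for_hbp": [True, True, False, True, False],                 # 1.1.3
    "has_clinical_decision_support": [True, False, False, True, False],  # 1.1.4
})

# Basic frequencies: the proportion of surveyed systems meeting each
# indicator (the mean of a boolean column is the proportion of True values).
proportions = survey[["has_ehr_for_hbp", "has_clinical_decision_support"]].mean()
print(proportions)
# has_ehr_for_hbp                  0.6
# has_clinical_decision_support    0.4
```

Repeating the same computation on each annual wave of the survey yields the baseline-and-annually trend that the evaluation plan calls for.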

Although indicator-driven evaluation planning is being promoted in this Evaluation Guide, collecting information only on DHDSP outcome indicators is not sufficient for a complete evaluation of an NHDSP-funded state program. A full program evaluation would likely also include a process component designed to gather and compile lessons learned, program implementation challenges, and unexpected findings, among other features [19].

Create Systems and Processes

As systems are put in place to support the intervention, consider what processes may be included to assist with the evaluation.

• Are there any forms that need to be completed for the evaluation?
• Will interagency agreements be needed to access necessary evaluation data?
• Will contact information and informed consent be needed from participants to allow for follow-up data collection?
• Is it possible to incorporate the collection of indicator data within the administrative aspects of a program or intervention?
• Can implementation site reporting requirements include data elements that may be useful for the evaluation?

Answering these and related questions can help forge synergies among the systems supporting program implementation and evaluation.

Revisit the Evaluation Plan: Stop. Consider. Decide.

Evaluation plans are expected to be dynamic documents, not to be ignored or forgotten once implementation begins. As program and evaluation efforts unfold, state HDSP staff and stakeholders might need to revisit the evaluation plan to add, delete, or modify components or make other necessary changes. Once data are collected, analyzed, and interpreted, fresh perspectives may trigger a cascade of new evaluation questions. When this occurs, consider whether the new information needs to be added to the current evaluation or whether further exploration can be included in a future program evaluation; keep the focus on what needs to be known now instead of what would be nice to know later.

Reflect on Experiences and Share Findings

Often, evaluation results are compiled into a report and submitted to the funding agency without sufficient attention being paid to understanding the meaning of the findings and their implications for future work. If the results obtained fall short of expectations (e.g., no statistically significant changes were seen in the pre-post assessment of Indicator 1.5.6, "Proportion of adults who have visited a health care provider according to current evidence-based guidelines for treatment of high blood pressure"), possible explanations include:

a. The precursor indicator(s) did not change sufficiently, suggesting that the policy, systems, or environmental change intervention was not powerful enough, or raising the possibility of program implementation failure;
b. The evaluation design was poorly constructed or improperly followed;
c. Measurement methods and variables were biased or insufficiently precise to detect any actual changes in outcomes; or
d. The program logic model (theory of change) was incorrect.
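Before settling on one of these explanations, it is worth confirming how the pre-post comparison itself was tested. A minimal sketch of a two-proportion z-test for a proportion-type indicator such as 1.5.6 follows, using the statsmodels library; the counts and sample sizes are hypothetical, invented only for illustration.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical pre-post counts for Indicator 1.5.6: adults who visited a
# provider per current guidelines, out of all sampled adults with high
# blood pressure at each wave. All numbers are illustrative.
count = [132, 151]   # "yes" responses at baseline and at follow-up
nobs = [400, 410]    # sample sizes at baseline and at follow-up

z_stat, p_value = proportions_ztest(count, nobs)
print(f"baseline: {count[0]/nobs[0]:.1%}, follow-up: {count[1]/nobs[1]:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A non-significant p-value with samples this size would be consistent with
# explanation (c): the study may be underpowered to detect a modest change.
```

Pairing a check like this with the process evaluation described below helps distinguish "no effect" from "an effect too small for this design to detect."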

A carefully conducted process evaluation is an excellent way to shed light on outcome evaluation results and explore various interpretations of the findings.

Evaluation takes time and resources. If evaluation results are not used, then the efforts that went into their planning and implementation may be wasted. To maximize the utility of the results, we suggest that a dissemination plan be developed in collaboration with key stakeholders as the evaluation plan is being developed. Programs are encouraged to submit the results of their evaluations to CDC in their interim and annual progress reports. Keep in mind that some information is better than no information. Don't let the perfect be the enemy of the good [20].

References

1. Centers for Disease Control and Prevention. Writing SMART Objectives. Atlanta, GA: US Department of Health and Human Services; 2006.
2. Centers for Disease Control and Prevention. Developing and Using a Logic Model. Atlanta, GA: US Department of Health and Human Services; 2006.
3. Centers for Disease Control and Prevention. Developing an Evaluation Plan. Atlanta, GA: US Department of Health and Human Services; 2006.
4. US Department of Health and Human Services. Healthy People 2020. Available from http://www.healthypeople.gov/2020/default.aspx.
5. Starr G, Rogers T, Schooley M, Porter S, Wiesen E, Jamison N. Key Outcome Indicators for Evaluating Comprehensive Tobacco Control Programs. Atlanta, GA: Centers for Disease Control and Prevention; 2005.
6. Kettel Khan L, Sobush J, Keener D, Goodman K, Lowry A, Kakietek J, et al. Recommended community strategies and measurements to prevent obesity in the United States. MMWR. 2009;58(RR07):1-26.
7. Metzler M, Kanarek N, Highsmith K, Straw R, Bialek R, Stanley J, et al. Community health status indicators project: The development of a national approach to community health. Prev Chronic Dis. 2008;5(3):A94.
8. United Way of America. Measuring Program Outcomes: A Practical Approach. Alexandria, VA: United Way of America; 1996.
9. Centers for Disease Control and Prevention. A Public Health Action Plan to Prevent Heart Disease and Stroke: Executive Summary and Overview. Atlanta, GA: US Department of Health and Human Services; 2003.
10. Centers for Disease Control and Prevention. National Heart Disease and Stroke Prevention Program: Staff Orientation Guide. Atlanta, GA: US Department of Health and Human Services; 2010.
11. CDC-RFA-DP07-704, 2007.
12. Ladd S, Wall HK, Rogers T, Fulmer E, Lim S, Leeks K, et al. Outcome Indicators for Policy and Systems Change: Controlling High Cholesterol. Atlanta, GA: Centers for Disease Control and Prevention; 2010.
13. Wall HK, Ladd S, Rogers T, Fulmer E, Lim S, Leeks K, et al. Outcome Indicators for Policy and Systems Change: Improving Emergency Response. Atlanta, GA: Centers for Disease Control and Prevention; 2010.
14. Wall HK, Ladd S, Rogers T, Fulmer E, Lim S, Leeks K, et al. Outcome Indicators for Policy and Systems Change: Controlling High Blood Pressure. Atlanta, GA: Centers for Disease Control and Prevention; 2011.
15. Centers for Disease Control and Prevention. Framework for program evaluation in public health. MMWR. 1999;48(RR-11):1-40.
16. McLeroy KR, Bibeau D, Steckler A, Glanz K. An ecological perspective on health promotion programs. Health Educ Quart. 1988;15(4):351-77.
17. Stokols D. Translating social ecological theory into guidelines for community health promotion. Am J Health Promot. 1996;10:282-98.
18. Gregson J, Foerster SB, Orr R, Jones L, Benedict J, Clarke B, et al. System, environmental, and policy changes: Using the social-ecological model as a framework for evaluating nutrition education and social marketing programs with low-income audiences. J Nutr Educ. 2001;33(Suppl 1):S4-15.
19. DeGroff A, Schooley M, Chapel T, Poster TH. Challenges and strategies in applying performance measurement to federal public health programs. Eval Program Plann. 2010;33:365-72.
20. Voltaire. La Bégueule, Conte Moral et Stances Sur le Jour de la St. Barthelemy. 1772.