Building Our Understanding: Key Concepts of Evaluation

What is it and how do you do it?

Imagine that you or your research team has just completed a communication intervention designed to reduce smoking among adolescents. Wouldn't you want to know if the intervention worked? That is where evaluation comes in. In this case, we would be conducting a summative evaluation (after the intervention) to answer questions such as: (1) Did rates of smoking among adolescents decrease? (2) Did the radio ads reach enough teens to have statistical power? (3) Did the ads affect norms about smoking in that age group? (4) Was the cigarette tax increase during the evaluation period the real reason that smoking decreased? (5) Was the implementation of a no-smoking policy in the schools the reason that smoking decreased? (6) Was the referral to tobacco quitlines the reason that smoking decreased? If the research team conducted a formative evaluation (before and during the communication intervention), your team would be able to make any necessary changes, such as edits to the radio ads or to when they are played, before continuing forward. If you're still feeling confused, don't worry; the purpose of this introductory section is to provide you with some useful background information on evaluation planning.

What is evaluation?

Evaluations are, in a broad sense, concerned with the effectiveness of programs. While common-sense evaluation has a very long history, evaluation research, which relies on scientific methods, is a young discipline that has grown massively in recent years (Spiel, 2001). Evaluation is a systematic process to understand what a program does and how well the program does it. Evaluation results can be used to maintain or improve program quality and to ensure that future planning can be more evidence-based. Evaluation constitutes part of an ongoing cycle of program planning, implementation, and improvement (Patton, 1987).
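Question (2) in the smoking example above is, at bottom, a sample-size question: before an intervention begins, evaluators often estimate how many participants each group needs for a plausible change to be statistically detectable. Here is a minimal sketch of that calculation using the standard two-proportion formula; the 20% baseline smoking rate, 15% target rate, and the conventional 5% significance and 80% power settings are illustrative assumptions, not figures from this document:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1: float, p2: float,
                                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group needed to detect a change from
    proportion p1 to proportion p2 with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)       # unpooled variance of the difference
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Illustrative: detecting a drop in adolescent smoking from 20% to 15%
# requires roughly 900 teens measured in each comparison group.
print(sample_size_two_proportions(0.20, 0.15))
```

If the evaluation can only reach far fewer respondents than this, the planned outcome comparison may be underpowered, which is exactly the kind of problem that is cheap to discover during planning and expensive to discover afterward.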
Make evaluation part of your health communication program from the beginning; don't tack it on at the end! The evaluation experience is likely to be more positive, and its results are likely to be more useful, if you build evaluation in from the start and make it an ongoing activity. This includes planning the summative evaluation before the intervention begins, as part of the planning process, which helps to clarify program goals and reasonable outcomes. Taken together, you and your research team should know why the evaluation is being undertaken (i.e., performance measurement or improvement) and identify the type of evidence that would be sufficient for your program and stakeholders. By evidence, we generally mean information helpful in forming a conclusion or judgment. In other words, evidence means information bearing on whether a belief or proposition is true or false, valid or invalid, warranted or unsupported (Schwandt,
2009). Recently there has been some confusion in understanding the term "evidence" in evaluation, because it is often taken to be synonymous with the term "evidence-based." Evidence-based, however, has two shortcomings: (1) it is narrowly interpreted to mean that only a specific kind of scientific finding, that is, evidence of causal efficacy, counts as evidence; and (2) the idea of an evidence base suggests that evidence is the literal foundation for action because it provides secure knowledge (Upshur, 2002).

What type of evaluation should I conduct?

Evaluation falls into one of two broad categories: formative and summative. Formative evaluations are conducted during program development and implementation and are useful if you want direction on how best to achieve your goals or improve your program. Summative evaluations should be completed once your programs are well established and will tell you to what extent the program is achieving its goals.

Table 1. The types of evaluation within formative and summative evaluation:

Formative

  Needs Assessment: Determines who needs the communication program/intervention, how great the need is, and what can be done to best meet the need. Involves audience research and informs audience segmentation and marketing mix (4 P's) strategies.

  Process Evaluation: Measures effort and the direct outputs of programs/interventions, that is, what and how much was accomplished (e.g., exposure, reach, knowledge, attitudes). Examines the process of implementing the communication program/intervention and determines whether it is operating as planned. It can be done continuously or as a one-time assessment. Results are used to improve the program/intervention.

Summative

  Outcome Evaluation: Measures effects and changes that result from the campaign. Investigates to what extent the communication program/intervention is achieving its outcomes in the target populations. These outcomes are the short-term and medium-term changes in program participants that result directly from the program, such as new knowledge and awareness, attitude change, beliefs, social norms, and behavior change. Also measures policy changes.

  Impact Evaluation: Measures community-level change or longer-term results (e.g., changes in disease risk status, morbidity, and mortality) that have occurred as a result of the communication program/intervention. These impacts are the net effects, typically on the entire school, community, organization, society, or environment.
Table 2. Which of these evaluations is most appropriate depends on the stage of your program.

If you are not clear on what you want to evaluate, consider doing a best practices review of your program before proceeding with your evaluation. A best practices review determines the most efficient (least amount of effort) and effective (best results) way of accomplishing a task, based on repeatable procedures that have proven themselves over time for large numbers of people. This review is likely to identify program strengths and weaknesses, giving you important insight into what to focus your evaluation on.

How do I conduct an evaluation?

The following six steps are a starting point for tailoring an evaluation to a particular public health effort at a particular time. The steps represent an ongoing cycle rather than a linear sequence, and addressing each of the steps is an iterative process. For additional guidance, consult the Evaluation Planning Worksheet in the Appendix of this document. As each step is discussed, examples from the Violence Against Women campaign implemented in Western Australia will be included.

The Violence Against Women campaign was the first of its kind to target violent and potentially violent men. The campaign taught men that domestic violence is a problem that has negative effects on children and that specific help is available. Program coordinators decided not to apply traditional interventions often used in domestic violence cases because they have not been successful at changing behavior. As a result, the communication intervention included: publications, including self-help booklets providing tips on how to control violence and how to contact service providers; mass media advertising; public relations activities with stakeholders, including women's groups, police, counseling professions, and other government departments; and posters and mailings to worksites (Hausman & Becker, 2000).
1. Engage stakeholders

This first step involves identifying and engaging stakeholders: the individuals who have a vested interest in the evaluation. Find out what they want to know and how they will use the information. Involve them in designing and/or conducting the evaluation. For less involved stakeholders, keep them informed about activities through meetings, reports, and other means of communication (CDC, 1999, 2008; McDonald et al., 2001).

EX: The program planners of the Violence Against Women campaign included internal and external partners as stakeholders. Internal partners were the Director of the Domestic Violence Prevention Unit and the Family and Domestic Violence Task Force. External partners were experts in the fields of social marketing/behavior change, health promotion, communication, and women's issues; the Department of Family and Children's Services; service providers, including trained counselors, therapists, and social workers; and the police. The program planners kept in touch with stakeholders and got input from them throughout the campaign (Turning Point Social Marketing Collaborative, Centers for Disease Control and Prevention, and Academy for Educational Development, 2005).

2. Identify program elements to monitor

In this step you and/or the team decide what is worth monitoring. To decide which components of the program to oversee, ask yourself who will use the information and how, what resources are available, and whether the data can be collected in a technically sound and ethical manner. Monitoring, also called process evaluation, is an ongoing effort that tracks variables such as funding received, products and services delivered, payments made, other resources contributed to and expended by the program, program activities, and adherence to timelines. Monitoring during program implementation will let you know whether the program is being implemented as planned and how well the program is reaching your target audience.
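The monitoring variables listed above lend themselves to a simple tracking structure that compares planned against actual levels and flags indicators that are behind plan, which is the trigger for mid-course correction. A loose sketch follows; the indicator names, targets, and the 80% flag threshold are hypothetical, not taken from any CDC guidance:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    target: float          # planned level for the reporting period
    actual: float = 0.0    # observed level so far

    @property
    def pct_of_target(self) -> float:
        return 100.0 * self.actual / self.target if self.target else 0.0

@dataclass
class MonitoringLog:
    indicators: list = field(default_factory=list)

    def flag_shortfalls(self, threshold: float = 80.0) -> list:
        """Return indicators running below the threshold percentage of target,
        i.e., candidates for mid-course correction."""
        return [i for i in self.indicators if i.pct_of_target < threshold]

# Hypothetical process-evaluation indicators for one reporting quarter
log = MonitoringLog([
    Indicator("radio ads aired", target=120, actual=118),
    Indicator("booklets distributed", target=5000, actual=3200),
    Indicator("stakeholder meetings held", target=6, actual=6),
])
for ind in log.flag_shortfalls():
    print(f"Behind plan: {ind.name} at {ind.pct_of_target:.0f}% of target")
```

However it is kept, whether in software or on paper, the point is the same: monitoring data should make shortfalls visible while there is still time to act on them.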
If staff and representative participants see problems, you are able to make mid-course program corrections (CDC, 1999, 2008).

EX: A needs assessment was conducted using focus groups of general-population males and perpetrators. It identified the need for a prevention focus targeting both violent and potentially violent men. The messages would need to avoid an accusatory or blaming tone, because that would cause the target audiences to reject the information. Process evaluation would be implemented to monitor the campaign's reach, the messages' effectiveness, the audiences' awareness of the Men's Domestic Violence Helpline, and changes in attitudes toward domestic violence (Turning Point Social Marketing Collaborative et al., 2005).

3. Select the key evaluation questions

Basic evaluation questions, which should be adapted to your program content, include:

- What will be evaluated? (i.e., What is the program and in what context does it exist?)
- Was fidelity to the intervention plan maintained?
- Were exposure levels adequate to make a measurable difference?
- What aspects of the program will be considered when judging performance?
- What standards (type or level of performance) must be reached for the program to be considered successful?
- What evidence will be used to indicate how the program has performed?
- How will the lessons learned from the inquiry be used to improve public health effectiveness? (CDC, 1999, 2008)

EX: The evaluation measured the following: (1) general awareness of, attitudes towards, and professed behaviors relating to domestic violence; (2) awareness of how to get help, such as knowledge about available support services and where to telephone for help; (3) inclination to advise others to telephone the Helpline; and (4) advertising reach and impact, message take-away, attitudes toward the campaign, calls to the Helpline, and acceptance of referrals to counseling (Turning Point Social Marketing Collaborative et al., 2005).

4. Determine how the information will be gathered

In this step, you and/or the team must decide how to gather the information. Decide which information sources and data collection methods will be used, and develop the right research design for the situation at hand. Although there are many options, typical choices include: (1) experimental designs (random assignment creates intervention and control groups, the intervention is administered to only one group, and the groups are then compared on some measure of interest to see if the intervention had an effect); (2) quasi-experimental designs (same as experimental, but not necessarily involving random assignment of participants to groups); (3) surveys (a quick cross-sectional snapshot of an individual or a group of people on some measure, collected via telephone, Internet, face-to-face, etc.); and (4) case study designs (an individual or a situation is investigated in depth and considered substantially unique). The choice of design will determine what will count as evidence, how that evidence will be gathered and processed, and what kinds of claims can be made on the basis of the evidence (CDC, 1999, 2008; Yin, 2003).
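The experimental and quasi-experimental designs above ultimately come down to comparing groups on a measure of interest. As a minimal sketch of that comparison, here is a standard two-proportion z-test; the group sizes and smoking counts below are invented for illustration and are not data from this document:

```python
import math
from statistics import NormalDist

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for a difference between two independent proportions,
    e.g., control vs. intervention group on a yes/no outcome."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                    # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value
    return z, p_value

# Invented example: 90/400 smokers in the control group vs. 60/400 in the
# intervention group after the campaign.
z, p = two_proportion_z_test(90, 400, 60, 400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value here says only that the groups differ; whether the intervention caused the difference depends on the design, which is why the distinction between experimental and quasi-experimental designs matters for what claims the evidence can support.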
EX: In the first seven months of the campaign, a three-wave statewide random telephone survey was conducted. In each wave, approximately 400 males in the target age range who were in a heterosexual relationship were interviewed. The three surveys took place (1) prior to the campaign, to serve as a baseline; (2) four weeks into the campaign, to assess initial impact, including advertising reach, so that any deficiencies could be detected and addressed; and (3) seven months into the campaign, to identify any significant changes in awareness of sources of assistance, particularly the Men's Domestic Violence Helpline, as well as any early changes in beliefs and attitudes (Turning Point Social Marketing Collaborative et al., 2005).

5. Develop a data analysis and reporting plan

During this step, you and/or the team will determine how the data will be analyzed and how the results will be summarized, interpreted, disseminated, and used to improve program implementation (CDC, 1999, 2008).

EX: Standard research techniques were used to analyze the data and develop a report on the findings. The report was disseminated to the program managers as well as to all partners/stakeholders. Feedback was collected from stakeholders and, as appropriate, used to modify the strategies, messages, and interventions. For example, findings from evaluating the first two sets of commercials were used to identify the timing of a third set of ads and their messages.
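A data analysis plan for a wave survey like this one would typically specify in advance how each wave's estimate will be summarized, for instance with a confidence interval around each proportion. A small sketch using the normal approximation follows; the respondent counts below are illustrative, not the campaign's actual data:

```python
import math
from statistics import NormalDist

def proportion_ci(successes: int, n: int, conf: float = 0.95):
    """Normal-approximation confidence interval for a survey proportion."""
    p = successes / n
    z = NormalDist().inv_cdf(0.5 + conf / 2)          # e.g., 1.96 for 95%
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Illustrative wave data: helpline awareness among ~400 respondents per wave
waves = {"Wave 2": (212, 400), "Wave 3": (230, 400)}  # (aware, interviewed)
for label, (aware, n) in waves.items():
    lo, hi = proportion_ci(aware, n)
    print(f"{label}: {aware/n:.0%} aware (95% CI {lo:.0%} to {hi:.0%})")
```

With about 400 respondents per wave, each estimate carries a margin of error of roughly five percentage points, which is worth stating in the plan so that stakeholders know how large a wave-to-wave change must be before it is treated as real.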
The evaluation results also were used in developing Phase 2 of the campaign (Turning Point Social Marketing Collaborative et al., 2005).

6. Ensure use and share lessons learned

Effective evaluation requires time, effort, and resources. Given these investments, it is critical that evaluation findings be disseminated appropriately and used to inform decision making and action. Once again, key stakeholders can provide critical information about the form, function, and distribution of evaluation findings to maximize their use (CDC, 1999, 2008).

EX: Awareness of the Men's Domestic Violence Helpline increased significantly, from none before the campaign to 53% in Wave 2. The research also showed that a number of positive belief and attitude effects began to emerge: by Wave 2, 21% of respondents exposed to the campaign stated that the campaign had changed the way they thought about domestic violence, and 58% of all respondents agreed that domestic violence affects the whole family rather than just the children of the female victim. These results and their implications provided guidance for revising future activities. Phase 2 utilized lessons learned from the first phase and was designed to establish additional distribution channels for counseling services, such as Employee Assistance Programs, and to reach rural/remote areas (Turning Point Social Marketing Collaborative et al., 2005).

Bottom Line: Why should I conduct an evaluation?

Experts stress that evaluation can:

1. Improve program design and implementation. It is important to periodically assess and adapt your activities to ensure they are as effective as they can be. Evaluation can help you identify areas for improvement and ultimately help you realize your goals more efficiently (Hornik, 2002; Noar, 2006).

2. Demonstrate program impact. Evaluation enables you to demonstrate your program's success or progress.
The information you collect allows you to better communicate your program's impact to others, which is critical for public relations, staff morale, and attracting and retaining support from current and potential funders (Hornik & Yanovitzky, 2003).

For additional information and resources, contact Dr. Stephanie Sargent Weaver with the CDC's Healthy Communities Program.

References

Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. Morbidity & Mortality Weekly Report, 48.

Centers for Disease Control and Prevention. (2008). Introduction to process evaluation in tobacco use prevention and control. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health.

Hausman, A. J., & Becker, J. (2000). Using participatory research to plan evaluation in violence prevention. Health Promotion Practice, 1(4).
Hornik, R. C. (2002). Epilogue: Evaluation design for public health communication programs. In R. C. Hornik (Ed.), Public health communication: Evidence for behavior change. Mahwah, NJ: Lawrence Erlbaum Associates.

Hornik, R. C., & Yanovitzky, I. (2003). Using theory to design evaluations of communication campaigns: The case of the National Youth Anti-Drug Media Campaign. Communication Theory, 13(2).

McDonald et al. (2001). Chapter 1: Engage stakeholders. In Introduction to program evaluation for comprehensive tobacco control. Retrieved February 25, 2009.

Noar, S. M. (2006). A 10-year retrospective of research in health mass media campaigns: Where do we go from here? Journal of Health Communication, 11.

Norland, E. (2004, September). From education theory to conservation practice. Presented at the Annual Meeting of the International Association for Fish & Wildlife Agencies, Atlantic City, New Jersey.

Pancer, S. M., & Westhues, A. (1989). A developmental stage approach to program planning and evaluation. Evaluation Review, 13.

Patton, M. Q. (1987). Qualitative research evaluation methods. Thousand Oaks, CA: Sage.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach. Thousand Oaks, CA: Sage.

Schwandt, T. A. (2009). Toward a practical theory of evidence for evaluation. In S. I. Donaldson, C. A. Christie, & M. M. Mark (Eds.), What counts as credible evidence in applied research and evaluation practice? Thousand Oaks, CA: Sage.

Spiel, C. (2001). Program evaluation. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences. Oxford: Elsevier Science.

Turning Point Social Marketing Collaborative, Centers for Disease Control and Prevention, & Academy for Educational Development. (2005). CDCynergy: Social marketing edition, version 2.0 [CD-ROM]. Atlanta, GA: CDC, Office of Communication.

Upshur, R. E. G. (2002).
If not evidence, then what? Or does medicine really need an evidence base? Journal of Evaluation in Clinical Practice, 8(2).

Yin, R. (2003). Case study research: Design and methods. Thousand Oaks, CA: Sage.
Appendix: Evaluation Plan Worksheet

Title:
Date:
Prepared by:

Step 1: Identify and engage stakeholders
- Who can we identify as stakeholders?
- How do we engage stakeholders?
Product: List of stakeholders.

Step 2: Identify program elements to monitor
- Which program elements will you monitor?
- What is the justification for monitoring these elements?
Product: List of program elements to monitor.

Step 3: Select the key evaluation questions
- What evaluation questions will you address?
Product: List of evaluation questions.

Step 4: Determine how the information will be gathered
- What information sources and data collection methods will you use for monitoring and evaluation?
- What evaluation research design will be used?
Product: Description of information sources, data collection methods, and research design.
Step 5: Develop a data analysis and reporting plan
- How will the data for each monitoring and evaluation question be coded, summarized, and analyzed?
- How will conclusions be justified?
- How will stakeholders both inside and outside the agency be kept informed about the monitoring and evaluation activities?
- When will the monitoring and evaluation activities be implemented, and how will they be timed in relation to program implementation?
- How will the costs of monitoring and evaluation be presented?
- How will the monitoring and evaluation data be reported?
- What are your monitoring and evaluation timelines and budgets?
Product: A data analysis and reporting plan.

Step 6: Ensure use and share lessons learned
- What feedback was received concerning the intervention/program?
- What is the evaluation implementation summary?
- How can we use this information to revise the intervention/program?
- How will this information impact internal and external communication plans?
- What are the lessons learned?
Product: Final summary report that is circulated among the evaluation workgroup and stakeholders.
Chapter 7: Monitoring and Evaluation The Training Program is relatively new and is still developing. Careful monitoring and evaluation of the Program by the people most closely involved in it will help
SMART Objectives & Logic Modeling HIV Prevention Section Bureau of HIV/AIDS Typing a Question in the Chat Box Type question in here Completing the Webinar Evaluation (opened at end of webinar) SMART Objectives
1 COMMUNITY NEEDS ASSESSMENT Community Community Needs Assessment for Winnebago County Improvement of Nursing Practice Project Susan Tews & LaDonna Zanin University of Wisconsin Oshkosh College of Nursing
Online MPH Program Supplemental Application Handbook Table of Contents Online MPH Supplemental Application Handbook Completing & Submitting the Supplemental Application 2 Eligible Previous Public Health
This document was peer reviewed through the NWI. Supporting Wraparound Implementation: Chapter 5a.3 Choosing a Consultant to Support Your Wraparound Project Patricia Miles, Consultant National Wraparound
Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide August 2005 Suggested Citation: U.S. Department of Health and Human Services. Centers for Disease Control and Prevention.
Breastfeeding in Ontario Evaluating Breastfeeding Programs and Initiatives What is evaluation? Evaluation means different things to different people. Fundamentally, evaluation means getting feedback about
1 What are research, evaluation and audit? Barbara Sen, Maria J. Grant and Hannah Spring I don t do research. I don t have the time. I am too busy with the day to day running of the library. I do evaluations
Center for Prevention and Health ISSUEBrief Services Volume I, Number 5 Reducing the Burden of Smoking on Employee Health and Productivity This issue brief summarizes information presented during the fifth
Evaluating e-learning Programs Using Best Practices Kim E. Dooley, Kathleen Kelsey, James R. Lindner & Larry M. Dooley The process of determining the merit, worth, or value of something, or the product
Detailed Knowledge, Skills and Abilities Tested on the Computerbased Examination for Accreditation in Public Relations (effective January 2016) Objective 1 Researching, Planning, Implementing and Evaluating
Abt Associates Inc. Building Non-Profit Capacity: Findings from the Compassion Capital Fund Evaluation R E S E A R C H June 2010 B R I E F Non-profit organizations (NPOs) play a critical role in the provision
CSL 501 Evaluation and Assessment This course is designed to provide students with an understanding of individual, couple, family, group and environmental/community approaches to assessment and evaluation.
Developing an Effective Evaluation Report Setting the course for effective program evaluation Acknowledgments This workbook was developed by the Centers for Disease Control and Prevention s (CDC s) Office
We would like to hear from you. Please send your comments about this guide to email@example.com. Thank you. 2006 Imagine Canada Copyright for Project Evaluation Guide For Nonprofit Organizations:
Media Campaigns, Social Marketing, and Social Norms Campaigns: What is the Difference? Center for the Application of Prevention Technologies West Resource Team Anne Rogers, M.Ed., CHES CAPT Associate 2
Step 6: Report Your PHDS-PLUS Findings to Stimulate System Change Sharing information from the PHDS-PLUS can help you launch or strengthen partnerships and efforts to improve services, policies, and programs
Articles Recommendations Regarding Interventions to Reduce Tobacco Use and Exposure to Environmental Tobacco Smoke Task Force on Community Preventive Services Medical Subject Headings (MeSH): community
National Commission for Academic Accreditation & Assessment Standards for Quality Assurance and Accreditation of Higher Education Programs November 2009 Standards for Quality Assurance and Accreditation
Evaluating health promotion programs November 1, 2012 V4.12 Five minute exercise in evaluation design In pairs, carefully observe one another Turn your backs to each other Both of you change three things