PERFORMANCE MANAGEMENT AND EVALUATION: WHAT'S THE DIFFERENCE?
Karen E. Walker, Ph.D., and Kristin Anderson Moore, Ph.D.
Publication # | January 2011
Information for program providers and funders who seek to understand what performance management is, how it differs from evaluation, and why both are critical.

OVERVIEW

In previous research briefs, Child Trends has provided overviews of various forms of program evaluation: experimental, quasi-experimental, and process.[1] This brief provides information on performance management, the ongoing process of collecting and analyzing information to monitor program performance, and on its relationship to other forms of evaluation.

BACKGROUND

Funders increasingly require managers of social programs to provide information about their programs' operations, including services provided, participation levels, outcomes, and effectiveness. As a result, organizations need the capacity to collect significant amounts of information.

How managers use information once it has been collected varies across organizations. For some managers, the usefulness of information ends once they have met their funder's requirements. Such managers may be collecting performance measures, but they are not engaging in performance management as we describe it below. Other managers regard data collection and analysis as ongoing parts of their jobs that enable them to assess aspects of their programs and modify activities as needed.

Recently, the federal government has emphasized tying funding levels to evidence-based programs, a strategy that is likely to benefit programs in which managers use varied types of data effectively to monitor and improve their programs. It has therefore become increasingly essential to use information for program improvement, an undertaking often called performance management. In addition, outcomes evaluation is increasingly expected. However, many nonprofit programs, unaccustomed to extensive data collection and analysis, are unsure what data to collect, how to use their data, and how performance management differs from evaluation.

WHAT IS PERFORMANCE MANAGEMENT?

The U.S. Office of Personnel Management defines performance management as "the systematic process by which an agency involves its employees, as individuals and members of a group, in improving organizational effectiveness in the accomplishment of agency mission and goals."[2] The process is used for program or organizational improvement and is supported by program managers who have the authority to modify program components. To carry it out, managers and other staff members collect information about the frequency, type, and (ideally) quality of services, along with information on participants' satisfaction (or dissatisfaction) with and use of services and on participants' outcomes. Managers then use the information to develop a better understanding of their programs' strengths and challenges, which helps in identifying solutions to improve program operations.
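To make the idea of ongoing data collection concrete, the Python sketch below shows one minimal way a program might structure participant records and roll them up into the kinds of figures a manager reviews. The field names, rating scale, and sample values are illustrative assumptions, not elements of the brief.

```python
# A minimal sketch (not from the brief) of record-keeping that supports
# performance management: participant characteristics, services received,
# satisfaction, and outcomes, summarized for program managers.
from dataclasses import dataclass
from statistics import mean

@dataclass
class ParticipantRecord:
    participant_id: str
    sessions_attended: int
    sessions_offered: int
    satisfaction: int          # e.g., 1 (low) to 5 (high) survey rating
    baseline_score: float      # outcome measure at enrollment
    latest_score: float        # most recent outcome measure

def summarize(records):
    """Return simple program-level figures a manager might review monthly."""
    attendance_rate = mean(r.sessions_attended / r.sessions_offered for r in records)
    avg_satisfaction = mean(r.satisfaction for r in records)
    avg_change = mean(r.latest_score - r.baseline_score for r in records)
    return {
        "attendance_rate": round(attendance_rate, 2),
        "avg_satisfaction": round(avg_satisfaction, 1),
        "avg_outcome_change": round(avg_change, 1),
    }

records = [
    ParticipantRecord("A01", 18, 20, 4, 62.0, 70.0),
    ParticipantRecord("A02", 9, 20, 3, 58.0, 59.0),
]
print(summarize(records))
```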
WHAT IS EVALUATION?

In this brief, the term evaluation refers to one of several types of assessments of social programs. For example, implementation evaluations are used to assess a program's operations, and various types of outcomes evaluations are used to assess the effect of a program on participants and the program's cost-effectiveness.

HOW IS PERFORMANCE MANAGEMENT SIMILAR TO EVALUATION?

Performance management and program evaluation share key features. Both rely on systematically collected and analyzed information to examine how well programs deliver services, attract the desired participants, and improve outcomes. Both use a mix of quantitative and qualitative information. Quantitative information is data displayed in numbers that are analyzed to generate figures such as averages, medians, and increases and decreases. Qualitative data include information obtained through interviews, observations, and documents that describe how events unfold and the meanings that people give to them. In both performance management and program evaluation, qualitative and quantitative data are used to examine relationships among people, program services, and outcomes.

Information collected in performance management and evaluation is similar and includes:
- Participant characteristics
- Services and activities provided
- Attendance and retention

In general, performance management is most similar to formative evaluation, a particular type of evaluation that is intended specifically to provide information to programs about operations (such as whether the program meets high standards and whether targeted groups enroll). Although this brief distinguishes between performance management and evaluation in general, it focuses predominantly on the distinctions between performance management and formative evaluation, sometimes called implementation evaluation.
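As a small, hypothetical illustration of the quantitative figures both approaches rely on, the sketch below computes an average, a median, and a period-over-period change from attendance data. The numbers and period labels are invented for the example.

```python
# Hypothetical attendance rates for two program quarters, used only to show
# the kinds of figures (averages, medians, increases/decreases) that both
# performance management and evaluation draw on.
from statistics import mean, median

fall_rates = [0.82, 0.75, 0.68, 0.90, 0.71]
winter_rates = [0.77, 0.70, 0.64, 0.85, 0.69]

print(f"Fall:   mean {mean(fall_rates):.2f}, median {median(fall_rates):.2f}")
print(f"Winter: mean {mean(winter_rates):.2f}, median {median(winter_rates):.2f}")

change = mean(winter_rates) - mean(fall_rates)
direction = "increase" if change > 0 else "decrease"
print(f"Change in average attendance: {change:+.2f} ({direction})")
```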
HOW IS PERFORMANCE MANAGEMENT DIFFERENT FROM EVALUATION?

Despite their similarities, performance management and evaluation differ from each other with respect to the purposes of collecting information, the timing of data collection, the people primarily responsible for the investigation, and how benchmarks are derived and used. The comparison below summarizes these differences.

Why is program information collected?
- Performance management: To ensure that the program is operating as intended; to plan and guide improvements if the program is not operating as intended or not producing desired outcomes.
- Evaluation: To understand current program performance and provide recommendations for program improvement; to describe program operations; to assess program effectiveness or impact; to assess the costs and benefits of the program; to explain findings; to understand the processes through which a program operates; to assess fidelity of implementation.

Who is the intended audience?
- Performance management: Program managers and staff.
- Evaluation: Funders, policy makers, and external practitioners.

When is information collected?
- Performance management: Throughout the program's life.
- Evaluation: Once or periodically.

Who leads the investigation?
- Performance management: Program staff.
- Evaluation: An external evaluator.

How is progress measured?
- Performance management: Benchmarks are established for key measures, and program progress is measured against them.
- Evaluation: Progress is measured by increases and decreases in desired outcomes.

WHY IS PROGRAM INFORMATION COLLECTED?

In performance management, programs collect information to make sure that the program is operating as intended. If it is not, the information helps managers decide among possible explanations for the program's challenges in meeting expectations. For example, information collected at enrollment about participant characteristics, such as school grades, indicates whether an after-school program that targets academically at-risk youth is recruiting its intended population, but it does not explain why the program is or is not doing so. For that, the manager must collect additional information about the reasons that the targeted population is or is not participating. A close look at how the achievement of youth who initially expressed interest in the program compares with the achievement of those who actually enrolled might provide important clues. Perhaps the program's initial recruitment efforts attracted fewer high-risk youth than desired, or perhaps the program attracts both at-risk and low-risk youth but has trouble maintaining enrollment among the at-risk population. Managers need information about program participants, enrollments, and program use to monitor progress and make informed decisions.

In performance management, managers may collect information on participant outcomes, but such outcome data cannot be used to make a strong case for a program's effectiveness.
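The recruitment question in the example above lends itself to a quick check of the data. The Python sketch below compares the share of academically at-risk youth among those who expressed interest with the share among those who enrolled; the records, field names, and the 2.0 GPA cutoff are hypothetical assumptions, not figures from the brief.

```python
# Hypothetical records for youth who expressed interest in an after-school
# program and for those who actually enrolled. The 2.0 GPA cutoff is an
# assumed definition of "academically at-risk."
AT_RISK_GPA_CUTOFF = 2.0

interested = [{"youth_id": "Y01", "gpa": 1.8}, {"youth_id": "Y02", "gpa": 2.9},
              {"youth_id": "Y03", "gpa": 1.6}, {"youth_id": "Y04", "gpa": 3.4}]
enrolled = [{"youth_id": "Y02", "gpa": 2.9}, {"youth_id": "Y04", "gpa": 3.4}]

def share_at_risk(group):
    """Share of a group whose GPA falls below the assumed risk cutoff."""
    return sum(1 for y in group if y["gpa"] < AT_RISK_GPA_CUTOFF) / len(group)

print(f"At-risk share among interested youth: {share_at_risk(interested):.0%}")
print(f"At-risk share among enrolled youth:   {share_at_risk(enrolled):.0%}")
# A large gap between these shares suggests recruitment reaches at-risk youth
# but enrollment loses them, which is a prompt for follow-up questions.
```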
Instead, this information should be used in conjunction with information about participants and services to assess program operations. For example, if one group of participants is achieving desired outcomes and another is not, a program manager might ask questions such as: "How do these groups differ?" "Is one group already doing better when it begins the program?" "Do different staff members work with each group?" Such questions can help to identify ways to strengthen recruitment efforts or staff training. To make a strong case for a program's effectiveness, evaluation is necessary.

In evaluation, programs collect information for multiple reasons, depending on the type of evaluation. Outcomes evaluations summarize participant outcomes without trying to establish whether the program caused observed changes, whereas impact evaluations try to establish cause and effect through high-quality randomized controlled studies.[3] Various types of implementation evaluations provide information that describes features of effective programs and explains why programs may not reach their intended goals. Formative evaluation, one type of implementation evaluation, may be used to identify operational challenges that need improvement, just as performance management does. However, formative evaluation often has a broader goal: to identify common challenges and opportunities that programs face in order to prepare funders, policy makers, and practitioners for future work.

WHO IS THE INTENDED AUDIENCE?

The audiences for performance management and evaluation differ. The results of performance management are intended for the people within an organization who manage and run the program. In contrast, the results of evaluation are usually intended for external audiences, including funders, policy makers, and practitioners. This difference in intended audience means that information collected solely for program improvement is not typically subject to approval by an Institutional Review Board (IRB). These boards evaluate proposed research to ensure that it meets ethical standards, including respect for persons (for example, by upholding the requirement that participation in research is voluntary); beneficence (protecting research participants from harm and ensuring that the benefits of the research outweigh its risks); and justice (spreading the burden of research across different types of groups rather than selecting those who are vulnerable).[4] IRB review becomes important, however, when an individual's information is collected with the intent of incorporating it into more generalized knowledge that is disseminated to the broader community.

WHEN IS PROGRAM INFORMATION COLLECTED?

As indicated in Figure 1, information for performance management is collected throughout a program's life on an ongoing basis. When a program is first developed and implemented, information such as participant characteristics, participation rates, and participant outcomes provides useful insights into critical questions: whether the program is serving its targeted population, whether participation rates are at the desired levels, and whether participants' outcomes are moving in the desired direction. Knowing the answers to these questions allows program staff to ask follow-up questions, such as "Why aren't our participation rates as high as we would like?" and "What can we do to increase them?"
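One way such follow-up questions surface in practice is through routine monitoring of a few key rates. The sketch below, with invented monthly data and an assumed 75 percent target, flags months when participation falls below the desired level.

```python
# A small sketch, under assumed data and thresholds, of ongoing monitoring:
# tracking monthly participation rates against a desired level and flagging
# months that warrant follow-up questions.
DESIRED_RATE = 0.75  # hypothetical target participation rate

monthly_rates = {"Sep": 0.82, "Oct": 0.78, "Nov": 0.69, "Dec": 0.61}

for month, rate in monthly_rates.items():
    status = "OK" if rate >= DESIRED_RATE else "below target: ask why, and what could raise it"
    print(f"{month}: {rate:.0%} ({status})")
```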
Even mature and stable programs need to monitor their progress. Changes in participant characteristics may result in falling attendance rates, suggesting the need to make corresponding changes in the program. For example, if a program is seeing an increase in the ratio of immigrant to non-immigrant participants, hiring staff who can work effectively with the immigrant population may boost program attendance and retention. Also, changes to public funding may lead managers to make small changes to their program that, over time, add up to large and unintended changes in participants' experiences of the program.

Unlike performance management, evaluation is conducted periodically, and its timing depends on program maturity, the type of evaluation desired, and the organization's need for an outsider's perspective. When programs are young or undergoing significant changes, formative evaluation may be appropriate after program staff members and managers have gone as far as they can in assessing and improving their own performance. Once evaluators are assured that the program is ready for an investigation of its outcomes, impact evaluations may be conducted along with implementation evaluation to explain findings.

Figure 1: Becoming Performance Driven. [The figure depicts a road map: conduct a resource and needs assessment and identify your population (targeting); select an intervention, develop a logic model and implementation benchmarks, and identify indicators; implement the program/approach and conduct ongoing performance management, collecting data on performance and outcome measures; conduct implementation evaluation(s); conduct a quasi-experimental outcomes evaluation once implementation issues are addressed; and conduct a randomized-controlled impact evaluation if appropriate and feasible.]
WHO LEADS THE INVESTIGATION?

Program staff or outside consultants generally conduct the research intended for performance management. To gain a better understanding of how to use information to improve program operations, the organization running the program should collect and analyze the data itself. This approach permits the organization's managers to ask questions and collect information about operational challenges in a timely manner.

Evaluations led by independent researchers whose fees are paid by funders (such as a government agency or a foundation) may not be as useful to program staff as internally funded and led efforts, because the program has relatively little authority to determine the research questions or the methods used to collect and analyze the data. Also, program staff may be anxious about the role of external evaluators, fearing that funders will decide to continue or discontinue support for a program based on evaluation findings. This anxiety may lead to distrust of the evaluation's findings, particularly if some of the findings are negative. Program staff members may focus on defending their program instead of asking what they could do to improve it. Finally, program staff may distrust evaluators' conclusions because the staff does not fully understand how those conclusions were reached.

In contrast to program managers and staff, funders and policy makers usually regard independent evaluators as providing more credible information than program staff or evaluators hired by programs, because there is no obvious conflict of interest. Thus, many evaluations are conducted by independent evaluators.

HOW IS PROGRESS MEASURED?

A fundamental difference between performance management and evaluation involves setting benchmarks to assess progress on indicators. A benchmark establishes a standard against which implementation progress is measured. Typically, benchmarks are expressed as numbers or proportions. For example, a summer literacy program may determine that its participants require six weeks of involvement to prevent summer learning loss. Six weeks, therefore, becomes the benchmark against which the program measures its progress in delivering services.

Often, the evidence that programs use to establish benchmarks is sparse or weak because it is based on limited research and program experience. For example, if six-week summer reading programs have been found to be effective in preventing summer learning loss in some situations, a program manager may use six weeks as a benchmark, even if further research might determine that five weeks would be sufficient. Alternatively, program staff may set a benchmark based on assumptions about what participants need to benefit from the program. Such was the case with Big Brothers/Big Sisters mentoring program affiliates, which set initial program standards prior to a rigorous evaluation in order to communicate expectations to mentors and ensure that they had sufficient time to devote to the relationship.[5] After experimental evaluations were completed, the benchmarks and standards were updated.

Setting benchmarks for performance serves a critical operational purpose: it allows program managers to determine whether activities are meeting standards and to make decisions accordingly.
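As a concrete, hypothetical illustration of that operational use, the sketch below checks a summer literacy program's records against the six-week benchmark described above; the participant data and the 80 percent standard are assumptions made for the example.

```python
# A hedged sketch of the benchmarking idea: a summer literacy program
# (hypothetical data) checks what share of participants reached the six-week
# involvement benchmark, so managers can decide whether delivery is on track.
BENCHMARK_WEEKS = 6
TARGET_SHARE = 0.80  # assumed standard: 80% of participants reach the benchmark

weeks_of_involvement = {"P01": 7, "P02": 6, "P03": 4, "P04": 6, "P05": 3}

met = [pid for pid, weeks in weeks_of_involvement.items() if weeks >= BENCHMARK_WEEKS]
share_met = len(met) / len(weeks_of_involvement)

print(f"{share_met:.0%} of participants reached the {BENCHMARK_WEEKS}-week benchmark")
print("Meets the program standard" if share_met >= TARGET_SHARE
      else "Below the program standard: review recruitment, scheduling, or retention")
```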
Whereas program staff and managers usually are willing to use both experience and previous research to set benchmarks to assess their programs' progress, evaluators hesitate to set a benchmark without sufficient and strong data. An evaluator's work is judged according to standards for data collection and analysis set by other evaluators and researchers. From an evaluator's perspective, carefully drawn conclusions are critical to advancing knowledge, which is developed incrementally over time. Often, the most an evaluator can say is that the program is or is not effective based on a statistically significant minimum detectable effect. There are rarely opportunities to compare one benchmark with another.
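To make the evaluator's stance more tangible, here is a deliberately simplified Python sketch: it judges effectiveness from a statistical comparison of outcomes rather than from a program-set benchmark. The data are invented, and a two-sample t-test stands in for the far more elaborate impact analysis an evaluator would actually run.

```python
# A simplified, hypothetical illustration of the evaluator's approach:
# comparing treatment and comparison group outcomes statistically.
from scipy import stats

treatment_scores = [72, 68, 75, 70, 74, 69, 73]   # participants' outcomes
comparison_scores = [66, 70, 65, 68, 64, 67, 66]  # comparison group's outcomes

t_stat, p_value = stats.ttest_ind(treatment_scores, comparison_scores)
effect = (sum(treatment_scores) / len(treatment_scores)
          - sum(comparison_scores) / len(comparison_scores))

print(f"Estimated effect: {effect:.1f} points, p = {p_value:.3f}")
# The evaluator reports whether the estimated effect is statistically
# significant (and at least as large as the study's assumed minimum
# detectable effect), not whether the program met an operational benchmark.
```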
However, once an evaluation is complete, its findings may provide program operators with information that helps them refine their benchmarks for more effective performance management.

The practical result of these different approaches to setting benchmarks can be tension between program managers, who want evaluators to tell them what the standards should be, and evaluators, who say they do not have that information. If available at all, that type of information tends to be scattered unevenly across the evaluation literature.

CONCLUSION

Both performance management and evaluation can provide very useful information, but they are not interchangeable. Each has its own goals, standards, and criteria for meeting them. Performance management aims to ensure that social programs operate as intended. It requires ongoing, internal data collection and analysis, the flexibility to ask a variety of questions, and the capacity to use experience and the literature to set program standards and benchmarks. Evaluation is intended to provide information to a broad set of stakeholders (funders, other practitioners, and policy makers) to advance knowledge in the field. It requires a clear set of research questions, the most rigorous design possible given programmatic constraints, and careful, time-consuming data analysis.

Both are critical to operating successful social programs, but while performance management is a necessary task for every operating social program, evaluation is not. If a program model has evidence of effectiveness from a rigorous previous evaluation and program standards have been defined (either through the evaluation or by the program developers), then further evaluation may not be necessary. Performance management, however, remains a critical ongoing task to ensure that the program is meeting its standards. And if a program has never been evaluated, ensuring that the program is operating as intended is a good way to prepare for any future evaluation.

ACKNOWLEDGEMENTS

We would like to thank Ingvild Bjornvold for insightful comments that strengthened this brief.

Editor: Harriet J. Scarupa
REFERENCES

1. Bowie, L., & Bronte-Tinkew, J. (2008, January). Process evaluations: A guide for out-of-school time practitioners (Research-to-Results Brief). Washington, DC: Child Trends. Moore, K.A., & Metz, A. (2008, January). Random assignment evaluation studies: A guide for out-of-school time practitioners (Research-to-Results Brief). Washington, DC: Child Trends. Moore, K.A. (2008, January). Quasi-experimental evaluations (Research-to-Results Brief). Washington, DC: Child Trends.
2. U.S. Office of Personnel Management. Performance Management Overview. Downloaded 7/19/10.
3. Allen, T., & Bronte-Tinkew, J. (2008, January). Outcome evaluations: A guide for out-of-school time practitioners (Research-to-Results Brief). Washington, DC: Child Trends. See also Bowie & Bronte-Tinkew (2008); Moore & Metz (2008); Moore (2008).
4. Bronte-Tinkew, J., Allen, T., & Joyner, K. (2008, February). Institutional review boards (IRBs): What are they, and why are they important? (Research-to-Results Brief). Washington, DC: Child Trends.
5. Tierney, J.P., & Grossman, J.B., with Resch, N.L. (1995). Making a difference: An impact study of Big Brothers/Big Sisters. Philadelphia: Public/Private Ventures.
6. Hunter, D.E.K., & Koopmans, M. (2006). Calculating program capacity using the concept of active service slot. Evaluation and Program Planning, 29.

SPONSORED BY: The Atlantic Philanthropies

© 2011 Child Trends. May be reprinted with citation. 4301 Connecticut Ave., NW, Suite 350, Washington, DC 20008.

Child Trends is a nonprofit, nonpartisan research center that studies children at all stages of development. Our mission is to improve outcomes for children by providing research, data, and analysis to the people and institutions whose decisions and actions affect children. For additional information on Child Trends, including publications available to download, visit our Web site at www.childtrends.org. For the latest information on more than 100 key indicators of child and youth well-being, visit the Child Trends DataBank. Child Trends also maintains summaries of more than 500 experimental evaluations of social interventions for children.
