USING THE EVALUABILITY ASSESSMENT TOOL



INTRODUCTION

The ILO is committed to strengthening the Office-wide application of results-based management (RBM) in order to demonstrate more clearly its results in the world of work. The Office's capacity to manage for results depends to a great extent on having programmes and projects that feature the minimum characteristics needed for outcomes to be measured, together with an understanding of the process by which those outcomes are generated. This can be determined by a set of design-specific aspects that allow projects and programmes to be evaluated, and is therefore defined as evaluability.

OECD/DAC definition of evaluability: "the extent to which an activity or a program can be evaluated in a reliable and credible fashion".[1]

Evaluability assessments are an established means for evaluators to review the coherence and logic of a project or programme, as well as to clarify the availability of data and its adequacy in reflecting progress towards results. The assessment makes a judgement on whether interventions are designed such that, once they are complete, they will be able to demonstrate their effectiveness in achieving established outcomes.

In 2007, EVAL developed a methodology based on best practices among OECD/DAC members, especially those of multilateral development assistance agencies, and with the participation of ILO staff. This document explains the methodology and sets down guidelines for the effective application of the evaluability assessment tool developed by EVAL to assess the evaluability of Decent Work Country Programmes (DWCPs) and technical cooperation projects. It should be used in conjunction with the evaluability concept note, available on request from EVAL.

EVALUABILITY METHODOLOGY

EVAL designed a conceptual framework and an evaluability instrument that can be applied to both ILO Decent Work Country Programmes (DWCPs) and projects.
The evaluability instrument scores individual projects and programmes against the following six criteria:

- Objectives
- Indicators
- Baselines
- Milestones
- Risks and assumptions
- Monitoring and evaluation

[1] OECD/DAC, Glossary of Key Terms in Evaluation and Results Based Management. Available at: http://www.oecd.org/dataoecd/29/21/2754804.pdf

DECEMBER 20, 2011

HOW DOES THE EVALUABILITY TOOL WORK?

Each of the tool's criteria comprises a number of criterion questions that set quality-standard requirements. Each of the six criteria listed above is assigned a specific weight[2] in the overall rating, based on the relative priority given to each criterion. To determine the level of performance, the evaluability tool also assigns a criterion value (performance rating) on the basis of the following score bands:

- Fully Evaluable (Score ≥ 3.5);
- Mostly Evaluable: can improve (2.5 ≤ Score < 3.5);
- Limited Evaluability: needs substantial improvement (1.5 ≤ Score < 2.5); and
- Not Evaluable (Score < 1.5).

[2] The weights defined by the tool are based on the expertise, experience, and best practices of EVAL.
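The four score bands above amount to a simple threshold lookup. As a minimal sketch (Python is used here purely for illustration; the function name is ours, not the tool's):

```python
def performance_rating(score: float) -> str:
    """Map a composite evaluability score to the tool's four rating bands."""
    if score >= 3.5:
        return "Fully Evaluable"
    if score >= 2.5:
        return "Mostly Evaluable: can improve"
    if score >= 1.5:
        return "Limited Evaluability: needs substantial improvement"
    return "Not Evaluable"

print(performance_rating(2.04))  # -> "Limited Evaluability: needs substantial improvement"
```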

APPLICATION OF THE TOOL

Scoring

Applying the tool consists of scoring the defined criteria. The set of questions under each evaluability criterion is given a raw score based on the following assessment:

- 4 (Very good content): Criteria are fully met, with a degree of detail that exceeds the criteria requirements.
- 3 (Good content): Criteria are fully met.
- 2 (Relatively good content): The corresponding criteria are partly met and the content can be further improved.
- 1 (Poor content): The criterion is insufficiently identified.
- 0 (No content): The criterion assessed is not identified at all.

The scoring process requires careful examination of the overall structure and components of the programme/project. The exercise can become complex when the information sought is not well organized in the programme/project document being assessed. It should also be noted that the quality of the assessment is strongly correlated with the expertise and experience of the user. Although the tool sets standards to limit subjectivity, it must be acknowledged that the user's level of expertise and experience can influence the outcome of the evaluability exercise. To mitigate this, the tool requires that a justification be provided for the raw score assigned to each criterion question in the comments section of the tool. Additionally, triangulation of results is strongly recommended in order to address potential subjective influences on the overall assessment.

The weighted score section of the tool simply multiplies the raw score by the weight for each criterion. The sum of the weighted scores gives the composite score.
The composite score, which is linked to the criterion value (performance rating), indicates whether a decent work country programme and its components (technical cooperation projects, technical advice, and advocacy) are defined such that, once they are complete, they can be evaluated and demonstrate their effectiveness in achieving the established goals and outcome objectives.
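The weighted-score arithmetic can be sketched as follows (a minimal illustration in Python; the weights and raw scores are those of the worked example that follows, and the variable names are ours):

```python
# Sketch of the composite-score calculation: each criterion's raw score
# (0-4) is multiplied by its weight, and the products are summed.

WEIGHTS = {
    "Objectives": 0.25,
    "Indicators": 0.25,
    "Baselines": 0.20,
    "Milestones": 0.10,
    "Risks and assumptions": 0.15,
    "M&E plans": 0.05,
}

raw_scores = {
    "Objectives": 1.75,
    "Indicators": 3.00,
    "Baselines": 1.00,
    "Milestones": 1.00,
    "Risks and assumptions": 3.00,
    "M&E plans": 2.00,
}

composite = sum(raw_scores[c] * w for c, w in WEIGHTS.items())
print(round(composite, 2))  # 2.04, i.e. the 1.5 <= score < 2.5 band (Limited Evaluability)
```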

Example: The assessment table below shows a project with a composite weighted score of 2.04. Since this score is greater than or equal to 1.5 but less than 2.5, the project is given the performance rating "Limited Evaluability: needs substantial improvement".

Criterion                Raw score   Weight   Weighted score
Objectives               1.75        0.25     0.44
Indicators               3.00        0.25     0.75
Baselines                1.00        0.20     0.20
Milestones               1.00        0.10     0.10
Risks and assumptions    3.00        0.15     0.45
M&E plans                2.00        0.05     0.10
Composite score          1.96        1.00     2.04
Rating: Limited Evaluability: needs substantial improvement

ELEMENTS TO BE TAKEN INTO CONSIDERATION FOR EACH CRITERION

This section explains the various criteria the assessment process takes into consideration. It should be noted that, owing to the diverse nature of programme and project document contents, the criterion questions listed highlight essential information the document must contain in order to determine the level of performance. Each of the six criteria is associated with a set of questions and with elements related to those questions. The assessment process verifies the presence or absence of these elements in the programme/project documents and determines the level of performance by allocating a raw score.

Criterion 1: Objectives/Outcomes
Clarity of the definition of objectives, including outcomes that can be understood as a major focus of managing for results.

Criterion questions:
1. Are the long-term ILO priorities and outcomes clearly identified, and are the proposals and actions towards achieving those outcomes through the chosen strategy clearly defined?

Elements related to the criterion question:
- Recognizes and addresses tripartism, social dialogue, and international standards.
- Contributes towards achieving these priorities.
- Identifies ILO capacity to carry out programme objectives.

2. Have the areas of agreement and disagreement with the constituents' priorities and strategy been clearly identified?
3. Is the programme consistent with the objectives of international development frameworks such as poverty reduction strategies (PRS), the United Nations Development Assistance Framework (UNDAF), national MDG strategies and other integrated development plans?
4. Have partnerships been established with national and international actors and institutions?

Elements related to the criterion questions:
- Evidence of consultation with constituents in the process of establishing country programme priorities and outcomes.
- Description of areas of agreement and disagreement on priorities and outcomes among constituents.
- Clear alignment with national development frameworks, UN country programmes, the UNDAF, the MDGs or PRS, identifying areas in which the ILO has an advantage.
- Provision of means to collaborate with national and international actors in engaging ILO constituents.

Criterion 2: Indicators
The selection of SMART indicators, quantitative or qualitative, that include comparison points of level, quality and grade. Outcome indicators effectively facilitate the observation of change, while output indicators measure whether the right outputs are produced.

Criterion questions:
1. Are indicators specific?
2. Are indicators measurable?
3. Are indicators attainable?
4. Are indicators relevant?
5. Are indicators time-bound?
6. Do indicators have a means of verification?

Elements related to the criterion questions:
- Clear definition of what is being measured.
- Indicators measure the result they are intended to measure; data is disaggregated where appropriate.
- Ability to count or otherwise quantify data accurately.
- Availability of adequate mechanisms to document verifiable changes.
- The indicator's target is feasible with the available resources within a reasonable timescale, and is within the project's control and influence.
- There is an obvious relationship between the indicator and the objective and goals it seeks to measure.
- Data can be collected frequently enough to inform progress and influence decisions.
- Data sources are known.
- Data is available at reasonable cost and effort.

Criterion 3: Baselines
The existence of sufficient baseline data to establish a starting point for comparisons and for future measurements of outputs and outcomes.

Criterion questions:
1. Are baselines explicitly stated for each indicator, or implicit in the stated objectives?
2. Are baselines specific to the programme/project?
3. Are baselines unambiguous, and do they clearly describe the situation prior to the intervention?
4. Will baselines permit results to be compared and measured?

Elements related to the criterion questions:
- Data is available for performance to be tracked relative to the baseline.
- Baselines meet the needs and interests of key stakeholders.
- The quality of the data (primary/secondary) provides the rationale for the intervention.
- Baselines provide an adequate basis for judging development results.
- Baselines measure the degree and quality of change during implementation.

Criterion 4: Milestones
A set of time-bound milestones that provide a clear sense of the intended path towards achieving established outputs and outcomes.

Criterion questions:
1. Do milestones provide a clear sense of the time frame for the achievement of results?
2. Do milestones help identify the path towards outputs or outcomes?
3. Do milestones provide a clear sense of progress towards the development goal?

Elements related to the criterion questions:
- Milestones indicate the expected time frame for deliverables.
- Milestones provide the means to validate that the project is progressing as planned.
- Milestones indicate completion of a set of deliverables.

Criterion 5: Risks and assumptions
Assessment of the factors, namely risks and assumptions, likely to affect the achievement of an intervention's objectives, and of the related contingency measures.

Criterion questions:
1. Have the principal constraints to achieving outcomes been identified?

Elements related to the criterion question:
- The quality of the analysis identifying the assumptions and risks.
- The conditions necessary for the execution of the programme and its projects, and for the achievement of the objectives, are identified.

2. Have the risks associated with each strategy option and with achieving project outcomes been identified?
3. Are risk mitigation measures clearly defined, and are they supported by theory, logic, empirical evidence and/or past ILO experience?

Elements related to the criterion questions:
- The presence or absence of a risk evaluation, meaning the quantification and gradation of the risks.
- For each strategy/outcome, the fundamental risks that pose a threat to overall programme success are articulated.
- The adoption or not of risk mitigation or incentive measures, including the actions required to carry them out.

Criterion 6: Monitoring and evaluation
An M&E system to identify problems during project and programme implementation and to facilitate the measurement of progress.

Criterion questions:
1. Is the results framework clearly defined (complete with objectives, indicators, baselines and targets), including the actions to be undertaken to achieve appropriate monitoring and evaluation?
2. Has a progress monitoring system been defined for the objectives and strategy, including the actions to be undertaken to record progress?
3. Has a risk monitoring system been defined, including the actions to be undertaken to achieve this?

Elements related to the criterion questions:
- A logical framework complete with all key elements.
- A data gathering system to generate information on indicators has been defined.
- Resources have been identified and committed to ensure that the predefined data will be collected and analysed.
- Sources of information are specified for all indicators.
- Social partners and beneficiaries are expected to participate in monitoring and evaluation.
- Follow-up actions for mitigating the impact of the risks and for monitoring the validity of the assumptions and risks are identified.