Using Evaluation to Support a Results-Based Management System
The E in M & E
Evaluation
Evaluation is the systematic and objective assessment of an ongoing or completed project, programme or policy, including its design, implementation, and results.
Difference Between Monitoring and Evaluation
Monitoring continuously tracks performance against what was planned by collecting and analyzing data on the indicators established for monitoring and evaluation purposes. Evaluation is a periodic, in-depth analysis of programme performance. It relies on monitoring data and on information generated from other sources.
Characteristics of Monitoring and Evaluation

| Characteristic | Monitoring | Evaluation |
|---|---|---|
| Timing | Continuous | Periodic: at important milestones, at the end of, or a substantial period after, implementation |
| Main purpose | Tracks, analyzes and documents progress | In-depth analysis; compares planned with actual achievement |
| Focus | Inputs, activities, outputs, processes, likely results at outcome level | Outputs in relation to inputs; results in relation to cost; processes used to achieve results; overall relevance; impact; sustainability |
| Typical questions answered | What activities were implemented and what results were achieved | Why and how results were achieved |
| Managerial use | Alerts managers to problems and provides options for corrective actions | Provides managers with strategy and policy options |
| Who conducts | Self-assessment by managers, supervisors, community stakeholders, and funders | Internal and/or external evaluators |
Complementarity of Monitoring and Evaluation
- Sequential complementarity
- Information complementarity
- Interactional complementarity
What is Evaluated?
- Process: key aspects of the implementation of a project, programme or policy
- Results: effects of a project, programme or policy; a describable or measurable change in state derived from a cause-and-effect relationship
Three Types of Results
- Outputs: products and services that result from the completion of activities of an intervention
- Outcomes: the intended or achieved short- and medium-term effects of an intervention's outputs
- Impacts: long-term positive and negative effects on identifiable groups produced by a development intervention
Uses of Evaluation
Main Purposes of Evaluation
- Compare planned to actual achievements
- Determine why intended results were or were not achieved
- Analyze the specific causal contribution of activities to results
- Explore unintended results
- Highlight significant accomplishments
- Distill lessons on what works and what doesn't
- Offer recommendations for improvement
Management Questions Answered by Evaluation
- Descriptive
- Normative or compliance
- Correlational
- Impact or cause-and-effect
- Program logic
- Implementation or process
- Performance
- Appropriate use of policy tools
Pragmatic Uses of Evaluation
- Making decisions on resource allocation
- Rethinking the causes of a problem
- Detecting emerging problems
- Aiding decisions on competing alternatives
- Maintaining support for public sector reform and innovation
- Building consensus on the causes of a problem and how to respond
Summary of Uses of Evaluation
- Political strategy and design: Are we doing the right things?
- Operational and implementation: Are we doing things right?
- Learning: Are there better ways?
Timing of Evaluation
- During situation analysis
- During programme design
- During programme implementation
- At programme completion
Instances that Warrant Evaluation
- Divergence between planned and actual performance
- Contribution of design and implementation to outcomes
- Competition over the allocation of resources
- Conflicting evidence of outcomes