Results Framework and M&E Guidance Note
2013 Results Framework and M&E Guidance Note

This guidance note is intended for internal use by Bank staff. The note will be updated and complemented from time to time.

OPSPQ
April 9, 2013
Contents

Section I - Managing for Results
Section II - Results Frameworks
   A. Project Results Framework Definition and Use
   B. Designing a Results Framework
Section III - Project Development Objectives
Section IV - Performance Indicators
   A. PDO Level Result Indicators and Intermediate Result Indicators
   B. Baselines, Intermediate Targets and Final Target
Section V - Monitoring & Evaluation Arrangements
Annex A. Results Checklist
Annex B. Results and M&E in Fragile and Conflict-Affected Situations
Abbreviations and Acronyms

CAS    Country Assistance Strategy
CDD    Community-driven development
CPS    Country Partnership Strategy
FCS    Fragile and conflict-affected situation
GPS    Global positioning system
IBRD   International Bank for Reconstruction and Development
ICR    Implementation Completion Report
ICT    Information and communication technology
IDA    International Development Association
IEG    Independent Evaluation Group
IPF    Investment project financing
ISR    Implementation Status and Results (Report)
M&E    Monitoring and evaluation
ORAF   Operational Risk Assessment Framework
PAD    Project Appraisal Document
PCN    Project Concept Note
PDO    Project development objective
PEFA   Public Expenditure and Financial Accountability
Section I. Managing for Results

1. The World Bank is committed to improving its development effectiveness and achieving the development outcomes supported by the operations that it finances. In Investment Project Financing (IPF), which mobilizes inputs to support activities and outputs in line with a specific development outcome, development effectiveness is measured in terms of how the use of scarce resources leads to the achievement of specific results. This measurement demonstrates whether specifically designed interventions put in place to improve welfare in a developing environment have succeeded, which in turn helps to maintain the support and commitment of shareholders, borrowers, donors, and other stakeholders.

2. Results-based monitoring and evaluation (M&E) is a management tool used to systematically track progress of project implementation, demonstrate results on the ground, and assess whether changes to the project design are needed to take into account evolving circumstances. Designing the project results framework and using it adequately, along with other management tools, during implementation (for instance, the ORAF risk-assessment tool) is critical. Most of the decisions and proactive measures that can be taken to improve the likelihood of the project achieving the expected results will be derived from observations coming from these tools.

3. The objective of this guidance note is to help task teams better support borrowers (a) in the design of results frameworks and M&E arrangements in the context of an IPF operation and (b) in the adequate use of this tool during project implementation.

Section II. What is a Results Framework?

A. Definition and Use

4. A results framework represents the underlying logic that explains how the development objective of a project is to be achieved. It does so by translating the results chain (see Figure 1) of an intervention into indicators that measure the degree to which inputs are being transformed into specific activities and outputs, and the degree to which a relevant target population is using those outputs as the anticipated outcomes of the project. This approach differs from traditional monitoring approaches, which focus on whether or not a project is being executed as planned, looking at whether agreed activities and milestones have been completed but not providing managers with an understanding of the success or failure of the project. For more information and practical examples on this topic, teams are encouraged to review the Operational Core Curriculum (OCC) e-learning module on results.

Inputs: Resources (financial, human, etc.) mobilized to support the project activities.
Activities: Actions taken or work performed to produce outputs (work contract, staff trained).
Outputs: Products and services provided (road, primary health care services, water connections).
Outcomes: Results following the use of outputs; expected benefits or changes in behavior as a result of the outputs (e.g., reduced travel times, increased use of preventive services, availability of clean drinking water in the village).
Impact: The final result of the outcomes, which most likely will become evident several years after the project activities have been completed. It may change the living conditions of project participants. For example, a lower incidence of diarrhea (outcome) results in a lower infant mortality rate and healthier children (impact).
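To make the results-chain terminology concrete, the sketch below lays out the water-supply example from the definitions above as a simple data structure. It is illustrative only and not part of the official guidance; the Python representation and the specific entries are the editor's assumptions, not a Bank template.

```python
# Illustrative sketch only: a results chain for a hypothetical rural water project,
# using the levels described above (inputs -> activities -> outputs -> outcomes -> impact).
# Names and entries are assumptions, not drawn from an actual operation.

results_chain = {
    "inputs": ["IDA credit", "project staff", "equipment"],
    "activities": ["award works contracts", "train water-utility staff"],
    "outputs": ["new household water connections", "rehabilitated wells"],
    "outcomes": [
        "households use clean drinking water",   # uptake of outputs by the target group
        "reduced incidence of diarrhea",
    ],
    "impact": ["lower infant mortality", "healthier children"],  # long term, beyond project life
}

# Print the chain in order, e.g. for a design review discussion.
for level, entries in results_chain.items():
    print(f"{level.upper():<10} {', '.join(entries)}")
```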
Figure 1: Results Chain. Inputs (resources: money, staff, facilities, equipment) are used for activities (tasks undertaken to produce outputs), which deliver outputs (products and services delivered, e.g., staff trained, infrastructure constructed, legal framework revised), which in turn lead to short-, medium-, and long-term outcomes (modified behavior, conditions, or situation for populations, communities, businesses, or organizations resulting from program outputs, e.g., improved performance of utilities, increased water production capacity, improved access to quality water services, improved health outcomes, increased economic growth). Adapted from the Results Agenda Demystified Workshop.

5. The results framework has three main elements: (a) a statement of the project development objective (PDO); (b) a set of indicators to measure outcomes that are linked to the PDO and a set of intermediate results to track progress toward achieving outcomes; and (c) M&E arrangements specifying clear units of measurement for each indicator, baselines, and annual and final targets for each indicator, as well as the roles and responsibilities for collecting, reporting, and analyzing data on those indicators.

6. Use of results frameworks. A results framework is intended to serve as a management tool for both the borrower and the World Bank. It is designed and used with the borrower and other stakeholders. Used effectively, it can serve as a tool at the different stages of the project cycle.

7. Preparation stage. The results framework serves as a tool for strategic planning and better project design during the preparation stage. It helps the borrower, the Bank, and other development partners set the objectives and the necessary arrangements to measure progress toward the achievement of those objectives. The results framework also helps build consensus and ownership around shared objectives and the arrangements to be used in achieving these objectives.

8. Implementation stage. During implementation, the results framework becomes a management tool that helps the borrower, the Bank, and other relevant stakeholders assess progress toward the development objective and adjust course when necessary. The results framework is a dynamic tool that needs to be updated during project implementation and may be adjusted to reflect changes in the original project circumstances (through a project restructuring). The results framework is of particular relevance for conducting mid-term reviews to assess the overall project performance and take appropriate actions regarding the future of the operation, including, as needed, through significant project restructuring. (For additional guidance on mid-term reviews, refer to the Implementation Support Guidance Note.)

9. Completion. Upon project completion, the results framework helps to evaluate the project performance and is used for feedback purposes. It is the main tool available to identify the project accomplishments, problems faced, and lessons learned. Annex A provides a checklist of relevant considerations regarding the results framework at different stages of the project cycle.

10. Correlation between objectives and indicators. Different reviews have shown a correlation between well-defined objectives and indicators and achievement of the expected development outcomes. Conversely, many projects with poorly articulated objectives and/or indicators find it hard to explain successes at completion, when the Implementation Completion Report (ICR) is prepared. For this reason it is important to allocate adequate resources and efforts during project preparation to prepare the results framework, and to take proactive measures during implementation to adjust the results framework to take into account any evolving circumstances, if necessary.

B. Designing a Results Framework

11. Borrower role and responsibility. As owners of the results supported by the Bank-financed project, borrowers are responsible for project design and implementation. This includes the design, monitoring, and updating of the results framework, and the establishment of adequate M&E arrangements. Borrowers need to start thinking, planning, and managing toward results early in the preparation phase of an operation.

12. Task team's role and responsibilities. The role of the task team is to support the borrower through this process, assess the adequacy of the proposed arrangements, identify possible constraints (or risks) in terms of the capacity for results monitoring and evaluation, and support the borrower in the identification of measures to address those constraints.

13. The process. Designing the results framework is a dynamic and iterative process that starts during project identification and is completed by appraisal. At least four items of information are required to develop a fully fledged results framework: (a) a shared understanding of the problem the project is trying to solve; (b) a series of hypotheses of how the project inputs will lead to the desired outcomes; (c) knowledge of the type of evidence required to assess progress toward results; and (d) an understanding of the existing data sources and instruments available in the country. Although at the concept stage there will not be enough information for a fully fledged results framework, some key definitions start giving shape to the framework when the project concept is discussed, such as the proposed project development objectives, key performance indicators, possible risks, and the project's potential contribution to Country Assistance Strategy (CAS) / Country Partnership Strategy (CPS) and country outcomes.
14. The task team should introduce results into the discussion agenda with the borrower as early as possible; this enables an early start to developing the intervention logic/results chain that will be the basis for formulating the results framework and monitoring and evaluation. Full ownership and buy-in are key to the implementation and use of the results framework.

15. When designing a results framework and determining the intervention logic, the borrower identifies the development challenges the intervention is trying to solve (the problem statement) and the key development hypothesis (what the desired objectives are, the outcomes that are critical for achieving the objectives, the implementation process to most effectively attain the outcomes, and the required arrangements to measure progress toward those objectives). Outputs and outcomes represent the links in the results chain that bridge the gap between the current challenge (prior to the project intervention) and the desired results (after the project). Starting with the highest-level goal, task teams can work backward to establish a hierarchy of cause-and-effect linkages between activities and expected outcomes. Expected outcomes mirror the problem statement, and PDOs are formulated around the expected outcomes for the target group.

16. In doing so, it is important to clearly define the project boundaries, particularly to (a) be realistic in terms of how much the project can really achieve through the proposed activities, with the resources available and within the project's time frame; and (b) take into account the possible constraints or risks that could jeopardize the achievement of expected results, so that those risks can be better managed through the design of the project and during implementation.

17. Indicators with baselines and annual targets to measure the key project outcomes and results are identified once the intervention logic has been established from inputs to activities/outputs to outcomes. In addition, the arrangements for monitoring results and evaluating project impacts are defined by taking into account the results and recommendations that may arise from the capacity assessments of relevant government institutions, and by using and building on existing data collection mechanisms and systems to the extent possible. By the appraisal stage, the results chain is finalized, with project objectives clearly defined, inputs and activities clearly spelled out, outputs clearly linked to specific activities, and outcomes made clear and reflected in the project objectives.

18. Task teams are encouraged to refer to the results chains developed by the different sectors and anchors and to use similar logic and/or indicators when advising the borrower on the design of a project's results framework.

19. From the task team's perspective, the above-mentioned process is reflected in the project preparation documents when addressing results monitoring and evaluation at concept (PCN) and appraisal (PAD). In particular, task teams concentrate their efforts around three specific elements: (a) the definition of results captured in the PDO and relevant PDO-level results indicators; (b) assessing and understanding the client's capacity to manage and monitor the results supported by the project; and (c) identifying the support to be provided through the project to strengthen the client's capacity in results-based M&E, and its cost.

Table 1 provides specific guidance on what needs to be addressed by the task team in the project concept and appraisal documents.
Table 1. Addressing Results/M&E in the PCN and PAD

PCN:
- Brief description of the problem statement and what success would look like as a result of the project (in the introduction section);
- A proposed PDO statement covering key aspects;
- Proposed PDO-level results indicators;
- Scope of the challenge or the result; and
- Institutional responsibilities for results and M&E and any critical capacity gaps.

PAD:
- Final PDO statement (consistent throughout the document) covering key aspects;
- Final PDO-level indicators that measure progress toward the aspects covered in the PDO;
- Brief assessment of the implementing agency(ies)' capacity for results monitoring and evaluation;
- Identification of support to be provided by the project to fill capacity gaps; and
- A comprehensive results framework inclusive of the PDO, PDO-level indicators, and intermediate results indicators.

20. The PCN and PAD succinctly discuss the key elements of the results framework and M&E arrangements, while the Operational Manual is the tool used to provide all the relevant details for implementation.

21. Taking into account risks to results. The building blocks of an IPF operation are the results that the operation is trying to attain. When defining those results, it is critical to identify the risks or constraints that the project may confront that could jeopardize the realization of such results. Carrying out a project risk assessment helps guide the process of identifying the key risks to results, discussing management measures to address those risks, and, if necessary, adjusting the project's design (including the scope of the development objectives) to improve the likelihood of the project achieving the intended results. (For additional guidance, refer to the Operational Risk Assessment Framework Guidance Note.)

Section III. Project Development Objectives

22. The PDO is the outcome that a project is expected to achieve for its primary target group, given its scope, duration, and resources. A strong and clear PDO statement is an essential aspect of good project design. It is the main element used at project closing to evaluate whether or not the project has been successful in contributing to change in the country, target area, or target group.

23. Formulating the PDO statement. The PDO statement clearly identifies who the primary target group is, the specific and measurable benefits that the target group will receive from the project, and the expected change in behavior, situation, or performance of the primary beneficiaries (what will the target group be doing better or differently as a consequence of the project?).
24. Taking this into account, the PDO statement should reflect outcomes that are achievable over the life of the project and for which the project can reasonably be held accountable. The PDO statement should be aligned with the project interventions and activities, without being a restatement of the project's components or outputs. In this regard, it is good practice to avoid including several levels of the results chain in the statement (e.g., linked by "through," "by," "in order to," etc.).

25. When formulating the PDO statement, it is also important that there be clear alignment between the project and the higher-order strategic, country program, or sector outcomes to which the operation contributes, and that it relate to higher-level (CAS/CPS) outcomes that support the country's strategy. The challenge is to find the right balance by avoiding expressing the PDO either too high (i.e., at the CAS level) or too low (i.e., at the activity level), and to check that it is well articulated (e.g., not mixing goals, objectives, and strategies). In practice, this process implies a trade-off. The closer the PDO is set to a final higher-level result or impact, the less likely it is that a strong causal link can be established between the inputs of the project and that PDO. The shorter the chain from inputs to project outcome, the more easily the causal chain is established but, most likely, the further this outcome will be from a final higher-level result.

26. The PDO can be focused on a single issue (e.g., increase access to primary education for disadvantaged students in all public schools in 10 districts) or on more than one issue (e.g., increase enrollment of primary students and improve teaching performance in all schools in 10 districts). Each of these aspects of the PDO is an objective and should be supported by the project activities, following the same principles discussed before.

27. In sum, keep the following in mind when formulating the PDO: (a) keep it short and simple, and be concise; (b) be realistic, and focus on outcomes for which the project can reasonably be held accountable; (c) include a clearly identified target group; and (d) do not include higher-level objectives in the PDO.

Section IV. Performance Indicators

A. PDO-level Result Indicators and Intermediate-level Result Indicators

28. When designing a results framework, there are two types of indicators to consider: (a) PDO-level results indicators (outcome indicators), which are intended to measure the uptake, adoption, and use of outputs by the target group within the project period; and (b) intermediate-level results indicators, which serve to track progress toward achieving the development objectives until the final project outcomes are attained (these may also measure progress in project outputs).

29. Each aspect of the PDO should be measured by at least one type of indicator. Include Core Sector Indicators (see discussion below) when applicable, either at the PDO level or at the intermediate level.

30. Use of SMART indicators. It is important that indicators used in the results framework be Specific, Measurable, Attributable, Realistic, and Timebound if they are to accurately measure results achieved through the project:
- Specific means that the indicator measures only the design element (output or outcome) that is intended for measurement, and not any other elements in the project. For example, if the target output is to construct 20 wells, the specific indicator to be measured will be the number of wells constructed.
- Measurable means that there are practical ways of measuring the indicator, being clear and unambiguous in terms of what is being measured (e.g., avoid words like "successful" unless it is possible to define exactly what that would mean in the project context). For quantitative proportions or percentages, this means that both the numerator and the denominator must be clearly defined. For quantitative whole numbers and qualitative data, it means defining each term within the indicator such that there can be no misunderstanding as to the meaning of that indicator. This is critical for ensuring that the data collected by different people at different times are consistent and comparable.
- Attributable means that the indicator is a valid measure of the targeted developmental issue and that the project can be credited for the changes in that developmental issue.
- Realistic means that the selected indicators must be realistic in terms of the ability to collect the data with the available resources. Some indicators present major problems for data collection owing to the cost or skills required (e.g., large-scale sample surveys). Being realistic in planning and identifying collectable information ensures that it will, in fact, be collected. This is an important factor to consider and may lead to compromises on other criteria.
- Timebound has several connotations. First, indicators must be time-bound in terms of the time spent in data collection. Second, indicators must reflect the timing of collection, being cognizant of seasonal differences. Third, the time lag between activities, outputs, and outcomes must also be reflected in the indicators that are chosen.

31. Proxy indicators. In some cases, task teams can consider using proxy indicators. Cost, complexity, and/or the timeliness of data collection may prevent a result from being measured directly. A proxy indicator is an indicator that is substituted for another indicator that would be hard to measure directly. Proxy indicators may reveal performance trends and make managers aware of potential problems or areas of success. For example, the effectiveness of a child health program may best be measured by mortality rates, but these rates are difficult to determine over short periods of time. For this reason, proxy indicators, such as the percentage of births attended by trained health personnel and the availability and frequency of use of health facilities, may be used instead.

32. Core sector indicators. A core sector indicator is an outcome or output indicator that can be measured and monitored at the project level and aggregated across projects and countries for corporate reporting. Core sector indicators cover only some of the most recurrent results of Bank operations. Check the core sector indicator list on the World Bank Intranet to determine which core indicators apply to the specific project. The use of core sector indicators in project results frameworks is mandatory for IBRD and IDA operations and highly recommended for recipient-executed trust fund projects. Teams need to ensure that the mandatory indicator "direct project beneficiaries (number), of which female (percentage)" is included in all projects.
In addition, task teams must use other core indicators that are relevant to track results in the project, as applicable. In the Operations Portal, core sector indicators are selected from a drop-down menu. The use of core sector indicators does not prevent task teams from adding other relevant, project-customized indicators.

33. Some principles to follow when selecting indicators:
- Identify relevant experts to guide the selection of indicators. Seek guidance from key experts to determine which indicators are the most useful (e.g., for technical, sector-specific indicators the team's technical expert could provide relevant advice).
- Less is better. Avoid too many indicators. If possible, limit the number of outcome indicators to five or fewer and the overall number of indicators to not more than 15.
- Make it easy. Indicator data should be easy to collect. If possible, select indicators for which data collection mechanisms and systems already exist in the country.
- Use cost-effective indicators. One of the challenges in designing a results framework is to select indicators that are appropriate to the conditions on the ground and that can be collected with a reasonable amount of resources and within a reasonable period of time. When possible, avoid indicators that are too expensive to monitor, particularly if they require baselines that are not easily obtained (see the discussion on baselines below).

34. Sources of data, frequency of reporting, and responsibility for collecting information. In most situations, it is preferable to anchor the measurement of results in existing data sources. Basic monitoring information is usually available through a combination of administrative databases and sample or census-based surveys. Selecting the data sources is a fundamental decision that should be made together with the selection of indicators. Where existing data cannot adequately measure the desired changes, the lessons of other similar interventions should inform plans for data collection. It is also worth recognizing upfront that a decision to include project activities that improve the availability and reliability of data is a welcome byproduct of the process of designing a results framework.

35. For each indicator, the results framework specifies where the information will come from, who will gather the information, and how often it will be reported. Answering the following questions is important when selecting indicators: What are the sources of data (administrative collection, regular survey data, bespoke studies, project files, audit reports, beneficiary feedback, etc.)? What are the data collection methods? Who will collect the data? How often will the data be collected? What are the cost and difficulty of collecting the data? How will the data collection be funded? Who will analyze the data? Who will report the data, and in what form and forum should the data be reported? Who will use the data? What capacity-strengthening measures are needed?
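The sketch below gathers the data-source elements discussed above and the baseline, intermediate-target, and final-target elements discussed in Section IV.B below into a single illustrative record for one hypothetical water-access indicator, including the approach of deriving targets from a baseline and an assumed annual rate of change. The structure, field names, and numbers are the editor's assumptions, not a Bank template.

```python
# Illustrative sketch only: one way to record an indicator's specification
# (unit, baseline, targets, data source, frequency, responsibility) and to
# derive intermediate targets from a baseline plus an assumed annual change.
from dataclasses import dataclass, field


@dataclass
class IndicatorSpec:
    name: str
    unit: str
    baseline: float
    annual_change: float            # assumed absolute change per year
    years: int                      # implementation period in years
    data_source: str
    frequency: str
    responsible_entity: str
    intermediate_targets: list = field(default_factory=list)

    def set_targets(self):
        """Derive year-by-year targets from the baseline and the assumed annual change."""
        self.intermediate_targets = [
            round(self.baseline + self.annual_change * year, 1)
            for year in range(1, self.years + 1)
        ]
        return self.intermediate_targets


indicator = IndicatorSpec(
    name="People in rural areas with access to improved water sources",
    unit="percentage of target population",
    baseline=42.0,
    annual_change=6.0,              # assumption standing in for historical trends
    years=5,
    data_source="national household survey plus utility administrative records",
    frequency="annual",
    responsible_entity="implementing agency M&E unit",
)

targets = indicator.set_targets()
print(f"Intermediate targets: {targets[:-1]}, final target: {targets[-1]}")
```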
36. Units of measurement. Once the indicators have been identified, the next step is to identify units of measurement for indicators of outputs and intermediate results and, where feasible and meaningful, measures of impact. Selecting the appropriate unit of measurement for an indicator is critical to its usefulness. Units of measurement may be quantitative (e.g., kilometers, numbers of people, percentages) or qualitative (e.g., yes/no). In either case, both the indicator and the definition of the unit of measurement need to be clear.

B. Baselines, Intermediate Targets and Final Target

37. The baseline is the first critical measurement of the performance indicators and is used as a starting point against which to monitor future performance of the project. (If, in exceptional cases, baselines such as survey data are not available, an explanation is provided, accompanied by detailed plans for how they will be established during the first year of implementation.) In some circumstances the baseline can be zero. Sometimes ranges of approximate values can be used if they make sense and there is good evidence to support the validity of such a range (e.g., less than 30 percent of roads in good condition). Baselines and targets must be expressed in the same units of measurement.

38. Selecting baselines. Realistic timeframes are needed to allow for the establishment of the necessary baselines. If it is too difficult to establish a baseline within a reasonable period of time, this is a good indication that alternative indicators should be considered. Investing significant time and resources in baseline studies may not be warranted and may cause delays that have a negative effect on other project activities. At the same time, the absence of a baseline often reflects an underlying weakness in defining the problem to be addressed, showing at a minimum that the design is not grounded in evidence but in conventional wisdom. Therefore, pragmatic and practically viable solutions need to be found. A critical question to ask is: will the baseline be used only by the project, or is there a commitment to use and continue measuring the indicator in the future? Similarly, when developing innovative and new indicators, teams should analyze existing data collection instruments and suggest changes, if needed. Defining targets for new indicators for which no baselines exist may pose a challenge as well.

39. Selecting target values. Target values are to be identified for each outcome or intermediate indicator. These estimates are usually determined on the basis of existing technical expertise (on the borrower side or elsewhere), past trends, and careful assessment of what is likely to be achieved. Targets provide benchmarks against which performance can be judged. They vary according to the indicator for which they are set and to the level of certainty and predictability of the dimension measured. Each indicator will have intermediate target values (i.e., values to be attained during each year of project implementation), to facilitate project monitoring, and a final target value (the value that the indicator should attain by the end of the project).

40. Setting target values is critical, since comparing the actual values achieved with the target values is how the success of a project is assessed; if the project has been restructured and the original targets have been modified, project results will be compared against the revised targets. One method to establish targets is to start with the baseline value and then use historical trends or another estimate of the rate of change to set the desired targets over the implementation period, taking into account available funding and other resources. It is important to include information in the project documentation on how target values were set (i.e., whether they were based on special calculations, drawn from projections, or created using other criteria). Striking the right balance when establishing target values (neither too low nor overly ambitious) is also an important consideration that requires judgment.

Section V. Monitoring & Evaluation Arrangements

41. Monitoring and evaluation (M&E) are two complementary but distinct processes. Monitoring consists of tracking inputs, activities, outputs, outcomes, and other aspects of the project on an ongoing basis during the implementation period, as an integral part of the project management function. Evaluation, on the other hand, is a process by which project results, impacts, and implementation performance are assessed. Projects are evaluated at discrete points in time (usually at the project's mid-point and completion) along some key dimensions (e.g., relevance, efficiency, efficacy, impact, performance). Evaluations often seek an outside perspective from relevant experts.

42. Information produced by M&E systems is normally used to report to different stakeholders (Government, World Bank, civil society, IEG, other donors) on the progress and performance of a project, becoming a means to facilitate public awareness and promote transparency and accountability.

43. The borrower, normally through its implementing agency, is responsible for gathering data, reporting, and using the information for monitoring purposes during implementation. Taking this into account, the M&E arrangements proposed during preparation need to reflect the borrower's institutional capacity and address any issues related to staffing, processes, accountabilities and responsibilities, equipment, knowledge and skills, and the budget required to carry out the M&E function. It is also important to map out the client's own project and program monitoring cycle to evaluate when relevant data will become available. Certain indicators are collected only once a year or less frequently, and therefore systematic reporting, for instance in ISRs, may not be able to reflect trends or changes in the short term. Actual values for indicators are updated and entered in the ISR results section after every official implementation support review.

44. On the Bank side, the task team advises the borrower and assesses the borrower's arrangements to ensure that adequate arrangements are in place to monitor results during implementation, since task teams will rely on the borrower's reports to monitor progress and inform management. Candid and timely monitoring of the results framework is essential to alert the borrower, task teams, managers, and reviewers to any issues that may arise during implementation and to take proactive measures if needed (for instance, restructuring) to enhance the project's likelihood of meeting its objectives.

45. The results framework is a tool available to task team members to guide the M&E function during implementation support; its use during key implementation support reviews is essential to update the project's progress toward the achievement of its PDOs. It is good practice to include the results framework with updated progress as part of the aide-memoire. This will also facilitate the task team's internal reporting as monitoring is incorporated into the ISR.

46. Assessing institutional arrangements for results M&E. To assess the borrower's proposed M&E arrangements, task teams usually carry out a situation analysis covering the following aspects:
(a) Identify current sources of data, issues, gaps, ongoing initiatives to strengthen databases, etc., for either the sector as a whole or a subsector, whichever is applicable to the project;
(b) Identify the nature of the implementing entities (existing government agencies, decentralized government agencies, project beneficiaries, civil society organizations, and other entities) to be engaged in the project's M&E activities;
(c) Identify, assess, and build upon existing M&E systems (government systems, community-based monitoring, community scorecards, or citizen scorecards);
(d) Identify M&E capacity weaknesses (such as staffing, equipment, knowledge/skills, roles and responsibilities, setup/processes) of those entities that will be involved in project M&E; and
(e) Provide resources to strengthen capacity.

47. Focus is placed on developing the country/sector capacity to monitor and evaluate its projects and programs as well as to use the information for decision-making.

48. The PAD reflects on the above aspects and clearly identifies the institutional arrangements for implementing the M&E activities. When drafting the PAD, task teams usually identify the following: (a) all key entities to be engaged in the project's M&E activities, with clear roles and responsibilities; (b) the type of M&E system to be used, for example, an existing system, a community-based system, a new system, etc.; (c) key lessons learned from past experiences by the Bank or other development partners in the country/sector; and (d) M&E weaknesses and the resources required to address those weaknesses.

49. The M&E assessment is expected to bring about a common and shared understanding of apparent problems and collective efforts to solve them. When carrying out the M&E capacity assessment, the task team can be helped by some guiding questions:
- Are the institutional arrangements for data collection (responsible staff/units and time frames), the capacity of the responsible agency, and the cost of the results-based M&E well understood and thorough?
- If the project is drawing on data collected by government statistical offices or line agencies, which statistics would be used, what is the reliability of this information, and are there arrangements in place to validate the data?
- Where information is to be derived specifically for the measurement of project results and outcomes, what are the associated costs and institutional responsibilities?
- How do the proposed results framework and monitoring requirements fit with the borrower's current and future M&E system?
- How will monitoring and evaluation complement project management?
- If the M&E capacity of the responsible agency is assessed to be weak, will adequate resources be provided to strengthen capacity?
- Where there is limited capacity in the country to derive the necessary information, how will local capacity be supplemented through the project, and what will be the costs of doing this?
- Can participatory M&E arrangements be included in management and capacity-building initiatives to leverage the efforts involving affected communities?

50. Participatory M&E approaches: roles and responsibilities. Participatory M&E can be used to reveal the degree of effectiveness and efficiency in the achievement of objectives according to the perspectives of stakeholders. Participatory M&E brings together diverse stakeholders, giving them an opportunity to take part in deciding what success should look like and what indicators should be used to evaluate success. In this process, all stakeholders discuss and plan the project together from the outset, jointly setting the objectives, targets, indicators, and work process. It is a process that leads to knowledge generation, collaborative problem-solving, and corrective action by involving all levels of stakeholders in shared decision-making. (For additional guidance on the subject, refer to Participatory and Third Party Monitoring in World Bank-financed Projects.)

51. Strengthening M&E capacity. If necessary, a project could include an explicit component for improving the client's M&E capacity, possibly as part of a wider institutional capacity development initiative. Production of statistical information is essential, but it is equally important to develop the capacity to use these data in planning and decision-making. While country-specific M&E systems are increasingly required to produce more complex information, there is also a need to develop management capacity and systems for using the information generated by these systems. Demand for M&E can be sustained only if users have the skills, incentives, and authority to use the information created by the process.

52. Deciding whether a specialized M&E unit or specialist is needed. A critical question faced during project preparation is whether the M&E system requires specialized staff, institutional arrangements, and resources, or whether this function can be handled as part of the overall management function of the borrower. The answer depends on a range of factors, including (a) the nature and complexity of the project; (b) the current status of M&E at the sector or project level; (c) the entities in the country that are involved in M&E and their roles and responsibilities; (d) the existing human capacity to perform effective M&E, including data collection and reporting; (e) the complexity of reporting requirements; (f) the availability and reliability of the required data; and (g) the presence of management information systems.

53. Dissemination of M&E information. When designing M&E arrangements, it is important to consider who the potential users of the information produced are. This will help design better communication and dissemination strategies. Three core elements of an M&E dissemination plan are:
- Analysis of the information needs of the users,
- Dissemination via the users' preferred media, and
- Tailoring of content to the users' information needs.

54. Key elements of good M&E arrangements. The following criteria describe some elements of good M&E arrangements:
- Beneficiaries and partners are actively involved in planning, conducting, reviewing, and interpreting performance information whenever possible.
- Project monitoring arrangements are integrated into the government's existing management systems (at central and local levels, depending on the reach of the project).
- The information produced is responsive to the needs of the different users (Government, Bank Management, policymakers, beneficiaries, etc.).
- M&E capacity for collecting, analyzing, and reporting performance information is in place.
- New indicators are introduced only when there is a need for such indicators and a commitment and resources on the part of the client to continue tracking them.
- Costs for M&E are budgeted for.

55. M&E estimated costs. M&E costs usually represent between 3 and 5 percent of the project cost and are budgeted as part of the borrower's administrative costs. If the borrower so requires, these costs can be financed by the project.

56. Impact evaluations. During project preparation, teams face the decision whether to include impact evaluations in the project design. The purpose of an impact evaluation is to establish causality between the project's activities and its outcomes, and it can be a useful tool for measuring results and attribution. In other words, impact evaluation is the only tool that enables the attribution of results to the project in question. Moreover, it tells the team how much of the change is caused by the project intervention. This can inform the decision to scale financing up or down, or to increase or modify the project scope to ensure better results. In its most rigorous form, an impact evaluation compares welfare outcomes of the intervention during the period being evaluated with an explicit counterfactual, that is, the hypothetical situation that would have prevailed in the absence of the intervention. However, impact evaluations tend to be costly and must be well designed and implemented to be meaningful. It is not necessary for every IPF operation to include an impact evaluation in the design. (For available resources, refer to the World Bank Development Impact Evaluation Initiative webpage.)

57. Some considerations that can help guide the decision of whether an impact evaluation would be desirable are (a) when the project is considered to be of strategic relevance for poverty reduction; (b) when the project or some of the interventions financed by the project are testing an innovative approach to poverty reduction; (c) when there is not sufficient evidence that the type of project/intervention proposed works well in a number of different contexts; and (d) when outcomes are expected to materialize within the project's lifetime. If a project includes an impact evaluation as a tool for measuring project results/outcomes (or those of the government program supported by the project), the quality of the plan for such an impact evaluation, as well as the efforts to develop country capacity, should be discussed in the project documents. The World Bank may share the costs of impact evaluation with clients: while the Bank uses internal funds and trust funds, the client uses project financing. (Supporting donor trust funds include, but are not limited to, BNPP, BPRP, LPRP, DFID, RSB, EPDF-FTI, GAP, HRBF, IDF, KCP, SIEF, TFESSD, and UNAIDS/UBW TF.)

58. Annex B includes some specific considerations to guide task teams supporting borrowers in fragile and conflict-affected situations (FCS).
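To illustrate the counterfactual logic described in paragraph 56, the sketch below computes a simple difference-in-differences estimate from invented before/after outcome values for a treated group and a comparison group. This is an editor's illustration of the general idea only, not the Bank's prescribed method; a real impact evaluation requires a credible identification strategy (e.g., randomization or a carefully matched comparison group), not just this arithmetic.

```python
# Illustrative only: difference-in-differences arithmetic with invented numbers.
# Outcome: share of households using an improved water source (percent).

treated_before, treated_after = 42.0, 68.0         # project areas
comparison_before, comparison_after = 40.0, 51.0   # similar non-project areas

change_treated = treated_after - treated_before            # 26.0 points
change_comparison = comparison_after - comparison_before   # 11.0 points

# The comparison group's change proxies the counterfactual: what would likely
# have happened in project areas without the intervention.
estimated_impact = change_treated - change_comparison      # 15.0 points

print(f"Naive before/after change in project areas: {change_treated:.1f} points")
print(f"Estimated impact net of the counterfactual trend: {estimated_impact:.1f} points")
```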
Annex A. Results Checklist

PDO
- Project Identification (PCN): The PDO should be identified, with a clear definition of what the principal outcome would be for the target group if the project is successful.
- Project Appraisal (PAD): The PDO should be as concise and short as possible, mention the primary target group, mention the response expected from the primary target group, and focus on outcomes for which the project can reasonably be held accountable.
- Implementation Support (ISR): Report on progress made toward achievement of the PDO. Assign a performance rating to the PDO (based on the likelihood of the project being able to achieve its objectives).
- Project Completion (ICR): Discuss the extent to which the operation achieved its objectives, based on the original results framework or the current one if restructured. Include evidence to justify assertions and provide references to the evidence used.

Results Indicators
- Project Identification (PCN): Formulate preliminary outcome-level indicators. The team should begin to consider what intermediate-level results indicators and sources are available.
- Project Appraisal (PAD): A limited set of indicators (fewer than 5) for the PDO should be available. Indicators should be Specific, Measurable, Attributable, Realistic, and Time-bound (SMART). Indicators should measure all PDO aspects, and each project component should also have indicators to measure progress.
- Implementation Support (ISR): All ISR results data are made public and are being linked to the corporate-level results monitoring dashboard, so confirm (and re-check) progress data for all indicators before submitting the ISR. Make an extra effort to correctly report on the Core Sector Indicators. Any change in the PDO requires a level-1 restructuring.
- Project Completion (ICR): Articulate how the intermediate results (outputs) have contributed to the desired outcomes. When reporting on outputs, make sure that units conform to Network anchor guidance.

Baseline values and targets
- Project Identification (PCN): Baselines and tentative targets are not required, but they can help make the case for the intervention stronger. If not available, consideration should be given to how to develop them and to the use of secondary data sources.
- Project Appraisal (PAD): The indicators have to contain baselines and targets. If baselines will be established during the first year of implementation, detailed plans for how this will be done should be included in the annex on the results framework.
- Implementation Support (ISR): By the first ISR, all indicators are expected to have baseline values and targets. If not, the ISR should specify the actions taken and to be taken to collect baselines and develop reasonable targets.
- Project Completion (ICR): Assess the availability and quality of baselines and targets to support the self-evaluation. Baselines, targets, and values at completion are to be provided for all indicators.

Institutional arrangements for monitoring and evaluation
- Project Identification (PCN): Examine the institutional capacity on the part of the borrower to handle project preparation and implementation.
- Project Appraisal (PAD): Analyze and explain the institutional arrangements for data collection (responsible staff/units and time frames) as well as the capacity of the responsible agency and the cost of the results-based M&E. If the M&E capacity of the responsible agency is assessed to be weak, explain how the project expects to strengthen capacity.
- Implementation Support (ISR): Include information on the data collection methods used; the analysis of data over time and space; the use of information to identify obstacles, design solutions, and agree on actions; and Management's comments on all the above issues.
- Project Completion (ICR): Explain the role that country M&E systems played in supporting project and sector M&E. Highlight any support that the project provided to strengthen national M&E capacity and systems.

Use of client systems for M&E
- Project Identification (PCN): Discuss whether information for the project is available from country systems.
- Project Appraisal (PAD): Where possible, the project should draw upon monitoring indicators from the borrower's M&E system.
- Implementation Support (ISR): Monitor and help to promote the use of client systems in decision-making.
- Project Completion (ICR): Explain the use of M&E systems during implementation.
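As a small illustration of the ISR-stage items in the checklist above (confirming that every indicator has a baseline, targets, and a current value before the ISR is submitted), the sketch below runs a basic completeness check over a hypothetical list of indicator records. The structure and data are the editor's assumptions for illustration only, not an actual Operations Portal interface.

```python
# Illustrative only: a pre-submission completeness check over hypothetical
# indicator records, reflecting the ISR-stage checklist items above.

indicators = [
    {"name": "Direct project beneficiaries (number)", "baseline": 0,
     "targets": [5000, 12000, 20000], "current": 7400},
    {"name": "People with access to improved water sources (%)", "baseline": 42.0,
     "targets": [48, 54, 60], "current": None},
]

def missing_fields(record):
    """Return the names of required fields that are absent or empty."""
    problems = []
    if record.get("baseline") is None:
        problems.append("baseline")
    if not record.get("targets"):
        problems.append("targets")
    if record.get("current") is None:
        problems.append("current value")
    return problems

for record in indicators:
    gaps = missing_fields(record)
    status = "OK" if not gaps else "missing " + ", ".join(gaps)
    print(f"{record['name']}: {status}")
```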
Annex B. Results and M&E in Fragile and Conflict-Affected Situations

1. Results chain. Some specific considerations to bear in mind when establishing the results chain in an FCS environment are as follows:
- Identify any links with the drivers of fragility or sources of resilience within the results chain. This should be based on available country fragility analysis;
- Regularly monitor and update the intervention logic of the project design, as the FCS environment is likely to change;
- Clearly distinguish between higher-level goals (e.g., improved security situation) and the project's objective (e.g., reintegration of ex-combatants).

2. Formulation of the PDO. Projects in an FCS environment are often designed to address multiple problems (e.g., classical development problems related to access to basic services as well as problems related to fragility and resilience). However, experience has shown that it is common in FCS-related projects to find descriptions of the objective that is expected to be achieved (e.g., increased access to health services or to income-generating activities) but not of the impact of these interventions on the drivers of fragility or on the sources of resilience, although the latter may be even more important for the project rationale.

3. Teams are encouraged to make an effort to define an inclusive PDO that describes all the envisioned objectives. This will allow for a better definition of the indicators in the results framework and ultimately help set the right balance between development outcomes and FCS-related outcomes when it comes to project evaluation. Small changes in the PDO wording can help to put the project in the right context given the special FCS environment.

4. Teams working in an FCS context also need to pay particular attention when formulating the PDO to finding the right balance and avoiding high-level objectives beyond the project's possibilities. The following are examples of high-level objectives observed in project statements proposed by different teams and how they were restated to better reflect what could realistically be achieved through the projects:
- From "Improve the security situation in the country" to "Demobilize and reintegrate ex-combatants." In this example, the project aim was to demobilize combatants and reintegrate them into civil society. The team's assumption was that if combatants are demobilized and reintegrated, the overall security situation will improve. However, what the project can realistically achieve and be held accountable for is the demobilization part, while many other factors beyond the influence of the project contribute to improvements in the country's security situation.
- "Bring peace and stability to the target areas" was rephrased as "Improve the presence and functioning of basic local government services," which was the actual focus and the results that the specific project was able to deliver.
- From "Reduce poverty in conflict-affected populations" to "Increase access to income-generating activities for IDPs and host communities."
- A concept-stage PDO stated as "Bring social cohesion to the conflict-affected population" was redefined as "Establish and improve access to local conflict resolution mechanisms" after discussions with the Government and an assessment of similar projects' completion reports were carried out.

5. FCS indicators. Identifying appropriate indicators on FCS-related issues that can be used in a project results framework can be a challenging task. First, such indicators usually refer to broad concepts such as fragility, resilience, peace, and/or stability that are difficult to measure. Second, indicators are influenced by numerous elements in the project context. Third, observable change in an FCS context is normally a long-term undertaking. Nevertheless, project teams should explore the possibility of finding and identifying indicators relevant to their projects and of assessing whether changes in those indicators may be attributable to the project interventions.

6. Some general recommendations when defining indicators for an FCS project follow:
- Most indicators should be disaggregated by gender and any other relevant fragility dimension (e.g., by ethnic group, religion, or geographic location).
- Since perception indicators/surveys play a key role in an FCS environment, teams should explore options to include beneficiary surveys in the project design.
- Use "good enough" proxy indicators relevant to the project when measuring complex issues (e.g., state presence could be measured by the number of communities in which administrative offices are functioning).
- Use the World Bank core indicators on conflict prevention, social inclusion, and civic participation as a starting point.
- Refer to the community-driven development (CDD) indicators database (CDD Resources: Indicators for Demand for Good Governance and Social Accountability) to look for indicators on community development.

7. A selection of the Fragility Indicators that could be adjusted to World Bank projects includes the following:
- Diversity in representation (by gender, region, and social group) in key decision-making bodies (legislature, government, military, justice);
- Number of intra-group disputes that produce violence;
- Number of internally displaced people plus refugees due to conflict;
- Public confidence in the performance of justice systems (formal/customary), including human rights mechanisms;
- Proximity of formal and customary justice institutions to the public (basket of indicators);
- State monopoly and capacity to collect and administer taxes, customs, and fees across its territory;
- Quality of public financial management and internal and public oversight mechanisms (multiple indicators from PEFA);
- Percentage of the population that reports paying a bribe when obtaining a public service or when interacting with a public official; and
- Distribution of services (by region and social group).

8. In addition to the Fragility Indicators, there are many sources that can serve to identify specific indicators around fragility and conflict to be used in FCS-related projects. When using these sources, project teams are advised to customize them to the project context, make sure that data collection is possible, and find clear attribution links between the project's intervention and its impact. A selection of potential sources is presented below; more information on each can be found through a simple Internet search:
- Afrobarometer. Perception survey data from several African states;
- Bertelsmann Transformation Index. Political legitimacy, democratic transitions, etc.;
- Global Peace Index. A selection of indicators around peace and conflict that is translated into an index and a country ranking;
- Corruption Perceptions Index. Transparency International's global perception survey of corruption;
- Failed States Index. Social, political, and economic pressures, and state legitimacy;
- Freedom in the World. Assessments of global political rights and civil liberties;
- Gallup World Poll. Perception surveys from a range of countries on political and social issues;
- Ibrahim Index of African Governance. Includes indicators on safety, rule of law, participation, human rights, sustainable economic opportunity, and human development;
- Minorities at Risk. Analyzes the status and conflicts of politically active communal groups;
- Open Budget Index. Measures budget transparency and accountability;
- State Fragility Index. Includes measures of state effectiveness and legitimacy;
- UN Security Council Resolution 1325 indicators. Track the participation of women and the integration of gender issues into peacebuilding and post-conflict recovery (in development);
- Uppsala Conflict Data. Rigorous data on numbers of conflict deaths;
- World Development Indicators. Over 400 indicators that (in some cases) can be disaggregated for conflict and fragility monitoring purposes.

9. FCS M&E arrangements. When building the M&E system for an operation in an FCS context, special considerations include:
- Being pragmatic. Go for "good enough" monitoring arrangements as opposed to perfect systems (e.g., use available data initially even if the quality might be mixed), and aim to improve data quality during implementation.
- Building country M&E capacity and systems as part of project design. This is a longer process that may not yield results in the short run but will have medium- and long-term payback. An example of this approach can be found in education- or health-related projects that include components for building the education/health management information systems. The system may not provide data during the first years, and additional data collection efforts are needed just for the specific project purposes; however, after some years the system will start working and providing data. Building national M&E systems from the beginning is one of the key features an FCS-related project should focus on.
- Planning for M&E capacity-building activities and budgeting for them. M&E arrangements in FCS are simply more expensive. Budget for it! Specific examples of capacity-building activities include study tours to neighboring countries with good M&E systems (e.g., Rwanda, Uganda, South Africa), mentoring programs, and on-demand consultants. M&E capacity building should be part of both the design and the implementation stage of the project.
- Making the M&E system work for the Government. It is important to internalize and mainstream M&E initiatives in the Government's decision-making processes, fundraising initiatives, communication strategies, etc. This will contribute to increasing the Government's commitment and efforts toward making the M&E system work.
- Phasing Government M&E responsibilities in line with the existing capacity. Initially, M&E work might need to be done mostly by consultants. However, at later stages, as local capacity is built, the Government will be able to gradually take over more responsibilities.
- Including risk monitoring elements in the M&E system. Elements identified in the ORAF should be translated into concrete elements in the M&E system. Also, include real-time monitoring tools that allow course corrections in a rapidly changing project environment (e.g., use UN security updates to plan and adjust project implementation plans on the ground).
- Using innovative data collection tools (IT-based, third-party monitoring) as much as possible, as they are normally cheaper and useful for projects operating in insecure areas. Innovative approaches using technology for M&E may help to improve governance and accountability in public sector service delivery and affect development outcomes (see Box B1 and the Staff Connections release "South Asia Team Successfully Pilots Remote Supervision," May 6, 2010). M&E can play a key role regarding accountability; explore options to link both elements in the M&E system.
Box B1. Remote M&E through ICT with a Focus on Compliance Oversight: Afghanistan Pilot

In 2010, the South Asia Region pioneered several remote, compliance oversight-focused strategies to aid M&E, funded by a Governance Partnership Facility grant and using information and communication technology (ICT) solutions. In Afghanistan, one strategy was piloted to allow remote supervision and asset verification on the Afghanistan Emergency Irrigation Rehabilitation Project. GPS-enabled cameras were used to capture georeferenced photos of irrigation project assets, conditions of canals, and other data, which could then be sent to the project coordination unit in the Ministry of Energy and Water in Kabul for inclusion in the project database. At the project coordination unit, the images can be viewed through web browsers and/or Google Earth, verifying the exact time and location of each image and the name of the observer. The same principles were applied in the health and education sectors. To make sure project beneficiaries receive public services, teams have been developing Beneficiary Tracking and Verification Systems, which use devices such as mobile phones, smart cards, and interactive voice response to gather data on service delivery for health and education projects. The data collected can be linked to performance indicators for systematic monitoring of project progress toward sector-wide goals. In addition, an external component accessible to the public can improve transparency, accountability, and demand for good governance.

10. Baseline data. Missing baseline data is one of the most common problems found in FCS-related projects. Teams may want to explore some of the following ideas when confronted with this situation (a minimal sketch of the approach follows this list):

- Explore options to do a quick data survey (using ICT-based tools). If available, use project preparation advances to carry out a baseline assessment.
- Use data that might be available for a specific geographical area as a reference point/benchmark; the baseline can be updated with more comprehensive data at a later stage.
- Inquire whether NGOs or other donors have available information; there is more data out there than imagined!
- Explore the use of social and environmental assessment studies to gather basic data on project indicators and/or beneficiaries.
- Explore the use of internationally available indicators (e.g., peace indicators, the Failed States Index) as a starting point for baseline data.
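The following sketch illustrates the "use what is available" approach to baselines described above: candidate values from different sources are compared, the most recent usable value is adopted as a provisional baseline, and its quality is flagged so it can be replaced once project-specific data arrive. The sources, years, and figures are invented for illustration.

```python
from typing import Optional

# Invented candidate baseline values for a single indicator, drawn from different sources.
candidates = [
    {"source": "NGO household survey (project districts)", "year": 2011, "value": 42.0, "quality": "medium"},
    {"source": "International index (national level)", "year": 2012, "value": 55.0, "quality": "low"},
    {"source": "Rapid ICT-based phone survey", "year": 2013, "value": None, "quality": "planned"},
]

def provisional_baseline(candidates) -> Optional[dict]:
    """Adopt the most recent candidate that actually has a value, flagged as provisional."""
    usable = [c for c in candidates if c["value"] is not None]
    if not usable:
        return None
    best = max(usable, key=lambda c: c["year"])
    return {**best, "status": "provisional - replace when project survey data become available"}

print(provisional_baseline(candidates))
```

The point of the status flag is to make clear, in the results framework and in subsequent reporting, that the value is a benchmark to be refined rather than a definitive measurement.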