Search results

  1. Contribution Analysis


    Contribution Analysis is an approach for assessing causal questions and inferring causality in real-life program evaluations. It offers a step-by-step approach designed to help managers, researchers, and policymakers arrive at conclusions about the contribution their program has made (or is currently making) to particular outcomes. Its essential value is that it reduces uncertainty about the intervention's contribution to the observed results by building an understanding of why those results occurred (or did not) and of the roles played by the intervention and by other internal and external factors.

    Also available in: Português, Español
  2. Cases in Outcome Harvesting


    This report from the World Bank documents the pilot of a program that examines the use of outcome harvesting, in combination with the Bank's results management approach, to understand how change happens in complex environments. During the pilot, two to five years of program results were analysed from 10 ongoing initiatives.

  3. 2nd Asia Pacific Evaluation Association (APEA) International Evaluation Conference

    25th February, 2019 to 1st March, 2019

    The 2nd Asia Pacific Evaluation Association (APEA) International Evaluation Conference will be held in Pasig City from February 25 to March 1. The conference's overarching theme is "Reducing Poverty, Enabling Peace: Evaluation for Accountability, Transparency and Sustainable Development".

  4. Impact Evaluation in Practice


    This book from the World Bank provides a detailed introduction to impact evaluations in the development field. It also provides a number of tools and approaches for conducting impact evaluations.

  5. Summer School Programme: Result-based M&E and Outcome and Impact Evaluation

    7th September, 2015 to 18th September, 2015

    As part of the 10th Annual Edition of the joint Bologna Centre for International Development / Department of Economics Summer School Programme on Monitoring and Evaluation, the programme's focus for the September 2015 modules is on Result-based Monitoring and Evaluation (first module) and Outcome and Impact Evaluation (second module). 

  6. Catholic Relief Services' (CRS) Guidance for Developing Logical and Results Frameworks


    This document was primarily written to provide guidance for conceptualizing, writing, selecting and measuring project performance indicators.

  7. Identify potential unintended results


    Many evaluations and logic models focus only on intended outcomes and impacts, but unintended results, whether positive or negative, can be important too.

    Use these options before a program is implemented to identify possible unintended outcomes and impacts, especially negative impacts (those that make things worse, not better), which should also be investigated and tracked.

    Make sure your data collection remains open to unintended results that you have not anticipated by including some open-ended questions in interviews and questionnaires, and by encouraging reporting of unexpected results.

  8. Monitoring and Evaluation for Results (Multiple dates/locations)

    20th June, 2016 to 16th November, 2016
    United Kingdom

    Monitoring and Evaluation (M&E) for Results is for managers and monitoring and evaluation officers who need to supervise, manage, plan and implement M&E in their projects and programmes. The course addresses M&E for the entire results chain, including the all-important outcomes. The approach is participatory, so participants are able to put the theory covered into action.

  9. Check Results Match Expert Predictions

    Evaluation Option

    Expert predictions can be a useful part of developing the program theory. They can be drawn from the literature or gathered by engaging a group of experts.

  10. Compare results to the counterfactual


    One of the three tasks involved in understanding causes is to compare the observed results to those you would expect if the intervention had not been implemented. This comparison scenario is known as the 'counterfactual'.