18 results
The use of monitoring and evaluation in agriculture and rural development projects (Resource)
The document reviews monitoring and evaluation practices carried out in agricultural and rural development projects financed by the World Bank.

From Evidence to Action: The Story of Cash Transfers and Impact Evaluation in Sub-Saharan Africa (Resource)
This book presents a detailed overview of the impact evaluations of cash transfer programmes carried out by the Transfer Project and the Food and Agriculture Organization of the United Nations (FAO)'s From Protection to Production project.

Evaluability assessment for impact evaluation (Resource)
This document provides an overview of the utility of, and specific guidance and a tool for, implementing an evaluability assessment before an impact evaluation is undertaken.

UNICEF webinar: Overview of impact evaluation (Resource)
We often talk about the importance of knowing the impact of our work, but how is impact measured in practice? What are the ten basic things about impact evaluation that a UNICEF officer should know?

UNICEF webinar: Theory of change (Resource)
What is a Theory of Change? How is it different from a logframe? Why is it such an important part of an impact evaluation?

UNICEF webinar: Randomized controlled trials (Resource)
What are the key features of an RCT? Are RCTs really the gold standard? What ethical and practical issues do I need to consider before deciding to do an RCT?

UNICEF Webinar: Quasi-experimental design and methods (Resource)
What is the main difference between quasi-experiments and RCTs? How can I measure impact when establishing a control group is not an option?

Evaluability assessments and choice of evaluation methods (Resource)
In this Centre for Development Impact seminar, Richard Longhurst (IDS) and Sarah Mistry (BOND) will highlight the importance of evaluability assessments for development projects.

52 weeks of BetterEvaluation: Using evaluability assessment to improve Terms of Reference (Blog)
Many problems with evaluations can be traced back to the Terms of Reference (ToR) - the statement of what is required in an evaluation. Many ToRs are too vague, too ambitious, inaccurate or not appropriate.

Better Monitoring: Help us address the neglected ‘M’ in M&E (Blog)
Effective monitoring is essential for managing performance; despite this, monitoring is often undervalued and understood quite narrowly.

What do we need for better monitoring? (Blog)
This blog by Jo Hall and Patricia Rogers provides an update on the Global Partnership for Better Monitoring project.

Conversations to have when designing a program: Fostering evaluative thinking (Blog)
The first step in evaluating a program is knowing whether you can evaluate it – that the program is ‘evaluable’.

Conducting and using evaluability assessments in CGIAR (Resource)
This resource forms part of CGIAR's evaluation guidelines, describing how to use evaluability assessments to facilitate better evaluation outcomes.

Evaluability assessments are an essential new tool for managers (Blog)
The evaluation report has been finalized, recommendations have been made, the findings have been presented to management and funders, and then … nothing happens. In this post, originally published by CGIAR, Rick Davies and Keith Child discuss the new…

Planning evaluability assessments: A synthesis of the literature with recommendations (Resource)
The report presents a synthesis of the literature on evaluability assessments.

Impact evaluation: UNICEF's briefs and videos (Blog)
Nikola Balvin, Knowledge Management Specialist at the UNICEF Office of Research – Innocenti, presents new resources on impact evaluation and discusses how they can be used to support managers who commission impact evaluations.

UNICEF webinar: Overview of data collection and analysis methods in Impact Evaluation (Resource)
What is the value of using mixed methods in impact evaluation? What methods and designs are appropriate for answering descriptive, causal and evaluative questions?

UNICEF webinar: Comparative case studies (Resource)
What does a non-experimental evaluation look like? How can we evaluate interventions implemented across multiple contexts, where constructing a control group is not feasible?