An impact evaluation provides information about the impacts produced by an intervention - positive and negative, intended and unintended, direct and indirect. This means that an impact evaluation must establish what has caused the observed changes (in this case 'impacts'), a process referred to as causal attribution (also known as causal inference).
Event | Course | 7th September, 2015 to 18th September, 2015 | Italy | Paid
As part of the 10th Annual Edition of the joint Bologna Centre for International Development / Department of Economics Summer School Programme on Monitoring and Evaluation, the programme's focus for the September 2015 modules is on Results-based Monitoring and Evaluation (first module) and Outcome and Impact Evaluation (second module).
Blog | 17th March, 2016
What does a non-experimental evaluation look like? How can we evaluate interventions implemented across multiple contexts, where constructing a control group is not feasible?
Webinar 6 on comparative case studies was presented by Dr. Delwyn Goodrick, with a Q&A session between the presenter and audience at the end. It took place on Thursday, 27th of August, with a repeat session on Monday, 31st of August.
Event | Course | 7th June, 2015 | United Kingdom | Paid
For government and its agencies, the European Commission, the Lottery, and charitable trusts, impact evaluation has become a cornerstone in understanding the accountability and effectiveness of programmes and initiatives. In an environment where resources for such activity are often scarce, those tasked with designing and managing evaluations find themselves confronted with confusing choices about 'the right' approaches and techniques. This course helps to demystify impact evaluation and helps those commissioning and conducting evaluations make effective choices.
Are quantitative or qualitative methods better for undertaking impact evaluations? What about true experiments? Is contribution analysis the new 'state of the art' in impact evaluation or should I just do a survey and use statistical methods to create comparison groups?
Determining one's plan for an impact evaluation occurs within the constraints of a specific context. Since method choices must always be context specific, debates in the professional literature about impact methods can at best provide only partial guidance to evaluation practitioners. The way to break out of this methods impasse is by focusing on the evidentiary requirements for assessing causal impacts.
Choosing appropriate designs and methods for impact evaluation - Department of Industry, Innovation and Science | Resource | Guide | 2016
The Department of Industry, Innovation and Science has commissioned this report to explore the challenges and document a range of possible approaches for the impact evaluations that the department conducts. Research for the project comprised interviews with key internal stakeholders to understand their needs, and a review of the literature on impact evaluation, especially in the industry, innovation and science context. That research led directly to the development of this guide. This research project is the first stage of a larger project to develop materials as the basis for building departmental capability in impact evaluation.
Event | Course | 1st August, 2016 to 12th August, 2016 | China | Paid
Event | Course | 11th July, 2016 to 13th July, 2016 | United Kingdom | Paid
Impact evaluation commissioners place increasing emphasis on assessing the contribution made by projects and programmes to changing people's lives, commonly referred to as a 'contribution claim'. It can be argued that current theory-based approaches fail to provide evaluators with guidance on the 'right' data to gather and on the quality of that data in relation to a particular contribution claim. This course aims to guide evaluators in collecting data that can help assess how strongly or weakly such data support contribution claims.