What is the value of using mixed methods in impact evaluation? What methods and designs are appropriate for answering descriptive, causal and evaluative questions?
The second webinar in this series provides an overview of data collection and analysis methods in an impact evaluation, including how to choose methods to match different types of key evaluation questions, good data management, sampling options, and the value of using mixed methods. Select questions from the Q&A at the end of the webinar have been included.
In partnership with the UNICEF Office of Research – Innocenti, the RMIT University-based BetterEvaluation team worked with evaluation experts and the International Initiative for Impact Evaluation (3ie) to deliver a series of webinars on impact evaluation for UNICEF staff, covering topics pertinent to development professionals. These webinars follow on from a series of 13 methodological briefs on impact evaluation methods.
Listen to the Q&A
Do you have advice for involving stakeholders from the beginning of an evaluation to help with the uptake of the findings?
Is there a rule of thumb for an appropriate survey response rate? For example, do we need a 70% response rate from the total population to consider the data of good quality, or is it more complicated?
I understand that using mixed methods is best practice for triangulating data, but are there cases where this may not be necessary? For example, in impact evaluations that use modelling methods to estimate the likely impact of a policy.
About this webinar series
Throughout 2015, BetterEvaluation partnered with the UNICEF Office of Research – Innocenti to develop eight impact evaluation webinars for UNICEF staff. The objective was to provide an interactive capacity-building experience, customized to focus on UNICEF’s work and the unique circumstances of conducting impact evaluations of programs and policies in international development. The webinars were based on the Impact Evaluation Series – a user-friendly package of 13 methodological briefs and four animated videos – and presented by the briefs' authors. This page provides links not only to the eight webinars, but also to the practical questions and their answers which followed each webinar presentation.
The findings, interpretations and opinions expressed in the webinars are those of the presenters and do not necessarily reflect the policies or views of the United Nations Children's Fund (UNICEF). The presenters are independent impact evaluation experts who were commissioned by UNICEF to prepare the webinars, drawing on their own knowledge and judgement of key issues to provide advice. The questions and comments reflected in the Q&A materials are based on those submitted by UNICEF staff as part of this capacity-building initiative. They do not necessarily reflect the policies or views of UNICEF.
The webinars were commissioned by UNICEF, and UNICEF is entitled to all intellectual property and other proprietary rights that bear a direct relation to the contract under which this work was produced. The materials on this page are subject to a Creative Commons CC BY-NC (Attribution-NonCommercial) license and may be used and reproduced in line with the conditions of this license.
View all eight webinars in this series:
- Overview of Impact Evaluation - Presented by Patricia Rogers
- Overview: Data Collection and Analysis Methods in Impact Evaluation - Presented by Patricia Rogers
- Theory of Change - Presented by Patricia Rogers
- Overview: Strategies for causal attribution - Presented by Patricia Rogers
- Participatory Approaches in Impact Evaluation - Presented by Irene Guijt
- Randomized Controlled Trials (RCTs) - Presented by Howard White
- Comparative Case Studies - Presented by Delwyn Goodrick
- Quasi-experimental design and methods - Presented by Howard White