What is the main difference between quasi-experiments and RCTs? How can I measure impact when establishing a control group is not an option?
In the second-to-last webinar of the series, Dr. Howard White of the International Initiative for Impact Evaluation (3ie) covers the basics of quasi-experiments.
Quasi-experimental research designs, like experimental designs, test causal hypotheses. The main difference is that the former lacks random assignment to a control group and assignment to conditions is by means of self-selection, administrative selection or both. There are different methods for creating a valid comparison group, such as regression discontinuity design (RDD) and propensity score matching (PSM). The webinar will cover the most common methods and answer questions on the topic.
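To make the comparison-group idea concrete, here is a minimal sketch of propensity score matching (PSM) on invented, synthetic data. All variable names, the simulated covariate, and the "true" effect of 2.0 are assumptions for illustration, not an example from the webinar; a real analysis would use richer covariates, balance diagnostics, and an established statistics package.

```python
# Illustrative PSM sketch on synthetic data (not the webinar's own example).
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# A single covariate drives both programme take-up (self-selection) and the outcome.
x = rng.normal(0, 1, n)
p_true = 1 / (1 + np.exp(-0.8 * x))                      # true propensity to enrol
treated = rng.random(n) < p_true
outcome = 2.0 * treated + 1.5 * x + rng.normal(0, 1, n)  # simulated true effect = 2.0

# Step 1: estimate propensity scores with a hand-rolled logistic regression
# (gradient ascent on the log-likelihood; a library routine would normally do this).
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w += 0.1 * np.mean((treated - p) * x)
    b += 0.1 * np.mean(treated - p)
scores = 1 / (1 + np.exp(-(w * x + b)))

# Step 2: match each treated unit to the control with the nearest propensity
# score (1:1 nearest-neighbour matching, with replacement).
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matches = c_idx[np.abs(scores[c_idx][None, :] - scores[t_idx][:, None]).argmin(axis=1)]

# Step 3: average treatment effect on the treated (ATT) is the mean outcome
# gap across matched pairs; it should land near the simulated effect of 2.0.
att = np.mean(outcome[t_idx] - outcome[matches])
print(f"Estimated ATT: {att:.2f}")
```

A naive comparison of mean outcomes between treated and untreated units would overstate the effect here, because higher-`x` units are both more likely to enrol and have better outcomes; matching on the propensity score is what removes that selection bias.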
In partnership with the UNICEF Office of Research – Innocenti, the RMIT University-based BetterEvaluation team worked with evaluation experts and the International Initiative for Impact Evaluation (3ie) to deliver a series of webinars on impact evaluation for UNICEF staff on topics pertinent to development professionals. These webinars follow on from a series of 13 methodological briefs on impact evaluation methods. Like the methodological briefs, the webinars are best suited to UNICEF staff who commission or use the results of impact evaluations, but will likely be of interest to others. The objective is to provide an interactive capacity-building experience for UNICEF staff, covering common challenges from the field and answering practical questions.
Listen to Q&A
Do you have any good examples of using indirect methods to assess the impact of child grants on health or education outcomes? For example, in cases where there is no baseline, using e.g. instrumental variables (IV) or RDD against an eligibility threshold?
Are there any sample size requirements for the methods presented that would help prevent bias?
About this webinar series
Throughout 2015, BetterEvaluation partnered with the UNICEF Office of Research – Innocenti to develop eight impact evaluation webinars for UNICEF staff. The objective was to provide an interactive capacity-building experience, customized to focus on UNICEF’s work and the unique circumstances of conducting impact evaluations of programs and policies in international development. The webinars were based on the Impact Evaluation Series – a user-friendly package of 13 methodological briefs and four animated videos – and presented by the briefs' authors. This page provides links not only to the eight webinars, but also to the practical questions and their answers which followed each webinar presentation.
The findings, interpretations and opinions expressed in the webinars are those of the presenters and do not necessarily reflect the policies or views of the United Nations Children’s Fund (UNICEF). The presenters are independent impact evaluation experts who were commissioned by UNICEF to prepare the webinars and use their own knowledge and judgement on key issues and to provide advice. The questions and comments reflected in the Q & A materials are based on those submitted by UNICEF staff as part of this capacity-building initiative. They do not necessarily reflect the policies or views of UNICEF.
The webinars were commissioned by UNICEF, and UNICEF is entitled to all intellectual property and other proprietary rights which bear a direct relation to the contract under which this work was produced. The materials on this page are subject to a Creative Commons licence CC BY-NC (Attribution-NonCommercial) and may be used and reproduced in line with the conditions of this licence.
View all eight webinars in this series:
- Overview of Impact Evaluation - Presented by Patricia Rogers
- Overview: Data Collection and Analysis Methods in Impact Evaluation - Presented by Patricia Rogers
- Theory of Change - Presented by Patricia Rogers
- Overview: Strategies for causal attribution - Presented by Patricia Rogers
- Participatory Approaches in Impact Evaluation - Presented by Irene Guijt
- Randomized Controlled Trials (RCTs) - Presented by Howard White
- Comparative Case Studies - Presented by Delwyn Goodrick
- Quasi-experimental design and methods - Presented by Howard White