Impact evaluation: UNICEF's briefs and videos
Nikola Balvin, Knowledge Management Specialist at the UNICEF Office of Research – Innocenti, presents new resources on impact evaluation and discusses how they can be used to support managers who commission impact evaluations.
The UNICEF Office of Research – Innocenti recently launched the Impact Evaluation Series: a package of 13 user-friendly, introductory methodological briefs and four animated videos on impact evaluation. UNICEF is particularly interested in impact evaluation because of its obligation to invest where it makes the most difference for children, and its recognition that impact evaluation is an integral component of an evidence-based decision maker’s toolkit. While the series was designed primarily for UNICEF staff members who commission and utilize impact evaluations in international development, it is relevant and useful to a broader range of users.
The process for developing the briefs
To develop the briefs, UNICEF worked in collaboration with RMIT University, BetterEvaluation and the International Initiative for Impact Evaluation (3ie). Lead authors were BetterEvaluation’s Patricia Rogers and Greet Peersman, and the then Executive Director of 3ie, Howard White. They worked with seven other evaluators with specialised expertise – E. Jane Davidson, Dugan Fraser, Thomas de Hoop, Delwyn Goodrick, Irene Guijt, Bron McDonald, and Shagun Sabarwal.
The series benefitted from the guidance of an Advisory Board comprising UNICEF researchers and evaluators from both the field and headquarters. As well as reviewing the technical content of the briefs and providing relevant examples from UNICEF’s work, the Advisory Board helped to shape the conceptual framework around which the topics of the briefs were structured.
The building blocks of impact evaluation
A good impact evaluation toolkit for policy and programme interventions in international development needs to recognize the realities of limited resources, capacity, time, and data, and the complex political contexts we work in. It needs to cover many approaches to impact evaluation, including quasi-experimental and non-experimental designs.
One of the important issues we wrestled with was how to help impact evaluation managers think through the various aspects of an impact evaluation, rather than focusing only on a research design or a data collection method. To do this we developed the notion of a “wall”, which organises these issues in terms of three rows of bricks.
At the base of the “wall” are foundational “building blocks” which are essential to consider in any impact evaluation – here, there are briefs on theory of change, evaluative reasoning, evaluative criteria, and participatory approaches. The next row looks at design options for impact evaluation to address causal attribution (experimental, quasi-experimental and non-experimental) – here, there are briefs on randomised controlled trials, quasi-experimental designs, and comparative case studies. The third row has methods of data collection and analysis – here, there are briefs on measures of child well-being, interviewing and modelling.
While the bottom row outlines most of the foundational components of impact evaluation, rows 2 and 3 are not exhaustive. For example, there are many other data collection and analysis methods that can be used in impact evaluation besides the two that are covered. The topics of interviewing and modelling were chosen because of their relevance to UNICEF’s work rather than any priority or importance they may hold in impact evaluation more generally.
Each row is introduced by an overview brief (i.e. briefs 1, 6, and 10) which summarizes the key concepts and surveys other designs and methods that could have had an entire brief dedicated to them but were not chosen as a priority in this round of writing (such as different types of numeric or textual analysis).
Recognizing that a 12-page brief can only cover so much, each document ends with key readings and links where the reader can find more information on the topic. A glossary of terms is also provided.
The three overview briefs and the brief on RCTs are also complemented with a short animated video. The videos capture the key content of the briefs and explain it with the help of drawings and voiceover. Although the role of the videos is to visualise and support the main points in the briefs, they can be used as stand-alone tools.
The three overview briefs and all four animated videos have been translated into French and Spanish to increase their accessibility and usage. We’re currently monitoring demand for the French and Spanish versions and will revisit the need to translate the remaining 10 briefs in mid-2015.
Using the videos
The Office of Research – Innocenti has used the videos at the start of impact evaluation workshops to set the scene and get participants thinking about the topic and how it relates to their work. Participants often mentioned images from the videos throughout the workshop, which suggests that the visuals stuck in their minds and managed to communicate some of the more abstract concepts (e.g. random assignment or stratified sampling). We hope that the videos motivate viewers to go the extra mile and read the methodological briefs and other materials recommended in them.
The entire series is available on the UNICEF Office of Research – Innocenti interactive website. We look forward to celebrating the International Year of Evaluation by building on this work with a webinar series by the authors throughout 2015. More information about upcoming impact evaluation materials from this collaboration will be available here on the BetterEvaluation site.
*The views expressed in this blog are those of the author and are published to stimulate further dialogue on the role of impact evaluation in UNICEF’s work. They do not necessarily reflect the policies or views of UNICEF. The text has not been edited to official publication standards, and UNICEF accepts no responsibility for errors.