UNICEF Office of Research – Innocenti

"The Office of Research – Innocenti is UNICEF’s dedicated research centre. Its core mandate is to undertake cutting-edge, policy-relevant research that equips the organization and the wider global community to deliver results for children. To achieve its mandate UNICEF Innocenti must work closely with all parts of its parent organization as well as a wide array of external academic and research institutions." (UNICEF-IRC, How we work)

Content supported by this partner

  • This brief provides an overview of the different elements of impact evaluation and of the options available to UNICEF programme managers for each of these elements, in relation to the planning and management stages of an impact evaluation. These stages are as follows, although their order may sometimes vary or be revised:
  • This guide, written by Delwyn Goodrick for UNICEF, focuses on the use of comparative case studies in impact evaluation. The paper briefly discusses their use, outlines when it is appropriate to use them, and then provides step-by-step guidance on their use for an impact evaluation.
  • This guide, written by Howard White and Shagun Sabarwal for UNICEF, focuses on the development and the selection of measures of child well-being for impact evaluations. The paper also provides an overview of some of the ethical issues and practical limitations that may be present and outlines an example of indicators that have been used in UNICEF studies.
  • This guide, written by Greet Peersman for UNICEF, looks at the use of evaluative criteria in impact evaluation. While evaluations use a combination of facts and values in order to judge the worth of an intervention, evaluative criteria specify the values that will be used.
  • This guide, written by Jane Davidson for UNICEF, looks at the use of evaluative reasoning in impact evaluation. Evaluative reasoning synthesises the answers to lower- and mid-level evaluation questions into defensible arguments that directly answer the key evaluation questions.
  • This guide, written by Bronwen McDonald and Patricia Rogers for UNICEF, looks at interviews, with a particular focus on their use in impact evaluation. The paper focuses on how to conduct an interview and provides detailed guidance on approaches to different kinds of interviews. It also provides an overview of ethical issues that may be faced when conducting interviews and provides examples of good practices and challenges in this area.
  • This guide, written by Howard White and Shagun Sabarwal for UNICEF, focuses on the use of modelling in impact evaluations. The paper looks at when it is appropriate to use this method and then provides step-by-step guidance on its use. It also provides some examples of models and looks at some of the ethical implications of their use.
  • This paper, written by Patricia Rogers for UNICEF, outlines the basic ideas and principles of impact evaluation. It includes a discussion of the different elements and options for the different stages of conducting an impact evaluation.
  • This guide, written by Greet Peersman for UNICEF, looks at the different types of data collection and analysis methods that can be used for impact evaluation. The paper describes how to plan for data collection and analysis and outlines the importance of good data management practices. The guide also focuses on specific issues to ensure the quality of the data collected.
  • This guide, written by Patricia Rogers for UNICEF, looks at the process of causal attribution with a particular emphasis on its use in impact evaluation. The guide specifically focuses on the three broad strategies for causal attribution: estimating the counterfactual; checking the consistency of evidence for the causal relationships made explicit in the theory of change; and ruling out alternative explanations, through a logical, evidence-based process.
  • This guide, written by Irene Guijt for UNICEF, looks at the use of participatory approaches in impact evaluation. Using participatory approaches means involving stakeholders, particularly those affected by the intervention, in the evaluation process. This includes involvement in the design, data collection, analysis, reporting, and management of the study.
  • This brief provides an overview of the different elements of impact evaluation and of the options available to UNICEF programme managers for each of these elements, and sets out the steps required to plan and manage an impact evaluation. These steps are as follows, although their order and content may vary:
  • Impact evaluations should not be limited to determining the magnitude of effects (i.e. the average impact), but should also identify who has benefited from these programmes or policies, and how.
  • One of the essential elements of an impact evaluation is that it does not simply measure or describe the changes that have occurred, but also seeks to understand the role played by particular interventions (programmes or policies) in producing these changes.
  • This guide, written by Howard White and Shagun Sabarwal for UNICEF, looks at the use of quasi-experimental designs and methods in impact evaluation. The paper provides a brief overview, then outlines when it is appropriate to use this approach and some of the ethical and practical limitations of its use.
  • This video guide, produced by UNICEF, summarises the key features of RCTs with a particular emphasis on their use in impact evaluation. It looks at how RCTs test the extent to which specific, planned impacts are being achieved, and at the random assignment of units (e.g. people, schools or villages) to the intervention or control group. It also demonstrates how RCTs provide a powerful response to questions of causality, helping evaluators and programme implementers know that what is being achieved is a result of the intervention and not anything else.
  • One of the essential aspects of an impact evaluation is that it not only measures or describes changes that have occurred, but also seeks to understand the role of particular interventions (i.e. programmes or policies) in producing these changes. This process is often referred to as causal attribution, causal contribution or causal inference. This brief provides an overview of the different ways of examining causal attribution, using a combination of research design and particular data collection and analysis strategies.
  • Impact evaluations should go beyond simply assessing the magnitude of effects (the average impact) to determine for whom, and in what ways, a programme or policy has been successful.
  • This guide, written by Patricia Rogers for UNICEF, looks at the use of theory of change in an impact evaluation. It demonstrates how it can be useful for identifying the data that needs to be collected and how it should be analysed. It also highlights its use as a framework for reporting.
  • In development, government and philanthropy, there is increasing recognition of the potential value of impact evaluation and specific support to develop capacity for both commissioning and conducting impact evaluation, including the use of its findings. 
  • This series presents overviews of impact evaluation and its key strategies and methods. Methodological briefs and webinars cover the essential building blocks of impact evaluation and evaluation designs, and specific data collection and analysis methods.
  • What does a non-experimental evaluation look like? How can we evaluate interventions implemented across multiple contexts, where constructing a control group is not feasible? Comparative case studies can be used to answer questions about causal attribution and contribution when it is not feasible or desirable to create a comparison group or control group. They are particularly useful for understanding and explaining how context influences the success of an intervention and how better to tailor the intervention to the specific context to achieve intended outcomes.
  • What is the value of using mixed methods in impact evaluation? What methods and designs are appropriate for answering descriptive, causal and evaluative questions? The second webinar in this series provides an overview of data collection and analysis methods in an impact evaluation, including how to choose methods to match different types of key evaluation questions, good data management, sampling methods, and the value of using mixed methods. Select questions from the Q&A at the end of the webinar have been included.
  • We often talk about the importance of knowing the impact of our work, but how is impact measured in practice? What are the ten basic things about impact evaluation that a UNICEF officer should know? If these questions caught your eye, then you might be interested in viewing some or all of the eight impact evaluation webinars, organized by the Office of Research – Innocenti, and presented by evaluation experts from RMIT University, BetterEvaluation and the International Initiative for Impact Evaluation (3ie) throughout 2015.
  • What is causal attribution? Do you need a counterfactual to determine if something has caused a change? Professor Patricia Rogers provides an overview of how to determine causal attribution in impact evaluations. She covers three broad strategies for causal attribution:
  • Who should be involved in an impact evaluation, why and how? The underlying rationale for choosing a participatory approach to impact evaluation can be either pragmatic or ethical, or a combination of the two. In the final webinar of this series, Irene Guijt discusses taking a participatory approach in impact evaluation.
  • What is the main difference between quasi-experiments and RCTs? How can I measure impact when establishing a control group is not an option? Dr Howard White of the International Initiative for Impact Evaluation (3ie) covers the basics of quasi-experiments.
  • What are the key features of an RCT? Are RCTs really the gold standard? What ethical and practical issues do I need to consider before deciding to do an RCT? The fifth webinar in this series is presented by international evaluation expert and former Executive Director of 3ie, Dr Howard White. Dr White provides an introduction to RCTs and covers different types of RCT design, the necessary conditions for successfully evaluating programmes using an RCT, ethical and practical considerations, and examples of good and bad practices.
  • What is a Theory of Change? How is it different from a logframe? Why is it such an important part of an impact evaluation? In the April webinar, Patricia Rogers, Professor in Public Sector Evaluation at RMIT University, discusses the different ways of developing and representing a theory of change (ToC) in an impact evaluation. She stresses the importance of reviewing and revising the ToC to guide data collection, analysis and reporting.
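The core mechanics of the randomised designs described above can be illustrated with a minimal sketch. Using entirely hypothetical outcome data (the village names and attendance rates below are invented for illustration), random assignment of units to treatment and control groups allows the average treatment effect to be estimated as a simple difference in group means:

```python
import random
import statistics

def assign_randomly(units, seed=0):
    """Randomly split units into treatment and control groups --
    the defining step of an RCT."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def average_treatment_effect(treated_outcomes, control_outcomes):
    """Estimate the average treatment effect (ATE) as the difference
    in mean outcomes between the two groups."""
    return statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)

# Hypothetical example: six villages, outcome = school attendance rate (%)
villages = ["A", "B", "C", "D", "E", "F"]
treatment, control = assign_randomly(villages)
outcomes = {"A": 78, "B": 74, "C": 81, "D": 69, "E": 72, "F": 75}
ate = average_treatment_effect(
    [outcomes[v] for v in treatment],
    [outcomes[v] for v in control],
)
```

Because assignment is random, the control group approximates the counterfactual, so the difference in means can be read as the effect of the intervention rather than of pre-existing differences between groups.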
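Where random assignment is not feasible, the quasi-experimental methods discussed in the guides and webinars above estimate the counterfactual in other ways. One common technique is difference-in-differences, sketched here with hypothetical before/after outcome figures (the numbers are invented for illustration):

```python
def difference_in_differences(treated_before, treated_after,
                              comparison_before, comparison_after):
    """Estimate impact as the treated group's change over time minus the
    comparison group's change, using the comparison group's trend as the
    counterfactual."""
    treated_change = treated_after - treated_before
    comparison_change = comparison_after - comparison_before
    return treated_change - comparison_change

# Hypothetical example: mean attendance rates before and after a programme.
# Treated group rises by 8 points, comparison group by 3, so the estimated
# impact is 5 points.
impact = difference_in_differences(70.0, 78.0, 71.0, 74.0)  # → 5.0
```

The estimate is only as good as the "parallel trends" assumption: absent the intervention, the treated group would have followed the same trajectory as the comparison group.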