Evaluation of Humanitarian Action: A new page


ALNAP is delighted to launch the ‘Evaluation of Humanitarian Action’ theme page in partnership with BetterEvaluation.

We hope that this page will serve as a useful directory for evaluators and commissioners alike who are looking for guidance on navigating the choppy waters of Evaluation of Humanitarian Action (EHA). We invite you to explore!

As a bit of background, ALNAP is a global network of NGOs, UN agencies, members of the Red Cross/Red Crescent Movement, donors, academics and consultants dedicated to learning how to improve responses to humanitarian crises. One of our areas of strategic focus aims to improve the quality and accessibility of evaluative evidence and related research and learning activities. To do this, ALNAP carries out original research, facilitates learning between our Network Members, and supports a Community of Practice through events and knowledge sharing. Many of our objectives marry well with those of BetterEvaluation, and this collaboration seeks to bring our communities together so that practitioners can explore lessons learned across different objectives and methods of evaluation.

Why visit the new page?

To date, few platforms pool together useful and up-to-date resources for practitioners working on evaluations in complex humanitarian settings. So we have created a page fully dedicated to advice for anyone who is planning an EHA, has been involved in or used one, or is simply interested in the topic.

The page gives an overview of evaluative options and lists some key recommended resources, such as the Evaluation of Humanitarian Action Guide and the ALNAP Guide to Evaluation of Protection in Humanitarian Action. These can help to answer questions such as:

  • What evaluative options do I have in a humanitarian setting?

  • How should evaluation commissioners assess evaluability conditions in humanitarian contexts?

  • How should evaluation teams navigate the urgency and chaos of humanitarian emergencies?

  • How should we assess programme performance and results whilst taking account of the complexity of polarised perspectives often brought on by conflict?

Addressing evidential quality

In 2019, EHA practitioners are dealing with increasingly complex environments, which can make it more difficult to generate strong evidential quality. The most recent State of the Humanitarian System report (2018) found that although the average evidential quality of humanitarian evaluations is reasonable, quality varies considerably.

Unfortunately, it appears that the main drivers of quality are not related to the evaluation context. Rather, supply-side factors (such as the type of evaluation chosen, the commissioning organisation and the sector expertise of the evaluation team) play a strong role. Methods and design were also found to have a significant impact on quality: notably, the use of mixed methods instead of purely qualitative methods increased average quality and reduced quality variance. Although the importance of localising evaluation is increasingly recognised, organisations have so far struggled to genuinely engage evaluation commissioners and evaluators on the ground, who are ultimately responsible for increasing the quality and use of evaluations around the world. As a result, although the OECD DAC evaluation criteria have provided a comparable framework for structuring EHA, some criteria are covered less well than others, notably coverage, connectedness, coherence and coordination.

The variance in evidential quality raises challenges for conducting evaluation synthesis. The accumulation of humanitarian evaluations over the past 20 years should have built a strong evidence base for the system globally, and syntheses of these evaluations could provide digestible lessons learned for senior management, drawn from a wide range of programmes and interventions. However, the ALNAP Humanitarian Evaluation Community of Practice has found that many practitioners are still grappling with a range of challenges to evaluation synthesis, many of which reflect wider challenges to evaluation. The four biggest challenges are:

  1. making evaluations fit for synthesis - the quality of an evaluation synthesis depends heavily on evaluations that provide high-quality evidence, and that do so in a clear and consistent way that supports synthesis across different evaluation reports

  2. assessing the quality of evaluations - finding a clear and consistent approach to assessing evaluation quality that can work for each of the evaluations being synthesised

  3. analysis and writing-up of findings - finding appropriate wording to describe findings without going beyond the generalisability limits of the evaluation reports being synthesised, while not leaving out useful findings present in the original reports

  4. timeliness - how to judge when syntheses should happen: on a periodic basis, or linked to strategy-level decision processes?

We hope that this new page can stimulate learning and contribute to improved quality of evaluations across the sector. We feel that we have pulled together a really comprehensive and up-to-date list of resources, and we hope that you find something that is useful for you! We look forward to seeing the page put to good use as a directory for support in future evaluations.

We'd love to know your thoughts on the new page, and about any resources you think would be good to add. Please comment below, comment on the theme page, or fill out the BetterEvaluation contribute content form.
