Evaluating Humanitarian Action

Humanitarian action is any activity undertaken with the objective of saving lives, alleviating suffering, and maintaining human dignity during and after human-induced crises and disasters resulting from natural hazards; it also includes prevention of, and preparation for, such events. Humanitarian action encompasses both the provision of assistance (such as food, healthcare and shelter) and the protection of crisis-affected populations from violations of their rights (as defined by human rights law, international humanitarian law and refugee law; see ALNAP, 2016). Different types of evaluation are used in humanitarian action for different purposes, from rapid internal reviews that improve implementation in real time to discrete external evaluations intended to draw out lessons learned, with the broader aims of improving policy and practice and enhancing accountability. The evaluation of humanitarian action (EHA) mostly focuses on individual projects or programmes funded by a single donor, although some evaluations have assessed the combined efforts of several actors responding to the same crisis.

What is different about EHA?

Evaluation practitioners in humanitarian contexts need specific learning opportunities and support, as evaluative challenges are often accentuated in these difficult environments. Some of the most common challenges include:

  1. Constrained access: speaking to affected populations may be challenging, limited or impossible, and the evaluator may be unable to visit projects or programmes, so much of the evaluation has to be conducted remotely.
  2. Lack of data: data may have been destroyed, or rendered irrelevant by conflict or population movements. Baseline data can be hard to come by, particularly in protracted crises.
  3. Rapid and chaotic responses: projects or programmes may lack clear project plans or Theories of Change, because work was planned quickly and had to evolve as the crisis changed.
  4. High staff turnover: humanitarian projects tend to be shorter than projects in other sectors, such as international development, and staff may not stay long within a response, which makes it difficult for evaluators to find key informants.
  5. Data protection and ethical considerations: it is difficult to design data collection and management tools that meet the ethical and analytical challenges raised by ‘do no harm’ principles and protection risk reduction.

Consequently, practitioners of EHA continue to struggle to produce humanitarian evaluations of strong evidential quality (ALNAP, 2018). To help address this, several coordination networks and learning resources now exist for humanitarian evaluation practitioners, including ALNAP’s Community of Practice and its Evaluation of Humanitarian Action Guide.

How to do EHA

The importance of EHA, and its particular features and challenges, are increasingly recognised. Indeed, the Development Assistance Committee (DAC) of the Organisation for Economic Co-operation and Development (OECD) refined its original four principles for the evaluation process into seven criteria adapted for the evaluation of complex emergencies (see the list below). These are typically used as the ‘industry standard’ for EHA:

  1. Relevance/appropriateness
  2. Connectedness
  3. Coherence
  4. Coverage
  5. Efficiency
  6. Effectiveness
  7. Impact

EHA can use a range of evaluative tools, from After-Action Reviews to discrete impact evaluations. Examples of these are presented in the spectrum below, extracted from the ALNAP EHA Guide (note that on the BetterEvaluation website, all of what ALNAP refers to as ‘evaluative tools’ in this diagram would fall under the umbrella term ‘evaluation’, which covers the full range of approaches to monitoring and evaluation).

ALNAP EHA spectrum of evaluative tools



The Evaluation of the European Union’s humanitarian interventions in India and Nepal, 2013-2017, is worth a look given the very different country contexts it covers. Evaluating interventions across different countries and contexts is often challenging. This evaluation succeeds in grounding its assessments in a good understanding of the local contexts, whilst avoiding the pitfall of creating two separate evaluations under one name. It’s also worth checking out Case Study 3 on operating in complex and politically sensitive environments. The case study describes the policy framework, main challenges faced and how they were mitigated, all of which helps ground the findings, conclusions and recommendations in a good understanding of the broader context.

The World Food Programme’s (WFP) Operation Evaluation Series and its Regional Syntheses Project, 2016-2017, bring together findings from evaluations covering 15 different operations of varying types, durations, sizes and settings. The programmes covered by the synthesis targeted around 18 million beneficiaries a year, with a total planned value of USD 2.6 billion. By bringing together the findings across the full cohort of 2016-2017 operations evaluations, the Annual Synthesis provides another excellent example of how useful evaluation synthesis can be for an organisation.



  • Evaluation of Humanitarian Action Guide (Cosgrave, Buchanan-Smith, and Warner, 2016): This comprehensive guide covers all steps of the evaluation process while providing real-life examples, practical tips, definitions and step-by-step advice. It is available in English, French and Spanish.

  • Evaluation of Protection in Humanitarian Action (Christoplos and Dillon, with Bonino, 2018): This companion to the EHA guide offers protection-specific insights for evaluations and evaluation commissioners across the humanitarian sector.

  • Evaluating Humanitarian Action using the OECD-DAC Criteria (Beck, 2006): This guide provides practical support on how to use the OECD Development Assistance Committee (OECD/DAC) criteria in evaluation of humanitarian action (EHA). It offers clear definitions for the OECD DAC criteria with explanations, issues to consider, and examples of good practice.

  • Real-time Evaluations of Humanitarian Action - an ALNAP Guide (Cosgrave, Ramalingam and Beck, 2009): This guide helps evaluation managers commission and oversee, and team leaders conduct, real-time evaluations (RTEs) of humanitarian operational responses. An RTE is a rapid participatory exercise carried out during the early stages of a humanitarian response; RTEs differ from other forms of evaluation in that their products are intended to be used in real time.



  • ALNAP Evaluation Report Library: Established at the end of the 1990s, ALNAP’s Evaluation Library offers the most complete collection of humanitarian evaluative materials to date – evaluation reports, evaluation methods and guidance material, and selected items on evaluation research.

  • ALNAP Website: For more evaluation guidance specific to humanitarian action see the evaluation section of the ALNAP website.

References


ALNAP (2016) Evaluation of Humanitarian Action Guide. ALNAP Guide. London: ALNAP/ODI.

ALNAP (2018) The State of the Humanitarian System 2018. London: ALNAP/ODI.

Feature image: An aid worker collects health and (mal)nutrition data during a field visit in Mandera, northeastern Kenya. July 2009. Malnutrition is a big problem among children under 5 in this arid border town. Source: marlenefrancia / Shutterstock.com 

Cite this page

Sundberg, A., Dillon, N., and Gili, M. (2019). Evaluating Humanitarian Action. BetterEvaluation. 


