Evaluating humanitarian action
Humanitarian action is any activity taken with the objective of saving lives, alleviating suffering, and maintaining human dignity during and after human-induced crises and disasters resulting from natural hazards. Humanitarian action also includes prevention and preparation for these. Humanitarian action includes both the provision of assistance (such as food, healthcare and shelter) and the protection of crisis-affected populations from violations of their rights (as defined by human rights law, international humanitarian law, and refugee law, see ALNAP, 2016).
Different types of evaluation are used in humanitarian action for different purposes, including rapid internal reviews to improve implementation in real time and discrete external evaluations intended to draw out lessons learned with the broader aim of improving policy and practice, and enhancing accountability. The evaluation of humanitarian action (EHA) mostly focuses on evaluating humanitarian projects or programmes funded by an individual donor, although some work has also evaluated the multiple efforts of several actors in response to the same crisis.
What is different about EHA?
Evaluation practitioners in humanitarian contexts need specific learning opportunities and support, as evaluative challenges are often accentuated in these difficult environments. Some of the most common challenges include:
- Constrained access: speaking to affected populations may be difficult, limited or impossible. Evaluators may be unable to visit projects or programmes and may therefore have to conduct much of the evaluation remotely.
- Lack of data: data may have been destroyed or rendered irrelevant by conflict or population movements. Baseline data may be hard to come by, particularly in protracted crises.
- Rapid and chaotic responses: projects or programmes may lack clear project plans or theories of change, because work was planned quickly and has had to evolve as the crisis changed.
- High staff turnover: humanitarian projects tend to be shorter than projects in other sectors, such as international development, and staff may not stay long within a response, making it challenging for evaluators to find key informants.
- Data protection and ethical considerations: it is difficult to design data collection and management tools that meet the ethical and analytical challenges raised by ‘do no harm’ principles and protection risk reduction.
Consequently, EHA practitioners continue to struggle to produce humanitarian evaluations of strong evidential quality (ALNAP, 2018). To help address this, several coordination networks and learning resources now exist for humanitarian evaluation practitioners, including ALNAP’s Community of Practice and its Evaluation of Humanitarian Action Guide.
How to do EHA
The importance of EHA, and discussion of its particular features and challenges, is increasingly recognised. Indeed, the Development Assistance Committee (DAC) of the Organisation for Economic Co-operation and Development (OECD) refined its original four principles for the evaluation process for DAC members into seven criteria adapted for the evaluation of complex emergencies (see the list below). These are typically used as the ‘industry standard’ for EHA.
EHA can use a range of evaluative tools, from After-Action Reviews to discrete impact evaluations. Examples of these are presented in the spectrum below, extracted from the ALNAP EHA Guide (note that on the BetterEvaluation website, all of what ALNAP refers to as ‘evaluative tools’ in this diagram would fall under the umbrella term ‘evaluation’, which covers the full range of approaches to monitoring and evaluation).
- The Evaluation of the European Union’s humanitarian interventions in India and Nepal, 2013-2017
Evaluating interventions across different countries and contexts is often challenging. This evaluation succeeds in grounding its assessments in a good understanding of the local contexts, whilst avoiding the pitfall of creating two separate evaluations under one name. It’s also worth checking out Case Study 3 on operating in complex and politically sensitive environments. The case study describes the policy framework, main challenges faced and how they were mitigated, all of which helps ground the findings, conclusions and recommendations in a good understanding of the broader context.
- The World Food Programme’s (WFP) Operation Evaluation Series and its Regional Syntheses Project, 2016-2017
This synthesis brings together findings from evaluations covering 15 different operations of quite varying types, durations, sizes and settings. The programmes covered by the synthesis targeted around 18 million beneficiaries a year, with a total planned value of USD 2.6 billion. By bringing together the findings across the full cohort of 2016–2017 operations evaluations, the Annual Synthesis provides another excellent example of how useful evaluation synthesis can be for an organisation.
- ALNAP-UNICEF Introduction to Evaluating Humanitarian Action
Developed in collaboration with EvalPartners and UNEG, this free course offers an overview of evaluation practice in humanitarian contexts, featuring concrete guidance, tips and insights from experienced practitioners.
- ALNAP Evaluation Report Library
Established at the end of the 1990s, ALNAP’s Evaluation Library offers the most complete collection to date of humanitarian evaluative materials: evaluation reports, evaluation methods and guidance material, and selected items on evaluation research.
- ALNAP Website
For more evaluation guidance specific to humanitarian action see the evaluation section of the ALNAP website.
ALNAP (2016) Evaluation of Humanitarian Action Guide. ALNAP Guide. London: ALNAP/ODI.
ALNAP (2018) The State of the Humanitarian System 2018. London: ALNAP/ODI.
Feature image: An aid worker collects health and (mal)nutrition data during a field visit in Mandera, northeastern Kenya. July 2009. Malnutrition is a big problem among children under 5 in this arid border town. Source: marlenefrancia / Shutterstock.com