An issue of increasing interest in evaluation, especially development evaluation, is whether and how we might apply ideas and methods from complexity science to evaluation.
Complexity ideas and methods have important applications for how we think about programs and policies, how we collect and analyse data, and how we report findings and support their use.
Since 2014 this has been one of the priority themes that BetterEvaluation has focused on.
Complexity is sometimes dismissed as a ‘trendy’ term that is used to avoid accountability and planning. But discussions of complexity often raise two important issues:
- Multiple components (sometimes labelled ‘complicated’)
- Emergence (sometimes labelled ‘complex’)
Interventions can have some simple aspects, some complicated aspects and some complex aspects, and it is more useful to identify these than to classify a whole intervention as complex.
Many evaluations have to deal with programs with multiple components, multiple levels of implementation, multiple implementing agencies with multiple agendas, and long causal chains with many intermediate outcomes, or outcomes that can only be achieved through a 'causal package' involving multiple interventions or favourable contexts.
In these situations, evaluations need to be based on a logic model and a data collection and analysis plan that provide information about the different components, which must all work effectively and together; about processes that work differently in different contexts; and about processes that only work in combination with other programs or favourable environments. It is essential to report on these in terms of ‘what works for whom in what contexts’.
In some frameworks (especially two classic papers by Glouberman and Zimmerman (2002) (see http://albordedelcaos.com/2011/09/27/conociendo-el-blog/ for a Spanish-language version of these ideas) and Kurtz and Snowden (2003)), this aspect is referred to as ‘complicated’ to distinguish it from emergence.
Many evaluations have to deal with programs that involve emergent and responsive strategies and causal processes which cannot be completely controlled or predicted in advance. While there is an overall goal in mind, the details of the program will unfold and change over time as different people become engaged and as it responds to new challenges and opportunities. Projects that focus on community development or leadership development are particularly likely to have these features.
In these situations, evaluations have to be able to identify and document emergent partners, strategies and outcomes, rather than only paying attention to the objectives and targets identified at the beginning. Real-time evaluation will be needed to answer the question “What is working?” and to inform ongoing adaptation and learning. Effective evaluation will not involve building a detailed model of how the intervention works and calculating the optimal mix of implementation activities, because what is needed, what is possible, and what will be optimal will always be changing.
The paper details each of the 10 concepts of complexity science, using real world examples where possible. It then examines the implications of each concept for those working in the aid world. Here, we list the 10 concepts for reference, using the next section of this summary to suggest some overall implications of using the concepts for work in international development and humanitarian spheres.
USAID’s Office of Learning, Evaluation and Research (LER) has produced a Discussion Note: Complexity-Aware Monitoring, intended for those seeking cutting-edge solutions to monitoring complex aspects of strategies and projects.
This resource looks at how complexity science could be used in health systems, which are characterised by nonlinear dynamics and emergent properties arising from diverse populations of individuals interacting with each other, and which are capable of undergoing spontaneous self-organisation.